CN107094262B - Wireless media interaction method, system and server - Google Patents


Info

Publication number
CN107094262B
Authority
CN
China
Prior art keywords
information
data stream
content
audio
content data
Prior art date
Legal status
Active
Application number
CN201610088055.1A
Other languages
Chinese (zh)
Other versions
CN107094262A (en)
Inventor
盛亚婷
王天尧
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN202110008610.6A (CN112752144B)
Priority to CN201610088055.1A (CN107094262B)
Publication of CN107094262A
Application granted
Publication of CN107094262B


Classifications

    • H ELECTRICITY; H04N Pictorial communication, e.g. television; H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/4394 Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • H04N 21/2335 Processing of audio elementary streams involving reformatting operations of audio signals, e.g. by converting from one coding standard to another
    • H04N 21/4122 Peripherals receiving signals from specially adapted client devices: additional display device, e.g. video projector
    • H04N 21/4126 Peripherals receiving signals from specially adapted client devices: the peripheral being portable, e.g. PDAs or mobile phones
    • H04N 21/43637 Adapting the video or multiplex stream to a specific local network involving a wireless protocol, e.g. Bluetooth, RF or wireless LAN [IEEE 802.11]
    • H04N 21/8126 Monomedia components involving additional data, e.g. news, sports, stocks, weather forecasts

Abstract

Embodiments of the present application disclose a wireless media interaction method, system, and server. The method comprises the following steps: a program publishing platform sends a first content data stream to a server; the server acquires first audio data or generates first sound wave data; the server determines first characteristic information corresponding to the first content data stream; the server sends the first content data stream to a first terminal for playing; a second terminal records the audio information played by the first terminal and sends the recorded audio information, or second characteristic information determined from it, to the server; the server matches the second characteristic information with the first characteristic information; when the two match, the server pushes second content information associated with the first content data stream to the second terminal. The wireless media interaction method, system, and server enable a user to interact with media conveniently and reliably.

Description

Wireless media interaction method, system and server
Technical Field
The present application relates to the field of media information interaction technologies, and in particular, to a wireless media interaction method, system, and server.
Background
With the continuous progress of internet technology, viewers are no longer satisfied with purely watching television; they want to participate in media programs and interact with them.
Existing media interaction methods generally work as follows: while a media program is shown, a television station logo or a two-dimensional code is displayed on the screen. A viewer can scan the logo or code with a client such as a mobile phone or tablet computer and, based on the scanned logo or code, be linked to the interaction platform corresponding to the currently playing media program, thereby interacting with it.
In the process of implementing the present application, the inventors found at least the following problems in the prior art: when a user scans a television station logo or two-dimensional code with a client, the scanning (refresh) frequency of the television screen may cause recognition to fail; the recognition rate is low, and the user often needs to scan several times before succeeding. Moreover, to improve the recognition success rate, the client must be kept close to the television screen, which is inconvenient for the user.
Disclosure of Invention
The embodiments of the present application aim to provide a wireless media interaction method, system, and server that ensure a user can conveniently and successfully interact with media.
To solve the above technical problem, embodiments of the present application provide a wireless media interaction method, system, and server, which are implemented as follows:
a method of wireless media interaction, comprising:
acquiring first audio data in a first content data stream; the first content data stream comprises audio data of a media program; the first audio data has first characteristic information;
sending the first content data stream to a first terminal for playing;
receiving audio information recorded by a second terminal, and determining second characteristic information according to the audio information; or receiving second characteristic information determined by the second terminal according to the audio information;
matching the second characteristic information with the first characteristic information;
and when the second characteristic information is matched with the first characteristic information, pushing second content information pre-associated with the first content data stream to the second terminal.
A method of wireless media interaction, comprising:
generating first sound wave data corresponding to a first content data stream; the first content data stream comprises audio data of a media program; the first sound wave data has first characteristic information;
sending the first content data stream mixed with the first sound wave data to a first terminal for playing;
receiving audio information recorded by a second terminal, and determining second characteristic information according to the audio information; or receiving second characteristic information determined by the second terminal according to the audio information;
matching the second characteristic information with the first characteristic information;
and when the second characteristic information is matched with the first characteristic information, pushing second content information pre-associated with the first content data stream to the second terminal.
A method of wireless media interaction, comprising:
generating first sound wave data corresponding to a first content data stream; the first content data stream comprises audio data of a media program;
sending the first content data stream mixed with the first sound wave data to a first terminal for playing; the first sound wave data is used by a second terminal to start a media interaction function.
A wireless media interaction system, comprising: a program publishing platform, a server, a first terminal, and a second terminal; wherein:
the program publishing platform is used for sending a first content data stream to the server;
the server is used for acquiring first audio data in the first content data stream; the first audio data has first characteristic information; the server sends the first content data stream to the first terminal for playing;
the first terminal is used for receiving and playing a first content data stream sent by the server;
the second terminal is used for recording the audio information played by the first terminal and sending the recorded audio information to the server; or for recording the audio information played by the first terminal, determining second characteristic information according to the audio information, and sending the second characteristic information to the server;
the server is further configured to receive the recorded audio information sent by the second terminal and determine second characteristic information according to the audio information, or to receive the second characteristic information sent by the second terminal; to match the second characteristic information with the first characteristic information; and, when the second characteristic information matches the first characteristic information, to push second content information pre-associated with the first content data stream to the second terminal.
A wireless media interaction system, comprising: a program publishing platform, a server, a first terminal, and a second terminal; wherein:
the program publishing platform is used for sending a first content data stream to the server;
the server is used for generating first sound wave data corresponding to the first content data stream; the first acoustic data has first characteristic information; the server sends a first content data stream mixed with the first sound wave data to a first terminal for playing;
the first terminal is used for receiving and playing a first content data stream which is sent by the server and mixed with the first sound wave data;
the second terminal is used for recording the audio information played by the first terminal and sending the recorded audio information to the server; or for recording the audio information played by the first terminal, determining second characteristic information according to the audio information, and sending the second characteristic information to the server;
the server is further configured to receive the recorded audio information sent by the second terminal and determine second characteristic information according to the audio information, or to receive the second characteristic information sent by the second terminal; to match the second characteristic information with the first characteristic information; and, when the second characteristic information matches the first characteristic information, to push second content information pre-associated with the first content data stream to the second terminal.
A wireless media interaction server, comprising: an audio/sound wave data acquisition module, a first characteristic information determination module, a first content data stream sending module, an information receiving module, a characteristic information matching module, and a second content information pushing module; wherein:
the audio/sound wave data acquisition module is used for acquiring audio data in the first content data stream; the first content data stream comprises audio data of a media program;
the first characteristic information determining module is used for determining first characteristic information corresponding to a first content data stream according to first audio data in the audio/sound wave data acquiring module;
the first content data stream sending module is used for sending the first content data stream to a first terminal for playing;
the information receiving module is used for receiving audio information recorded by a second terminal and determining second characteristic information according to the audio information, or for receiving second characteristic information determined by the second terminal according to the audio information;
the characteristic information matching module is used for matching the second characteristic information with the first characteristic information;
and the second content information pushing module is used for pushing second content information pre-associated with the first content data stream to the second terminal when second characteristic information in the characteristic information matching module is matched with the first characteristic information.
A wireless media interaction server, comprising: an audio/sound wave data acquisition module, a first characteristic information determination module, a first content data stream sending module, an information receiving module, a characteristic information matching module, and a second content information pushing module; wherein:
the audio/sound wave data acquisition module is used for generating first sound wave data corresponding to the first content data stream; the first content data stream comprises audio data of a media program;
the first characteristic information determining module is used for determining first characteristic information corresponding to a first content data stream according to the first sound wave data in the audio/sound wave data acquiring module;
the first content data stream sending module is used for sending the first content data stream mixed with the first sound wave data to a first terminal for playing;
the information receiving module is used for receiving audio information recorded by a second terminal and determining second characteristic information according to the audio information, or for receiving second characteristic information determined by the second terminal according to the audio information;
the characteristic information matching module is used for matching the second characteristic information with the first characteristic information;
and the second content information pushing module is used for pushing second content information pre-associated with the first content data stream to the second terminal when second characteristic information in the characteristic information matching module is matched with the first characteristic information.
A computer storage medium having stored thereon computer instructions which, when executed by a processor, perform the steps of:
acquiring first audio data in a first content data stream; the first content data stream comprises audio data of a media program; the first audio data has first characteristic information;
sending the first content data stream to a first terminal for playing;
receiving audio information recorded by a second terminal, and determining second characteristic information according to the audio information; or receiving second characteristic information determined by the second terminal according to the audio information;
matching the second characteristic information with the first characteristic information;
and when the second characteristic information is matched with the first characteristic information, pushing second content information pre-associated with the first content data stream to the second terminal.
A computer storage medium having stored thereon computer instructions which, when executed by a processor, perform the steps of:
generating first sound wave data corresponding to a first content data stream; the first content data stream comprises audio data of a media program; the first sound wave data has first characteristic information;
sending a first content data stream mixed with the first sound wave data to a first terminal for playing;
receiving audio information recorded by a second terminal, and determining second characteristic information according to the audio information; or receiving second characteristic information determined by the second terminal according to the audio information;
matching the second characteristic information with the first characteristic information;
and when the second characteristic information is matched with the first characteristic information, pushing second content information pre-associated with the first content data stream to the second terminal.
According to the technical solutions provided in the embodiments of the present application, a user's second terminal can connect to a media program through audio data or sound wave data, realizing wireless media interaction. The volume at which the media program is played is sufficient for the client to record the audio data related to the interactive content, so the user's success rate in connecting to the media interactive content is improved and media interaction can be achieved reliably. Moreover, the method provided in the embodiments only requires the second terminal to be within recording range of the first terminal playing the media, which is convenient for the user. Finally, because the sound wave data is directly superimposed on the first content data stream, no lossy processing such as compression of the first content data stream is needed, and playback of the media program is not audibly or visually affected.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments described in the present application; for those skilled in the art, other drawings can be obtained from these drawings without any creative effort.
FIG. 1 is a block diagram of an embodiment of a wireless media interaction system according to the present application;
FIG. 2 is a flow chart of one embodiment of a method for wireless media interaction according to the present application;
FIG. 3 is a flowchart of an embodiment of a server-based wireless media interaction method of the present application;
fig. 4 is a block diagram of one embodiment of a server in the wireless media interaction system of the present application.
Detailed Description
The embodiment of the application provides a wireless media interaction method, a wireless media interaction system and a server.
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Fig. 1 is a schematic diagram illustrating an embodiment of a wireless media interaction system according to the present application. Fig. 1 shows a data connection relationship between devices in the wireless media interactive system of the present application.
Fig. 2 is a flow chart of an embodiment of a wireless media interaction method of the present application. As shown in fig. 2, the wireless media interaction method may include:
s101: the program distribution platform sends a first content data stream to a server.
The first content data stream may be used to describe media program content. The first content data stream may comprise audio data of the media program. The first content data stream may further include picture data of the media program and/or program information of the media program.
The program information may include: program identification, program name, and/or program air time.
The program distribution platform may send a first content data stream to a server.
S102: the server acquires first audio data in the first content data stream or first sound wave data corresponding to the first content data generated by a sound wave encoder.
The server obtaining the first audio data in the first content data stream may include: the server acquires all or part of the audio data in the first content data stream and takes the acquired content as the first audio data. For example, the audio data in the first content data stream that reminds the user to participate in the media program interaction may be taken as the first audio data.
The first sound wave data corresponding to the first content data stream may be generated by a sound wave encoder based on program information in the first content data stream. For example, the first sound wave data may be generated by the sound wave encoder from the program identification or the program name.

Further, the frequency of the sound wave data may lie in a range inaudible to the human ear.
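To make the sound-wave encoding concrete, here is a minimal, hedged sketch (not the patent's actual encoder): a program identification is modulated as near-ultrasonic FSK tones. The carrier frequencies, symbol duration, amplitude, and bit width below are all illustrative assumptions.

```python
import numpy as np

SAMPLE_RATE = 44100           # Hz (assumed)
F0, F1 = 18500.0, 19500.0     # near-ultrasonic carriers for bits 0/1 (assumed)
SYMBOL_SEC = 0.05             # duration of one bit symbol (assumed)

def encode_program_id(program_id: int, n_bits: int = 16) -> np.ndarray:
    """Modulate a program identification as a low-amplitude FSK tone burst."""
    n = int(SAMPLE_RATE * SYMBOL_SEC)
    t = np.arange(n) / SAMPLE_RATE
    bits = [(program_id >> i) & 1 for i in range(n_bits)]
    tones = [np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits]
    return 0.05 * np.concatenate(tones)  # quiet enough to sit under program audio

def decode_program_id(wave: np.ndarray, n_bits: int = 16) -> int:
    """Recover the identification by correlating each symbol against both carriers."""
    n = int(SAMPLE_RATE * SYMBOL_SEC)
    t = np.arange(n) / SAMPLE_RATE
    ref0 = np.exp(2j * np.pi * F0 * t)
    ref1 = np.exp(2j * np.pi * F1 * t)
    program_id = 0
    for i in range(n_bits):
        chunk = wave[i * n:(i + 1) * n]
        # the carrier with the larger correlation magnitude wins this bit
        if abs(np.dot(chunk, ref1)) > abs(np.dot(chunk, ref0)):
            program_id |= 1 << i
    return program_id
```

The decoder doubles as a sketch of how the second terminal could extract the second characteristic information from a recording of such a burst.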
S103: the server determines first characteristic information corresponding to a first content data stream from the first audio data or the first sound wave data.
The server may determine first characteristic information corresponding to a first content data stream from the first audio data or the first sound wave data.
Specifically: when the server acquires the first audio data, audio feature information of the first audio data can be extracted according to a first extraction rule and used as the first characteristic information. Alternatively, when the server generates the first sound wave data, the information used to generate the sound wave data may serve as the first characteristic information, or the sound wave data may be decoded and the decoded information used as the first characteristic information. For example, the program identification from which the sound wave data was generated may serve as the first characteristic information.
Extracting the audio feature information of the first audio data according to the first extraction rule may specifically include: performing a Fourier transform on the first audio data frame by frame; extracting frequency dense points in the frequency domain from each frame of the transformed data; forming a cross vector from the frequency dense points of two adjacent frames; and taking the cross vector as the first characteristic information.
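The extraction rule above can be sketched as follows. This is an illustrative assumption of one possible implementation: a frame-by-frame FFT, the strongest ("densest") frequency bins per frame, and cross vectors formed by joining the peak bins of adjacent frames. The frame size and peak count are arbitrary choices.

```python
import numpy as np

FRAME = 1024  # samples per analysis frame (assumed)

def fingerprint(audio: np.ndarray, peaks_per_frame: int = 3) -> list:
    """Frame-by-frame Fourier transform; keep the highest-magnitude frequency
    bins of each frame and join the peaks of adjacent frames into cross
    vectors (hashable tuples) serving as the characteristic information."""
    n_frames = len(audio) // FRAME
    peak_sets = []
    for i in range(n_frames):
        frame = audio[i * FRAME:(i + 1) * FRAME]
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(FRAME)))
        # frequency "dense points": indices of the strongest bins in this frame
        peaks = tuple(sorted(int(b) for b in np.argsort(spectrum)[-peaks_per_frame:]))
        peak_sets.append(peaks)
    # cross vector: peaks of frame i joined with the peaks of frame i + 1
    return [peak_sets[i] + peak_sets[i + 1] for i in range(n_frames - 1)]
```

The same function could be reused on the second terminal's recording to derive the second characteristic information, since the patent applies the same extraction rule on both sides.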
After determining the first characteristic information, a corresponding relationship between the first content data stream and the first characteristic information may be established. It should be noted that the first content data stream and the first feature information may be in a one-to-one correspondence relationship or a one-to-many correspondence relationship.
S104: and the server sends the first content data stream or the first content data stream mixed with the first sound wave data to a first terminal for playing.
The first terminal may be a playback device for playing back the first content data stream. For example, the device can be a television, a tablet computer, a mobile phone and the like.
The server may send the first content data stream to the first terminal for playing. When the server has generated first sound wave data corresponding to the first content data stream with the sound wave encoder, the server may instead send the first content data stream mixed with the first sound wave data to the first terminal for playing.
The first content data stream mixed with the first sound wave data may be: the first content data stream obtained after superimposing the first sound wave data onto it.
Further, the first sound wave data may be mixed into the first content data stream once every preset time interval; alternatively, the first sound wave data may be superimposed onto the first audio data of the first content data stream.
When the first sound wave data is mixed into the first content data stream, it is simply superimposed onto the stream; no data processing such as compression or re-encoding of the first content data stream is required. The first content data is therefore not damaged, and the audio-visual effect of the program is unaffected during playback.
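A minimal sketch of this lossless superposition, under the assumption that both signals are plain PCM sample arrays at the same sampling rate:

```python
import numpy as np

def mix_sound_wave(content_audio: np.ndarray, sonic: np.ndarray,
                   interval_samples: int) -> np.ndarray:
    """Additively superimpose the sound wave burst once every
    `interval_samples` samples; the content stream is never re-encoded,
    compressed, or otherwise lossily processed."""
    mixed = content_audio.astype(np.float64).copy()
    for start in range(0, len(mixed) - len(sonic) + 1, interval_samples):
        mixed[start:start + len(sonic)] += sonic
    return mixed
```

Because the operation is pure addition, the original program audio passes through unchanged wherever no burst is present.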
S105: and the first terminal receives and plays a first content data stream sent by the server or a first content data stream mixed with the first sound wave data.
S106: and the second terminal records the audio information played by the first terminal and sends the recorded audio information or second characteristic information determined according to the audio information to a server.
The second terminal may be a multimedia device having a recording function. For example, the device may be a mobile phone or a tablet computer.
In another embodiment, the second terminal may start recording the audio information played by the first terminal after receiving a trigger signal for starting information acquisition. The trigger signal may be actively triggered by the user: for example, a vibration signal from the user shaking the mobile phone, the user touching a trigger area on the second terminal's display screen, or the user pressing a trigger button of the second terminal.
The trigger signal may further include: the second terminal being started in the background.
After receiving the trigger signal, the second terminal can record the audio information played by the first terminal.
The audio information recorded by the second terminal may include: second audio data, or second audio data mixed with second sound wave data.
The second terminal may send the recorded audio information to the server.
In another embodiment, the second terminal may determine second characteristic information according to the recorded audio information and send the second characteristic information to the server.
Determining the second characteristic information according to the recorded audio information may specifically include: when the recorded audio information includes second audio data, extracting feature information of the second audio data according to the first extraction rule and using it as the second characteristic information; alternatively, when the recorded audio information is second audio data mixed with second sound wave data, the second sound wave data may be decoded and the decoded information used as the second characteristic information.
S107: the server may receive the included audio information or the second feature information from the second terminal.
S108: and the server matches the second characteristic information determined or received according to the audio information with the first characteristic information.
The server may match the second characteristic information, determined from the audio information or received directly, against the first characteristic information.
If the server receives second feature information sent by the second terminal, the second feature information may be matched with the first feature information.
If the server receives the recorded audio information sent by the second terminal, second characteristic information can be determined according to the audio information, and the second characteristic information is matched with the first characteristic information.
The manner of determining the second characteristic information according to the audio information may be the same as that used by the second terminal in step S106, and is not repeated here.
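As a hedged sketch of the matching step (the patent does not specify a concrete matching algorithm), the server could count how many of the client's cross vectors also occur in the server-side characteristic information and compare the overlap ratio against a threshold. The threshold value is an assumed tuning parameter:

```python
def match_features(first: list, second: list, threshold: float = 0.6) -> bool:
    """Declare a match when enough of the client-side cross vectors also
    occur in the server-side characteristic information. The 0.6 overlap
    threshold is an assumption, not taken from the patent."""
    if not second:
        return False
    first_set = set(first)                       # hashable cross vectors
    hits = sum(1 for vector in second if vector in first_set)
    return hits / len(second) >= threshold
```

A successful match would then trigger the push of the pre-associated second content information in step S109.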
S109: when the second feature information matches the first feature information, the server pushes second content information pre-associated with the first content data stream to the second terminal. The second content information may include interactive content information of the first content data stream, for example an interactive message or an interactive page.
A wireless media interaction method based on a server according to the present application is described below.
Fig. 3 is a flowchart of an embodiment of the server-based wireless media interaction method of the present application. As shown in Fig. 3, the method may include:
S201: acquire first audio data in the first content data stream, or generate first sound wave data corresponding to the first content data stream.
S202: determine first feature information corresponding to the first content data stream from the first audio data or the first sound wave data.
S203: send the first content data stream, or the first content data stream mixed with the first sound wave data, to a first terminal for playing.
S204: receive audio information recorded by a second terminal, or receive second feature information determined by the second terminal from that audio information.
S205: match the second feature information, either determined from the audio information or received directly, against the first feature information.
S206: when the second feature information matches the first feature information, push second content information pre-associated with the first content data stream to the second terminal.
For the specific content of each step in the foregoing embodiment, refer to the embodiment of the wireless media interaction method shown in fig. 1; it is not repeated here.
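Steps S201 to S206 above can be wired together in a small end-to-end sketch. The class name, the storage layout, and the simplified subset-matching rule are all hypothetical; a real server would use the fingerprint extraction and a push transport in place of the placeholders.

```python
class InteractionServer:
    """Sketch of the server-side flow S201-S206 (assumed data layout)."""

    def __init__(self):
        # list of (first feature information, pre-associated second content)
        self.catalog = []

    def register_stream(self, first_features, second_content):
        # S201-S202: derive first feature info and pre-associate content.
        self.catalog.append((frozenset(first_features), second_content))
        # S203 (sending the stream to the first terminal) is omitted here.

    def on_second_terminal(self, second_features):
        # S204-S206: match the recorded features against each stored stream
        # and return the associated content that would be pushed.
        for first, content in self.catalog:
            if frozenset(second_features) <= first:  # simplified match rule
                return content
        return None
```

Usage mirrors the method: register a stream's features with its interactive content, then feed in features reported by a second terminal.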
With the wireless media interaction method provided by this embodiment, a user can connect to a media program through audio data or sound wave data on the second terminal the user holds, realizing wireless media interaction. Because the playback volume of the media program is normally sufficient for the client to record the audio data related to the interactive content, the success rate of connecting to the media interactive content is improved and the user can reliably complete the interaction. Moreover, the method only requires the second terminal to be close enough to the first terminal playing the media to capture the audio data, which keeps the operation convenient. Finally, because the method superimposes the sound wave data directly on the first content data stream, no lossy processing such as compression of the first content data stream is needed, and playback of the media program is not audibly or visually affected.
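The direct superimposition described above can be illustrated with a toy near-ultrasonic FSK mixer. The carrier frequencies, bit duration, and amplitude are invented for this sketch; the point is only that the sound wave data is added onto the audio samples, so the content stream itself is never re-compressed or otherwise transformed.

```python
import numpy as np

SR = 44100                 # sample rate of the content audio (assumed)
F0, F1 = 18000.0, 19000.0  # FSK carriers near the audible limit (assumed)
BIT = 441                  # 10 ms per encoded bit (assumed)

def sound_wave(bits, level=0.05):
    """Encode bits as a quiet two-tone burst -- a stand-in for a sound wave
    encoder driven by the program information."""
    t = np.arange(BIT) / SR
    return np.concatenate(
        [level * np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits])

def superimpose(audio, wave, offset=0):
    """Mix the sound wave into the content audio by plain addition; samples
    outside the burst are untouched and nothing is re-encoded."""
    out = audio.copy()
    out[offset:offset + len(wave)] += wave
    return out
```

Because the burst amplitude is small and its carriers sit near the edge of hearing, the mixed stream plays back essentially unchanged for the listener.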
A wireless media interactive system of the present application is described below. Fig. 1 is a schematic diagram illustrating an embodiment of a wireless media interaction system according to the present application. As shown in fig. 1, the wireless media interaction system may include: a program distribution platform 100, a server 200, a first terminal 300, and a second terminal 400.
Wherein:
the program distribution platform 100 may be configured to send a first content data stream to the server 200.
The server 200 may be configured to obtain first audio data in a first content data stream, or first sound wave data that corresponds to the first content data stream and is generated by a sound wave encoder; to determine first feature information corresponding to the first content data stream from the first audio data or the first sound wave data; and to send the first content data stream, or the first content data stream mixed with the first sound wave data, to the first terminal 300 for playing.
the first terminal 300 may be configured to receive and play the first content data stream sent by the server 200 or the first content data stream mixed with the first sound wave data.
The second terminal 400 may be configured to record the audio information played by the first terminal 300 and send the recorded audio information to the server 200; alternatively, it may be configured to record the audio information played by the first terminal 300, determine second feature information from that audio information, and send the second feature information to the server 200.
The server 200 is further configured to receive the recorded audio information or the second feature information sent by the second terminal 400, and to match the second feature information, either determined from the audio information or received directly, against the first feature information; when the second feature information matches the first feature information, the server 200 pushes second content information pre-associated with the first content data stream to the second terminal 400.
Fig. 4 is a block diagram of one embodiment of a server in the wireless media interaction system of the present application. As shown in Fig. 4, the server 200 may include: an audio/sound wave data acquisition module 201, a first characteristic information determination module 202, a first content data stream sending module 203, an information receiving module 204, a characteristic information matching module 205, and a second content information pushing module 206.
Wherein:
the audio/sound wave data acquiring module 201 may be configured to acquire audio data in the first content data stream or generate sound wave data corresponding to the first content data stream.
The first characteristic information determining module 202 may be configured to determine first characteristic information corresponding to a first content data stream according to the first audio data or the first sound wave data in the audio/sound wave data acquiring module 201.
The first content data stream sending module 203 may be configured to send the first content data stream or the first content data stream mixed with the first sound wave data to the first terminal 300 for playing.
The information receiving module 204 may be configured to receive audio information recorded by the second terminal 400 or receive second feature information determined by the second terminal 400 according to the audio information.
The characteristic information matching module 205 may be configured to match the second characteristic information, either determined from the audio information received by the information receiving module 204 or received directly by that module, against the first characteristic information.
The second content information pushing module 206 may be configured to push second content information pre-associated with the first content data stream to the second terminal 400 when the second feature information matches the first feature information in the feature information matching module 205.
The wireless media interaction system, server, and client provided by the above embodiments correspond respectively to the method embodiments of the present application, can implement those method embodiments, and achieve their technical effects; they are not described in detail again here.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement to a method flow). However, as technology advances, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized with hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually making integrated circuit chips, this programming is now mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development, while the source code to be compiled is written in a particular programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); at present, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing a given logical method flow can be readily obtained simply by briefly programming the method flow into an integrated circuit using one of the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an Application-Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320 microcontrollers; a memory controller may also be implemented as part of the control logic of a memory.
Those skilled in the art will also appreciate that, besides implementing the controller purely as computer-readable program code, the method steps can be logically programmed so that the controller realizes the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included in it for realizing the various functions may also be regarded as structures within the hardware component. Indeed, means for realizing the functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solution of the present application, or the part of it that contributes over the prior art, may be embodied in the form of a software product. In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The computer software product may include instructions for causing a computing device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments, or parts of the embodiments, of the present application. The computer software product may be stored in a memory, which may include volatile memory in a computer-readable medium, Random Access Memory (RAM), and/or non-volatile memory such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium. Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, Phase-change RAM (PRAM), Static RAM (SRAM), Dynamic RAM (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technologies, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can store information accessible by a computing device.
As defined herein, computer readable media does not include transitory computer readable media (transient media), such as modulated data signals and carrier waves.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The application is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
While the present application has been described through embodiments, those of ordinary skill in the art will appreciate that many variations and modifications of the present application exist without departing from its spirit, and it is intended that the appended claims cover such variations and modifications.

Claims (23)

1. A method for wireless media interaction, comprising:
acquiring first audio data in a first content data stream; the first content data stream comprises audio data of a media program; the first audio data has first feature information;
sending the first content data stream to a first terminal for playing;
receiving audio information recorded by a second terminal, and determining second characteristic information according to the audio information;
matching the second characteristic information with the first characteristic information;
when the second characteristic information is matched with the first characteristic information, second content information pre-associated with the first content data stream is pushed to the second terminal; wherein the second content information includes: interactive content information of the first content data stream.
2. The method of claim 1, wherein the first content data stream further comprises: the picture data of the media program and/or the program information of the media program.
3. The method of claim 2, wherein the program information comprises: program identification, program name, and/or program air time.
4. The method of claim 1, wherein acquiring the first audio data in the first content data stream comprises: acquiring all or part of the audio data in the first content data stream, and using the acquired audio data as the first audio data.
5. The method of claim 1, wherein the first feature information comprises: audio feature information extracted from the first audio data according to a first extraction rule.
6. The method of claim 5, wherein the second feature information comprises: feature information extracted from the recorded audio information according to the first extraction rule.
7. The method of claim 5, wherein the first extraction rule specifically comprises:
performing Fourier transform on the audio data frame by frame;
extracting, in the frequency domain, frequency dense points from each frame of the Fourier-transformed audio data;
forming a cross vector from the frequency dense points extracted from two adjacent frames of the Fourier-transformed audio data;
and using the cross vector as the feature information.
8. The method of claim 1, wherein the first content data stream and the first feature information are in a one-to-one correspondence or in a one-to-many correspondence.
9. A method for wireless media interaction, comprising:
generating first acoustic data corresponding to the first content data stream; the first content data stream comprises audio data of a media program; the first acoustic data has first characteristic information;
sending a first content data stream mixed with the first sound wave data to a first terminal for playing;
receiving audio information recorded by a second terminal, and determining second characteristic information according to the audio information;
matching the second characteristic information with the first characteristic information;
when the second characteristic information is matched with the first characteristic information, second content information pre-associated with the first content data stream is pushed to the second terminal; wherein the second content information includes: interactive content information of the first content data stream.
10. The method of claim 9, wherein the first content data stream further comprises: the picture data of the media program and/or the program information of the media program.
11. The method of claim 10, wherein the program information comprises: program identification, program name, and/or program air time.
12. The method of claim 10, wherein generating the first sonic data corresponding to the first content data stream comprises: first sonic data is generated using a sonic encoder based on program information in the first content data stream.
13. The method of claim 9, wherein the first feature information comprises: the data information used to generate the first sound wave data, or information obtained by decoding the first sound wave data.
14. The method of claim 9, wherein the first content data stream mixed with the first sound wave data comprises: a first content data stream obtained by superimposing the first sound wave data on the first content data stream.
15. The method of claim 14, wherein superimposing the first sonic data on the first content data stream comprises: the first acoustic data is mixed in the first content data stream once every preset time interval.
16. The method of claim 14, wherein superimposing the first sonic data on the first content data stream comprises: superimposing the first sonic data in first audio data of the first content data stream; the first audio data is all or part of the audio data in the first content data stream.
17. The method of claim 9, wherein the audio information recorded by the second terminal comprises: audio data mixed with the second sound wave data.
18. The method of claim 17, wherein the second feature information specifically comprises: information obtained by decoding the second sound wave data.
19. The method of claim 9, wherein the first content data stream and the first feature information are in a one-to-one correspondence or in a one-to-many correspondence.
20. A wireless media interaction system, comprising: a program publishing platform, a server, a first terminal, and a second terminal; wherein:
the program publishing platform is used for sending a first content data stream to the server;
the server is used for acquiring first audio data in a first content data stream; the first audio data has first feature information; the server sends the first content data stream to a first terminal for playing;
the first terminal is used for receiving and playing a first content data stream sent by the server;
the second terminal is used for recording the audio information played by the first terminal and sending the recorded audio information to the server;
the server is further used for receiving the recorded audio information sent by the second terminal and determining second characteristic information according to the audio information; matching the second characteristic information with the first characteristic information; when the second characteristic information is matched with the first characteristic information, the server pushes second content information pre-associated with the first content data stream to the second terminal; wherein the second content information includes: interactive content information of the first content data stream.
21. A wireless media interaction system, comprising: a program publishing platform, a server, a first terminal, and a second terminal; wherein:
the program publishing platform is used for sending a first content data stream to the server;
the server is used for generating first sound wave data corresponding to the first content data stream; the first acoustic data has first characteristic information; the server sends a first content data stream mixed with the first sound wave data to a first terminal for playing;
the first terminal is used for receiving and playing a first content data stream which is sent by the server and mixed with the first sound wave data;
the second terminal is used for recording the audio information played by the first terminal and sending the recorded audio information to the server;
the server is further used for receiving the recorded audio information sent by the second terminal and determining second characteristic information according to the audio information; matching the second characteristic information with the first characteristic information; when the second characteristic information is matched with the first characteristic information, the server pushes second content information pre-associated with the first content data stream to the second terminal; wherein the second content information includes: interactive content information of the first content data stream.
22. A wireless media interaction server, comprising: an audio/sound wave data acquisition module, a first characteristic information determination module, a first content data stream sending module, an information receiving module, a characteristic information matching module, and a second content information pushing module; wherein:
the audio/sound wave data acquisition module is used for acquiring audio data in the first content data stream; the first content data stream comprises audio data of a media program;
the first characteristic information determining module is used for determining first characteristic information corresponding to a first content data stream according to first audio data in the audio/sound wave data acquiring module;
the first content data stream sending module is used for sending the first content data stream to a first terminal for playing;
the information receiving module is used for receiving audio information recorded by a second terminal and determining second characteristic information according to the audio information;
the characteristic information matching module is used for matching the second characteristic information with the first characteristic information;
the second content information pushing module is configured to push second content information pre-associated with the first content data stream to the second terminal when second feature information in the feature information matching module matches the first feature information; wherein the second content information includes: interactive content information of the first content data stream.
23. A wireless media interaction server, comprising: an audio/sound wave data acquisition module, a first characteristic information determination module, a first content data stream sending module, an information receiving module, a characteristic information matching module, and a second content information pushing module; wherein:
the audio/sound wave data acquisition module is used for generating first sound wave data corresponding to the first content data stream; the first content data stream comprises audio data of a media program;
the first characteristic information determining module is used for determining first characteristic information corresponding to a first content data stream according to the first sound wave data in the audio/sound wave data acquiring module;
the first content data stream sending module is used for sending the first content data stream mixed with the first sound wave data to a first terminal for playing;
the information receiving module is used for receiving audio information recorded by a second terminal and determining second characteristic information according to the audio information;
the characteristic information matching module is used for matching the second characteristic information with the first characteristic information;
the second content information pushing module is configured to push second content information pre-associated with the first content data stream to the second terminal when second feature information in the feature information matching module matches the first feature information; wherein the second content information includes: interactive content information of the first content data stream.
CN201610088055.1A 2016-02-17 2016-02-17 Wireless media interaction method, system and server Active CN107094262B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110008610.6A CN112752144B (en) 2016-02-17 2016-02-17 Wireless media interaction method and system
CN201610088055.1A CN107094262B (en) 2016-02-17 2016-02-17 Wireless media interaction method, system and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610088055.1A CN107094262B (en) 2016-02-17 2016-02-17 Wireless media interaction method, system and server

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202110008610.6A Division CN112752144B (en) 2016-02-17 2016-02-17 Wireless media interaction method and system

Publications (2)

Publication Number Publication Date
CN107094262A CN107094262A (en) 2017-08-25
CN107094262B true CN107094262B (en) 2021-02-12

Family

ID=59645973

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110008610.6A Active CN112752144B (en) 2016-02-17 2016-02-17 Wireless media interaction method and system
CN201610088055.1A Active CN107094262B (en) 2016-02-17 2016-02-17 Wireless media interaction method, system and server

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202110008610.6A Active CN112752144B (en) 2016-02-17 2016-02-17 Wireless media interaction method and system

Country Status (1)

Country Link
CN (2) CN112752144B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108833964B (en) * 2018-06-11 2022-01-25 阿依瓦(北京)技术有限公司 Real-time continuous frame information implantation identification system
CN108769262B (en) * 2018-07-04 2023-11-17 厦门声连网信息科技有限公司 Large-screen information pushing system, large-screen equipment and method
CN112637147B (en) * 2020-12-13 2022-08-05 青岛希望鸟科技有限公司 Method, terminal and server for establishing and connecting communication service through audio
WO2023102804A1 (en) * 2021-12-09 2023-06-15 青岛希望鸟科技有限公司 Method for creating and connecting communication service through audio, and terminal and server

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103123787A (en) * 2011-11-21 2013-05-29 金峰 Method for synchronizing and exchanging mobile terminal with media
CN103402118A (en) * 2013-07-05 2013-11-20 Tcl集团股份有限公司 Media program interaction method and system
CN103873935A (en) * 2012-12-17 2014-06-18 联想(北京)有限公司 Data processing method and device
CN104378683A (en) * 2014-05-29 2015-02-25 腾讯科技(深圳)有限公司 Program based interaction method and device
CN104519373A (en) * 2014-12-16 2015-04-15 微梦创科网络科技(中国)有限公司 Media program interaction method and related equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6393137B1 (en) * 1999-06-17 2002-05-21 Raytheon Company Multi-resolution object classification method employing kinematic features and system therefor
US8593502B2 (en) * 2006-01-26 2013-11-26 Polycom, Inc. Controlling videoconference with touch screen interface
US9965524B2 (en) * 2013-04-03 2018-05-08 Salesforce.Com, Inc. Systems and methods for identifying anomalous data in large structured data sets and querying the data sets
CN103763586B (en) * 2014-01-16 2017-05-10 北京酷云互动科技有限公司 Television program interaction method and device and server
CN104050259A (en) * 2014-06-16 2014-09-17 上海大学 Audio fingerprint extracting method based on SOM (Self Organized Mapping) algorithm


Also Published As

Publication number Publication date
CN107094262A (en) 2017-08-25
CN112752144B (en) 2024-03-08
CN112752144A (en) 2021-05-04

Similar Documents

Publication Publication Date Title
CN107094262B (en) Wireless media interaction method, system and server
TW201132122A (en) System and method in a television for providing user-selection of objects in a television program
EP2044592A1 (en) A device and a method for playing audio-video content
US10904617B1 (en) Synchronizing a client device with media content for scene-specific notifications
CN104284238A (en) Video playing method and device based on two-dimension code
CN105122370A (en) Syntax-aware manipulation of media files in a container format
KR101991188B1 (en) Promotion information processing method, device, and apparatus, and non-volatile computer storage medium
CN111131848A (en) Video live broadcast data processing method, client and server
CN111314773A (en) Screen recording method and device, electronic equipment and computer readable storage medium
JP2014534513A (en) Method and user interface for classifying media assets
CN112866776A (en) Video generation method and device
CN104010197A (en) Method and device for generating video thumbnails
US20130117464A1 (en) Personalized media filtering based on content
US10223525B2 (en) Display apparatus and method for controlling display apparatus thereof
CN112492382B (en) Video frame extraction method and device, electronic equipment and storage medium
KR20160011532A (en) Method and apparatus for displaying videos
CN112911373B (en) Video subtitle generating method, device, equipment and storage medium
CN112738564B (en) Data processing method and device, electronic equipment and storage medium
CN103631876A (en) Method and system for displaying network page and terminal device
CN110633117B (en) Data processing method, device, electronic equipment and readable medium
WO2019237965A1 (en) Human-computer interaction and television operation control method, apparatus and device, and storage medium
Kelly Mobile video platforms and the presence of Aura
WO2022156646A1 (en) Video recording method and device, electronic device and storage medium
CN109151523B (en) Multimedia content acquisition method and device
CN114979729A (en) Video data processing method and device and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 1240000
Country of ref document: HK

GR01 Patent grant