CN108024121B - Voice barrage synchronization method and system - Google Patents


Info

Publication number: CN108024121B (grant); CN108024121A (application)
Application number: CN201711145376.1A
Authority: CN (China)
Prior art keywords: audio, voice, sub-data, video information
Legal status: Active
Inventor: 王强 (Wang Qiang)
Assignee: Wuhan Wei Yao Technology And Culture Co Ltd
Application filed by Wuhan Wei Yao Technology And Culture Co Ltd; published as CN108024121A; granted as CN108024121B.

Classifications

    • H04N21/233 Processing of audio elementary streams
    • H04N21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/8455 Structuring of content, e.g. decomposing content into time segments, involving pointers to the content, e.g. pointers to the I-frames of the video stream
    • H04N21/8456 Structuring of content by decomposing the content in the time domain, e.g. in time segments
    • H04L51/046 Real-time or near real-time messaging: interoperability with other network applications or services
    • H04M1/7243 Mobile-telephone user interfaces with interactive means for internal management of messages
    • H04M1/72433 Mobile-telephone user interfaces for voice messaging, e.g. dictaphones
    • H04M1/72454 Mobile-telephone user interfaces adapting device functionality according to context-related or environment-related conditions

Abstract

An embodiment of the invention provides a voice barrage synchronization method and system, applied to an electronic terminal and a server that can communicate with each other. In the method, the electronic terminal collects the audio/video information being played on an audio device and sends it to the server. The server obtains the voice data corresponding to the audio/video information and judges whether its data size exceeds a preset value; if so, it divides the voice data into a plurality of sub-packets of a preset length and stores them. The server then sends the sub-packets to the audio device in sequence, according to the time correspondence between the sub-packets and the voice data, so that the audio device displays them synchronously, in bullet-screen form, within the audio/video being played. The embodiment thereby avoids the slow barrage loading and poor synchronization otherwise caused by oversized voice data during audio/video playback.

Description

Voice barrage synchronization method and system
Technical Field
The invention relates to the technical field of wireless communication, and in particular to a voice barrage synchronization method and system.
Background
In existing WeChat mini-programs and H5 pages, large audio streams can only be handled by preloading to improve the user experience. Because the stream is large, downloading it consumes a great deal of the user's data traffic and a long download time, which makes for a poor experience for voice-barrage users. In addition, current voice barrages synchronize poorly because large voice packets are downloaded to the mobile client in advance and then played; a real-time synchronized voice barrage, such as one triggered from a WeChat "shake" TV scene, is not suited to downloading large audio streams in advance.
Disclosure of Invention
In view of the above, the present invention provides a voice barrage synchronization method and system that can effectively solve the above problems.
A preferred embodiment of the invention provides a voice barrage synchronization method, applied to an electronic terminal and a server in communication with each other, comprising the following steps:
the electronic terminal collects the audio/video information being played on an audio device and sends it to the server;
the server obtains the voice data corresponding to the audio/video information and judges whether the data size of the voice data exceeds a preset value; if so, the voice data is divided into a plurality of sub-packets of a preset length and stored; and
the server sends the plurality of sub-packets to the audio device in sequence, according to the time correspondence between the sub-packets and the voice data, so that the audio device displays them synchronously, in bullet-screen form, within the audio/video being played.
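As a rough illustration of the three claimed steps (collect, split when oversized, stream in order), the following Python sketch uses an assumed threshold, an assumed packet length and a plain callback for sending; none of these names or sizes come from the patent itself.

```python
# Hypothetical sketch of the claimed flow. The threshold, packet length and
# the callback-based "send" are illustrative assumptions, not the patented
# implementation.

PRESET_VALUE = 1024   # data-size threshold above which voice data is split (assumed)
PACKET_LEN = 256      # preset sub-packet length in bytes (assumed)

def split_voice_data(voice_data):
    """Divide voice data into sub-packets of a preset length when oversized."""
    if len(voice_data) <= PRESET_VALUE:
        return [voice_data]                 # small enough: keep as one packet
    return [voice_data[i:i + PACKET_LEN]
            for i in range(0, len(voice_data), PACKET_LEN)]

def stream_to_audio_device(packets, send):
    """Send sub-packets one by one in their original (time) order."""
    for packet in packets:                  # list order preserves the time correspondence
        send(packet)
```

Because the sub-packets are produced by slicing in order, streaming the list front to back is what preserves the time correspondence with the original voice data.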
In an option of the preferred embodiment of the present invention, the step of obtaining the corresponding voice data from the audio/video information includes:
creating an index from the playing content, the episode identification and the current playing time contained in the audio/video information; and
comparing the index with pre-stored audio/video data and obtaining the corresponding voice data from the comparison result.
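The index-and-compare lookup described above can be sketched as a dictionary keyed by the three fields the description names; the key format and the in-memory store standing in for the pre-stored audio/video data are illustrative assumptions.

```python
# Illustrative index lookup. The three key fields come from the description;
# the key format and the dict used as the pre-stored data store are assumptions.

def make_index(content_id, episode_id, play_time_s):
    """Create an index from playing content, episode identification and
    current playing time."""
    return f"{content_id}:{episode_id}:{play_time_s}"

def find_voice_data(index, store):
    """Compare the index against the pre-stored data and return the matching
    voice data, or None when nothing matches."""
    return store.get(index)
```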
In an option of the preferred embodiment of the present invention, the method further comprises:
the electronic terminal or the server storing each sub-packet displayed as a bullet screen, together with the audio/video data played on the audio device, in a local file; or
the server adding each sub-packet at the corresponding position of the pre-stored audio/video data and storing it.
In an option of the preferred embodiment of the present invention, the step of the electronic terminal collecting the audio/video information being played on the audio device includes:
collecting the audio/video information being played on the audio device in response to an audio/video information acquisition instruction, where the audio/video information includes the playing content, the episode identification and the current playing time.
In an option of the preferred embodiment of the present invention, responding to the audio/video information acquisition instruction may be implemented as follows:
detecting the shaking state of the electronic terminal and, when the shaking state meets a preset value, judging that audio/video information needs to be collected; or
detecting a screen pressure value on the electronic terminal and, when the pressure value meets a preset value, judging that audio/video information needs to be collected.
In an option of the preferred embodiment of the present invention, the audio/video information may be audio information or an image of the audio/video playback screen.
A preferred embodiment of the present invention further provides a voice barrage synchronization method, applied to a server communicatively connectable to an electronic terminal, the method comprising:
receiving the audio/video information, collected by the electronic terminal, that is being played on an audio device;
obtaining the voice data corresponding to the audio/video information, judging whether the data size of the voice data exceeds a preset value, and, if so, dividing the voice data into a plurality of sub-packets of a preset length and storing them; and
sending the plurality of sub-packets to the audio device in sequence, based on the time correspondence between the sub-packets and the voice data, so that the audio device displays them synchronously, in bullet-screen form, within the audio/video being played.
A preferred embodiment of the invention also provides a voice barrage synchronization system comprising an electronic terminal, an audio device and a server, the server being communicatively connected to the electronic terminal and to the audio device;
the electronic terminal is configured to collect the audio/video information being played on the audio device and send it to the server;
the server is configured to obtain the voice data corresponding to the audio/video information and judge whether the data size of the voice data exceeds a preset value, and, if so, to divide the voice data into a plurality of sub-packets of a preset length and store them; and
the server sends the plurality of sub-packets to the audio device in sequence, according to the time correspondence between the sub-packets and the voice data, so that the audio device displays them synchronously, in bullet-screen form, within the audio/video being played.
In an option of the preferred embodiment of the present invention, the server includes:
an information receiving module for receiving the audio/video information, collected by the electronic terminal, that is being played on the audio device;
a judging module for obtaining the voice data corresponding to the audio/video information, judging whether the data size of the voice data exceeds a preset value, and, if so, dividing the voice data into a plurality of sub-packets of a preset length and storing them; and
a voice synchronization module for sending the plurality of sub-packets to the audio device in sequence, based on the time correspondence between the sub-packets and the voice data, so that the audio device displays them synchronously, in bullet-screen form, within the audio/video being played.
In an option of the preferred embodiment of the present invention, the judging module includes:
an index creation unit for creating an index from the playing content, the episode identification and the current playing time contained in the audio/video information; and
a voice acquisition unit for comparing the index with pre-stored audio/video data and obtaining the corresponding voice data from the comparison result.
Compared with the prior art, the voice barrage synchronization method and system provided by the invention divide oversized voice data into sub-packets, which avoids the slow barrage loading and poor synchronization otherwise caused by oversized voice data during audio/video playback. At the same time, while the loading of the barrage voice data remains synchronized, the amount of audio data downloaded in real time is reduced, effectively improving the user experience.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and should therefore not be considered limiting of its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a schematic view of an interaction scene of a voice barrage synchronization system according to an embodiment of the present invention.
Fig. 2 is a block diagram of the server shown in fig. 1.
Fig. 3 is a flowchart illustrating a voice barrage synchronization method according to an embodiment of the present invention.
Fig. 4 is a sub-flow diagram of a voice barrage synchronization method according to an embodiment of the present invention.
Fig. 5 is another flowchart of a voice barrage synchronization method according to an embodiment of the present invention.
Fig. 6 is a schematic block diagram of a voice barrage synchronizer according to an embodiment of the present invention.
Fig. 7 is a block diagram of the determining module shown in fig. 6.
Reference numerals: 10 - electronic terminal; 20 - server; 100 - voice barrage synchronization apparatus; 110 - information receiving module; 120 - judging module; 121 - index creation unit; 122 - voice acquisition unit; 130 - voice synchronization module; 200 - memory; 300 - memory controller; 400 - processor; 30 - audio device.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
As shown in fig. 1, a schematic view of an interaction scenario of a voice barrage synchronization system according to an embodiment of the present invention is provided, where the interaction scenario includes an electronic terminal 10, a server 20, and an audio device 30, and the electronic terminal 10, the server 20, and the audio device 30 are communicatively connected to each other through a network.
Specifically, the electronic terminal 10 is configured to collect the audio/video information being played on the audio device 30 and send it to the server 20. The server 20 is configured to obtain the voice data corresponding to the audio/video information and judge whether its data size exceeds a preset value; if so, it divides the voice data into a plurality of sub-packets of a preset length and stores them. The server 20 then sends the sub-packets to the audio device 30 in sequence, according to the time correspondence between the sub-packets and the voice data, so that the audio device 30 displays them synchronously, in bullet-screen form, within the audio/video being played.
Alternatively, as shown in fig. 2, the server 20 includes a voice barrage synchronization apparatus 100, a memory 200, a memory controller 300 and a processor 400. The memory 200, the memory controller 300 and the processor 400 are electrically connected to one another, directly or indirectly, to enable data transmission or interaction; for example, the components may be electrically connected through one or more communication buses or signal lines. The voice barrage synchronization apparatus 100 includes at least one software functional module, which may be stored in the memory 200 in the form of software or firmware, or built into the operating system of the server 20. The processor 400 accesses the memory 200 under the control of the memory controller 300 to execute the executable modules stored there, such as the software functional modules and computer programs included in the voice barrage synchronization apparatus 100. In addition, the server 20 may comprise a bullet-screen server, a video server, an application server and the like.
Alternatively, the electronic terminal 10 may be, but is not limited to, a smartphone, an iPad, a computer, a server and the like. The audio device 30 may be, but is not limited to, a mobile phone, a television, an MP4 player and the like. In addition, the audio device 30 may be integrated with the electronic terminal 10 according to actual requirements.
It should be understood that the configuration shown in fig. 2 is merely illustrative. The server 20 may have more or fewer components than shown in fig. 2, or may have a different configuration than shown in fig. 2. Wherein the components shown in fig. 2 may be implemented by software, hardware, or a combination thereof.
Fig. 3 is a schematic flow chart of a voice barrage synchronization method according to a preferred embodiment of the present invention. The voice barrage synchronization method is applied to the electronic terminal 10 and the server 20 which are communicatively connected with each other as shown in fig. 1. The specific process and steps of the voice barrage synchronization method will be described in detail with reference to fig. 3.
In step S110, the electronic terminal 10 collects the audio/video information being played in the audio device 30 and sends the information to the server 20.
Specifically, the audio/video information may include, but is not limited to, the playing content, the episode identification, the current playing time and so on. The electronic terminal 10 may collect the audio/video information being played on the audio device 30 by responding to an audio/video information acquisition instruction. Optionally, the audio/video information may be audio information, video image information and the like, which is not limited in this embodiment.
Further, the implementation manner of responding to the audio/video information acquisition instruction includes: detecting the shaking state of the electronic terminal 10, and judging that audio and video information needs to be acquired when the shaking state meets a preset value; or detecting a screen pressure value in the electronic terminal 10, and determining that audio/video information acquisition is required when the pressure value meets a preset value.
For example, take WeChat's "shake" feature on a mobile phone: when barrage information is to be displayed in the audio/video being played, the user starts the shake function and shakes the phone; an application in the phone detects the phone's shaking state and, when the shaking state meets a threshold, starts the audio/video information collection function. The threshold may be a shaking duration, a shaking frequency and so on.
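A minimal sketch of the shake trigger follows; the duration and frequency thresholds are assumed values standing in for the threshold the description leaves unspecified.

```python
# Minimal sketch of the shake trigger. The duration and frequency thresholds
# are assumed values standing in for the unspecified "threshold".

SHAKE_MIN_DURATION_S = 0.5   # assumed minimum shake duration in seconds
SHAKE_MIN_FREQ_HZ = 2.0      # assumed minimum shake frequency in hertz

def should_collect(shake_duration_s, shake_freq_hz):
    """Return True when the shaking state meets the preset values."""
    return (shake_duration_s >= SHAKE_MIN_DURATION_S
            and shake_freq_hz >= SHAKE_MIN_FREQ_HZ)
```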
Step S120, the server 20 obtains the corresponding voice data according to the audio/video information and determines whether the data size of the voice data is greater than a preset value, and if so, the voice data is divided into a plurality of sub-packets with preset lengths and stored.
In this embodiment, the voice data may be a pre-recorded voice file, such as a celebrity's voice, or a voice file recorded and uploaded in real time by a user through a mobile terminal while watching. When searching for the voice file corresponding to the audio/video data, the received audio/video information source file may be compared one by one against the pre-stored audio/video data, or a keyword search may be run against an index file generated from the audio/video information. For example, as shown in fig. 4, the steps for generating the index used in an index-based search are as follows.
A substep S121, creating an index according to the playing content, the episode identification and the current playing time contained in the audio and video information;
and a substep S122, comparing the index with pre-stored audio/video data, and acquiring corresponding voice data according to a comparison result.
It should be noted that the index may be generated by the server 20 itself after it receives the audio/video information sent by the electronic terminal 10, or generated directly by the electronic terminal 10 after it collects the audio/video information, which improves the data transmission rate and the search efficiency during the voice-data search.
Further, after the server 20 finds the corresponding voice data, in order to reduce the download rate and the network load and improve the synchronization between the voice barrage and the audio/video being played, the size of the found voice data must be checked; if it exceeds a preset value, the voice data is divided into a plurality of sub-packets of a preset length. For example, when the voice data exceeds 20 MB, it is divided into sub-packets of 45 KB each. For 20 MB of voice data, the user then only needs to download about 900 KB of voice sub-packets to begin playback, which greatly improves the synchronization between the voice barrage and the audio/video being played and avoids barrage content falling out of sync with the video because of an over-long loading time.
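The arithmetic behind this example can be checked directly. The 20-packet initial buffer below is an assumption chosen to match the roughly 900 KB figure, and 1 KB is taken as 1024 bytes.

```python
import math

# Checking the example's arithmetic: 20 MB of voice data split into 45 KB
# sub-packets. The 20-packet initial buffer is an assumption chosen to match
# the rough 900 KB figure; 1 KB is taken as 1024 bytes.

TOTAL_BYTES = 20 * 1024 * 1024   # 20 MB of voice data
PACKET_BYTES = 45 * 1024         # one 45 KB sub-packet
INITIAL_PACKETS = 20             # assumed initial buffer depth

def packet_count(total, packet):
    """Number of sub-packets produced by the split."""
    return math.ceil(total / packet)

def initial_download(packet, n):
    """Bytes downloaded before playback starts, for n buffered packets."""
    return packet * n
```

Splitting 20 MB this way yields 456 sub-packets, and buffering 20 of them costs exactly 900 KiB, a small fraction of the full stream.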
In step S130, the server 20 sequentially sends a plurality of sub-packets to the audio device 30 according to the time correspondence between the plurality of sub-packets and the voice data, so that the audio device 30 synchronously displays the sub-packets in the audio/video being played in a bullet screen manner.
In this embodiment, when the divided sub-packets are sent to the audio device 30 and displayed synchronously, the sending order of the sub-packets must be preserved, their synchronization with the audio/video being played must be maintained, and the continuity of the voice data must be ensured. The sending order of each sub-packet may be determined from the time order of the original voice data, and the synchronization between the voice barrage and the audio/video being played may be maintained from the relationship between the received audio/video information and the time consumed by playback, data transmission, processing and the like.
For example, when the episode identification and the exact video playing time (which may be accurate to the second) are identified through the WeChat shake, the server 20 can fetch the current minute's voice barrages and the next minute's through the time offset, the episode identification and so on, and then locate the current voice data to the second, achieving real-time voice barrage synchronization and greatly improving the user experience.
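A hypothetical minute-bucketed lookup matching this description: fetch the current minute's barrages plus the next minute's, then narrow by second. The bucket layout and names are assumptions, not the patented data model.

```python
# Hypothetical minute-bucketed barrage lookup. Buckets map a minute number to
# (second, packet) pairs; the layout and names are assumptions based on the
# description, not the patented data model.

def fetch_barrages(buckets, play_time_s):
    """Fetch the current minute's barrages plus the next minute's, keeping
    only those at or after the current second."""
    minute = play_time_s // 60
    candidates = buckets.get(minute, []) + buckets.get(minute + 1, [])
    return [(t, pkt) for t, pkt in candidates if t >= play_time_s]
```

Prefetching the next minute's bucket is what lets playback cross a minute boundary without a visible gap in the barrage stream.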
Step S140, the electronic terminal 10 or the server 20 stores each sub-packet displayed as a bullet screen, together with the audio/video data played on the audio device 30, in a local file; or the server 20 adds each sub-packet at the corresponding position of the pre-stored audio/video data and stores it.
By storing the synchronously displayed voice barrages together with the audio/video, earlier viewing activity can be seen when the audio/video is watched again, especially for voice barrages recorded and uploaded by users, which effectively improves the viewing experience.
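The second storage option, adding each sub-packet at its position in the pre-stored audio/video record, might look like this sketch, where the timestamp-keyed dictionary standing in for the stored audio/video data is an assumption.

```python
# Illustrative storage of displayed sub-packets at their corresponding
# positions. The timestamp-keyed dict standing in for the pre-stored
# audio/video record is an assumption.

def attach_barrage(av_record, play_time_s, packet):
    """Add a displayed sub-packet at the corresponding playback position of
    the stored audio/video data."""
    av_record.setdefault(play_time_s, []).append(packet)
```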
Fig. 5 is a schematic flow chart of a voice barrage synchronization method according to a preferred embodiment of the present invention. The voice bullet screen synchronization method is applied to the server 20 shown in fig. 1. The specific process and steps of the voice barrage synchronization method will be described in detail with reference to fig. 5.
Step S210, receiving the audio/video information, collected by the electronic terminal 10, that is being played on the audio device 30.
Step S220, obtaining the corresponding voice data according to the audio/video information and determining whether the data size of the voice data is greater than a preset value, if so, dividing the voice data into a plurality of sub-packets with preset lengths and storing the sub-packets.
Step S230, sequentially sending the plurality of sub-packets to the audio device 30 based on the corresponding relationship between the sub-packets and the time of the voice data, so that the audio device 30 synchronously displays the sub-packets in the playing audio and video in a bullet screen manner.
The method in this embodiment has the same technical features as the method in the previous embodiment, and reference may be made to the description in the foregoing embodiment, which is not repeated herein.
Further, as shown in fig. 6, a schematic block structure diagram of the voice barrage synchronizer 100 applied to the server 20 is provided, where the voice barrage synchronizer 100 includes an information receiving module 110, a determining module 120, and a voice synchronizing module 130.
The information receiving module 110 is configured to receive audio and video information being played in the audio device 30, which is acquired by the electronic terminal 10. In this embodiment, the detailed description of step S110 shown in fig. 3 may be specifically referred to for the description of the information receiving module 110, that is, step S110 may be executed by the information receiving module 110.
The determining module 120 is configured to acquire the voice data corresponding to the audio/video information, determine whether the data size of the voice data is greater than a preset value, and, if so, divide the voice data into a plurality of sub-data packets of preset length and store them. In this embodiment, for details of the determining module 120, refer to the detailed description of step S120 shown in fig. 3; that is, step S120 may be executed by the determining module 120. Optionally, as shown in fig. 7, the determining module 120 includes an index creating unit 121 and a voice acquiring unit 122.
The index creating unit 121 is configured to create an index from the playing content, the episode identifier, and the current playing time contained in the audio/video information. In this embodiment, for details of the index creating unit 121, refer to the detailed description of step S121 shown in fig. 4; that is, step S121 may be executed by the index creating unit 121.
The voice acquiring unit 122 is configured to compare the index with pre-stored audio/video data and acquire the corresponding voice data according to the comparison result. In this embodiment, for details of the voice acquiring unit 122, refer to the detailed description of step S122 shown in fig. 4; that is, step S122 may be executed by the voice acquiring unit 122.
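A minimal sketch of the two-unit flow (index creation in unit 121, lookup in unit 122); the tuple key format and the pre-stored table are illustrative assumptions:

```python
# Sketch of units 121/122: build an index from the playing content,
# episode identifier, and current playing time, then match it against a
# pre-stored audio/video table to retrieve the corresponding voice data.
# The key format and table contents are illustrative assumptions.
from typing import Optional

def build_index(content: str, episode: str, play_time_s: int) -> tuple:
    return (content, episode, play_time_s)

# Pre-stored audio/video data keyed by the same index fields (assumed layout).
PRESTORED = {
    ("ShowA", "ep01", 120): b"voice-barrage-bytes-at-2min",
}

def acquire_voice_data(index: tuple) -> Optional[bytes]:
    # Comparison result: a hit returns the stored voice data, a miss None.
    return PRESTORED.get(index)
```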
The voice synchronization module 130 is configured to sequentially send the sub-data packets to the audio device 30 according to the time correspondence between the sub-data packets and the voice data, so that the audio device 30 displays them synchronously, in barrage form, over the audio/video being played. In this embodiment, for details of the voice synchronization module 130, refer to the detailed description of step S130 shown in fig. 3; that is, step S130 may be executed by the voice synchronization module 130.
In summary, the voice barrage synchronization method and system provided by the present invention divide large voice data into sub-data packets, thereby avoiding the slow barrage loading and poor synchronization that overly large voice data would otherwise cause during audio/video playback. At the same time, while synchronous loading of the barrage voice data is guaranteed, the amount of audio data downloaded in real time is reduced, effectively improving the user experience.
In the description of the present invention, the terms "disposed", "connected", and "coupled" should be interpreted broadly: the connection may, for example, be fixed, detachable, or integral; it may be mechanical or electrical; and it may be direct, indirect through an intermediate medium, or internal to two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art on a case-by-case basis.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may also be implemented in other ways. The apparatus and method embodiments described above are merely illustrative. The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code, which comprises one or more elements designed to implement the specified logical function.
It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A voice barrage synchronization method, applied to an electronic terminal and a server in communication with each other, characterized by comprising the following steps:
the electronic terminal collects audio and video information being played in the audio equipment and sends the audio and video information to the server;
the server acquires voice data corresponding to the audio/video information and determines whether the data size of the voice data is greater than a preset value; if so, the server divides the voice data into a plurality of sub-data packets of preset length and stores them;
and the server sequentially sends the plurality of sub-data packets to the audio device according to the time correspondence between the sub-data packets and the voice data, so that the audio device synchronously displays the sub-data packets, in bullet-screen form, over the audio/video being played.
2. The voice barrage synchronization method according to claim 1, wherein the step of acquiring the voice data corresponding to the audio/video information comprises:
creating an index from the playing content, the episode identifier, and the current playing time contained in the audio/video information;
and comparing the index with pre-stored audio/video data, and acquiring the corresponding voice data according to the comparison result.
3. The method of claim 2, further comprising:
the electronic terminal or the server stores each bullet-screen-displayed sub-data packet, in correspondence with the audio/video data played on the audio device, to a local file; or
the server adds each sub-data packet to the corresponding position of the pre-stored audio/video data and stores it.
4. The voice barrage synchronization method according to claim 1, wherein the step of the electronic terminal collecting the audio/video information being played on the audio device comprises:
collecting, in response to an audio/video information collection instruction, the audio/video information being played on the audio device, wherein the audio/video information comprises the playing content, the episode identifier, and the current playing time.
5. The voice barrage synchronization method according to claim 4, wherein responding to the audio/video information collection instruction comprises either of the following implementations:
detecting a shaking state of the electronic terminal, and determining that audio/video information needs to be collected when the shaking state meets a preset value; or
detecting a screen pressure value of the electronic terminal, and determining that audio/video information needs to be collected when the pressure value meets a preset value.
6. The voice barrage synchronization method according to claim 1, wherein the audio/video information is audio information or an audio/video playing picture.
7. A voice barrage synchronization method, applied to a server communicably connected with an electronic terminal, characterized by comprising the following steps:
receiving audio/video information being played on an audio device, as collected by the electronic terminal;
acquiring voice data corresponding to the audio/video information, determining whether the data size of the voice data is greater than a preset value, and, if so, dividing the voice data into a plurality of sub-data packets of preset length and storing them;
and sequentially sending the plurality of sub-data packets to the audio device according to the time correspondence between the sub-data packets and the voice data, so that the audio device synchronously displays the sub-data packets, in bullet-screen form, over the audio/video being played.
8. A voice barrage synchronization system, characterized by comprising an electronic terminal, an audio device, and a server communicably connected with the electronic terminal and the audio device respectively, wherein:
the electronic terminal is configured to collect audio/video information being played on the audio device and send the audio/video information to the server;
the server is configured to acquire voice data corresponding to the audio/video information and determine whether the data size of the voice data is greater than a preset value, and, if so, divide the voice data into a plurality of sub-data packets of preset length and store them; and
the server sequentially sends the plurality of sub-data packets to the audio device according to the time correspondence between the sub-data packets and the voice data, so that the audio device synchronously displays the sub-data packets, in bullet-screen form, over the audio/video being played.
9. The system according to claim 8, wherein the server comprises a memory, a storage controller, a processor, and a voice barrage synchronizer; the memory, the storage controller, and the processor are electrically connected to one another; the voice barrage synchronizer is stored in the memory and comprises:
an information receiving module, configured to receive audio/video information being played on the audio device, as collected by the electronic terminal;
a judging module, configured to acquire the voice data corresponding to the audio/video information, determine whether the data size of the voice data is greater than a preset value, and, if so, divide the voice data into a plurality of sub-data packets of preset length and store them;
and a voice synchronization module, configured to sequentially send the plurality of sub-data packets to the audio device according to the time correspondence between the sub-data packets and the voice data, so that the audio device synchronously displays the sub-data packets, in bullet-screen form, over the audio/video being played.
10. The system of claim 9, wherein the determining module comprises:
an index creating unit, configured to create an index from the playing content, the episode identifier, and the current playing time contained in the audio/video information;
and a voice acquiring unit, configured to compare the index with pre-stored audio/video data and acquire the corresponding voice data according to the comparison result.
CN201711145376.1A 2017-11-17 2017-11-17 Voice barrage synchronization method and system Active CN108024121B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711145376.1A CN108024121B (en) 2017-11-17 2017-11-17 Voice barrage synchronization method and system


Publications (2)

Publication Number Publication Date
CN108024121A CN108024121A (en) 2018-05-11
CN108024121B true CN108024121B (en) 2020-02-07

Family

ID=62079810


Country Status (1)

Country Link
CN (1) CN108024121B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111026558B (en) * 2019-11-25 2020-11-17 上海哔哩哔哩科技有限公司 Bullet screen processing method and system based on WeChat applet

Citations (8)

Publication number Priority date Publication date Assignee Title
CN102937972A (en) * 2012-10-15 2013-02-20 上海外教社信息技术有限公司 Audiovisual subtitle making system and method
CN104822093A (en) * 2015-04-13 2015-08-05 腾讯科技(北京)有限公司 Comment issuing method and device thereof
CN104994401A (en) * 2015-07-03 2015-10-21 王春晖 Barrage processing method, device and system
CN105657482A (en) * 2016-03-28 2016-06-08 广州华多网络科技有限公司 Voice barrage realization method and device
US9602858B1 (en) * 2013-01-28 2017-03-21 Agile Sports Technologies, Inc. Method and system for synchronizing multiple data feeds associated with a sporting event
CN106878805A (en) * 2017-02-06 2017-06-20 广东小天才科技有限公司 A kind of mixed languages subtitle file generation method and device
CN107105324A (en) * 2017-03-31 2017-08-29 武汉斗鱼网络科技有限公司 A kind of method and client of protection barrage information
CN107277594A (en) * 2017-07-06 2017-10-20 广州华多网络科技有限公司 A kind of video and audio and barrage synchronous method and device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9257148B2 (en) * 2013-03-15 2016-02-09 JBF Interlude 2009 LTD System and method for synchronization of selectably presentable media streams




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant