Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
At present, "network live broadcast" falls roughly into two types. In the first type, television signals are made available on the internet for viewing, such as live broadcasts of sports competitions and cultural events; here, television (analog) signals are collected, converted into digital signals, input into a computer, and uploaded to a website in real time for people to watch, which amounts to "network television". The second type is live webcast in the true sense: independent signal acquisition equipment is set up at the live broadcast site to collect audio and video data, which are fed into a director end (generally director equipment or a director platform), then uploaded to a server over the network and published to a website for viewing by user clients. The greatest difference between this second type and its predecessor lies in its autonomy: independently controllable audio and video acquisition is entirely different from simply relaying a single television signal. Moreover, network live broadcast can serve applications that television media find difficult to cover, such as open government meetings, public hearings, court trial broadcasts, civil-service examination training, product launches, enterprise meetings, industry conferences, exhibition broadcasts, and the like.
As network live broadcast develops, more and more anchors raise their popularity by connecting with other anchors, or wish to commentate an event or watch a video together with other anchors. The server must then mix the video data of these anchors into a single mixed-picture stream and push it to viewers. The mixed-picture delay between the pictures of the individual anchors is therefore an important index of the mixed-picture stream.
As shown in fig. 1, an application scenario of the method for measuring mixed-picture delay provided in the embodiment of the present invention is one in which a server 10 is communicatively connected to a live broadcast initiating terminal 20 and a live broadcast receiving terminal 30. In this application scenario, the server 10, the live broadcast initiating terminal 20, and the live broadcast receiving terminal 30 are simulated by devices in the product testing stage. The server 10 may be a server that provides a live service for the live broadcast initiating terminal 20 and the live broadcast receiving terminal 30. The live broadcast initiating terminal 20 may be the terminal device corresponding to an anchor after the product goes online; the anchor may send the server 10 a request to join a live broadcast room through the live broadcast initiating terminal 20, and send it a live video stream. The live broadcast receiving terminal 30 may be the terminal device corresponding to a viewer (user) after the product goes online; a viewer may obtain the live interactive video stream sent by the server 10 through the live broadcast receiving terminal 30, so as to watch the live video.
In some implementation scenarios, the live broadcast initiating terminal 20 and the live broadcast receiving terminal 30 may be used interchangeably. For example, an anchor may use the live broadcast initiating terminal 20 to provide a live video service to viewers, or may use it as a viewer to watch live video provided by other anchors. Likewise, a viewer may use the live broadcast receiving terminal 30 to watch live video provided by an anchor, or may use it as an anchor to provide a live video service to other viewers.
In the embodiment of the present invention, the live broadcast initiating terminal 20 and the live broadcast receiving terminal 30 may be, but are not limited to, a smart phone, a tablet computer, a personal computer, a notebook computer, a virtual reality terminal device, an augmented reality terminal device, and the like. Internet products for providing internet live broadcast services may be installed in the live broadcast initiating terminal 20 and the live broadcast receiving terminal 30; for example, the internet products may be applications (apps), World Wide Web (Web) pages, applets, and the like used on a computer or smart phone and related to internet live broadcast services.
Although fig. 1 shows only the server 10 communicatively coupled to one live broadcast initiating terminal 20 and one live broadcast receiving terminal 30, it should be understood that the server 10 of the present disclosure may be communicatively coupled to a plurality of live broadcast initiating terminals 20 and a plurality of live broadcast receiving terminals 30.
As shown in fig. 2, in one implementation of the embodiment of the present invention, the server 10, the live broadcast initiating terminal 20, the live broadcast receiving terminal 30, and the like may each include a storage device 12, a computer-readable medium 13, and a processor 14. The storage device 12 and the processor 14 are electrically connected, directly or indirectly, to enable the transfer or interaction of data; for example, they may be connected via one or more communication buses or signal lines. The computer-readable medium 13 includes at least one software functional module that can be stored in the storage device 12 in the form of software or firmware. The processor 14 is configured to execute an executable computer program stored in the storage device 12, for example the software functional modules and computer programs included in the computer-readable medium 13, so as to implement the method for measuring mixed-picture delay disclosed in the embodiment of the present invention.
It is understood that the structure shown in fig. 2 is only illustrative; the server 10, the live broadcast initiating terminal 20, the live broadcast receiving terminal 30, and the like may include more or fewer components than those shown in fig. 2, or have a different configuration, for example a communication unit for information interaction with other devices. The components shown in fig. 2 may be implemented in hardware, software, or a combination thereof.
Based on this application scenario, a method for measuring the mixed-picture delay is introduced below.
Referring to fig. 3, the method for measuring mixed-picture delay disclosed in the embodiment of the present invention includes:
S301, the first client generates first video data.
The first video data generated by the first client carries timing data. The timing data times the playing of the first video data and can be understood as the playing time of the first video data. After the first client generates the first video data, it uploads the first video data to the server 10.
It should be noted that the first client may generate the first video data according to the video content it needs to play. In this case, the timing data may be added manually at the first client; for example, during the live broadcast the anchor opens any online stopwatch on a web page as the timing reference, and adds it to the video data by way of a screen capture. One way of adding the online stopwatch to the video data is shown in fig. 4, but the method is not limited to that shown in fig. 4. After the online stopwatch is added, the video data carrying the online stopwatch is transmitted to the server 10.
It should be noted that, whether before or after the product goes online, the tester or expert adding the timing data may select any position in the video image at which to add it.
It should be noted that timing is not limited to an online stopwatch; different timing tools may be used for different application scenarios, and the present invention is not limited herein.
Optionally, in another embodiment of the present invention, as shown in fig. 5, another implementation manner of step S301 includes:
S501, second video data generated by a second client is obtained.
Wherein the second video data carries timing data.
In a specific implementation of this embodiment, the second video data generated by the second client may be obtained directly, or may be obtained via the server 10.
S502, while the second video data is playing on the display screen of the first client, a video playing image of the display screen of the first client is captured in real time.
Specifically, while the second video data is playing on the display screen of the first client, every frame of the video playing image on the display screen may be captured continuously using the screen capture function of the first client.
S503, the first video data is generated from the video playing images captured in real time.
Specifically, the timing data in the first video data may arise as follows: when the anchor (i.e., the first client) connects to another anchor (i.e., the second client) and pulls that anchor's video data, if that video data contains timing data, the captured video playing images containing the timing data are combined to form the first video data.
It should be noted that, if there is more than one other anchor, the video data of one anchor is selected as the reference stream; all anchors simultaneously acquire the timing data in the reference stream, capture the video playing images containing that timing data, and combine them to form the first video data.
Optionally, in another embodiment of the present invention, as shown in fig. 6, another implementation manner of step S301 may include:
S601, second video data generated by a second client is obtained.
Wherein the second video data carries timing data.
The second video data generated by the second client may be obtained directly, or via the server 10.
S602, while the second video data is playing on the display screen of the first client, video data shot by the front camera of the first client is obtained.
The front camera of the first client shoots the second video data playing on the display screen of the first client.
In a specific implementation of this embodiment, if the first client has no screenshot function, a mirror facing the screen of the first client can reflect the second video data, and the front camera of the first client is then turned on to shoot the second video data in the mirror in real time.
S302, the second client generates second video data.
As in step S301, the second video data generated by the second client also carries timing data, and the timing data carried in the first video data and the second video data is synchronized; that is, the first client and the second client time synchronously. The second client likewise uploads the second video data to the server 10.
It should be noted that timing is not limited to an online stopwatch; different timing tools may be used for different application scenarios, and the present invention is not limited herein.
In a specific implementation of this embodiment, the first client and the second client may both be understood as the live broadcast initiating terminal 20 in the application scenario. The first video data and the second video data are used to generate a mixed-picture stream; the first video data carries timing data synchronized with each second video data, and the timing data of the first video data and of the second video data is used to calculate the mixed-picture delay between them.
It can be understood that the single first client and single second client of this embodiment are used only for illustration in the testing stage before the product goes online; in actual application after the product goes online, there may be a plurality of first clients and a plurality of second clients, but the specific implementation is unchanged.
S303, the server acquires the timing data in each video data of the mixed-picture stream.
The server 10 receives the first video data and the second video data and uses them to generate a mixed-picture stream. Since both the first video data and the second video data carry timing data, the video data of the mixed-picture stream also carries timing data.
After the server generates the mixed-picture stream, in order to calculate the mixed-picture delay between the video data in the stream, it acquires the timing data in each video data of the mixed-picture stream.
In this embodiment, first video data generated by a first client and second video data generated by a second client are taken as an example. It is understood that the mixed-picture stream may also include more than two pieces of video data; that is, the server 10 may receive video data uploaded by more than two clients, generate the mixed-picture stream from the received video data, and measure the mixed-picture delay between any two pieces of video data in the stream.
Optionally, in another embodiment of the present invention, as shown in fig. 7, an implementation manner of step S303 includes:
S701, multiple frames of images in the mixed-picture stream are intercepted.
Specifically, the server 10 intercepts frames of the mixed-picture stream whose mixed-picture delay is to be calculated, where each frame image includes the video data and the timing data.
It should be noted that when capturing images from the mixed-picture stream, a single frame may suffice, but capturing images containing the timing data across multiple frames makes the subsequent mixed-picture delay calculation more accurate.
S702, each intercepted frame image of the mixed-picture stream is identified, and the timing data corresponding to each video data of the mixed-picture stream in each frame image is obtained.
In identifying each intercepted frame image of the mixed-picture stream, the timing data in each frame image may be extracted using computer image recognition technology; other methods may likewise be used to identify and extract the timing data, which is not limited herein.
Specifically, since the mixed-picture stream is a video stream in which a plurality of video data are synthesized frame by frame according to a predetermined layout, the mixed-picture stream can be analyzed according to that layout to determine the position of each video data within it.
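As an illustration, the layout analysis just described can be sketched as follows. This is a minimal sketch under the assumption of a hypothetical equal-width, side-by-side layout; the function name and the layout rule are assumptions, not part of the claimed method.

```python
def layout_regions(canvas_w, canvas_h, n_streams):
    """Split a mixed-picture canvas into equal side-by-side regions,
    one per constituent stream (hypothetical horizontal layout)."""
    slot_w = canvas_w // n_streams
    return [
        {"stream": i, "x": i * slot_w, "y": 0, "w": slot_w, "h": canvas_h}
        for i in range(n_streams)
    ]

# A 1280x720 mixed picture composed of two streams: each stream's
# timing data is then searched for only inside its own region.
regions = layout_regions(1280, 720, 2)
```

In practice the layout would be whatever the mixing server actually uses (grid, picture-in-picture, etc.); the point is only that a known layout turns timing-data extraction into a per-region search.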
It should be noted that, during testing, the tester selects a position in the image when adding the timing data; therefore, after capturing multiple frames from the mixed-picture stream, the timing data can be located at the corresponding position in each image.
Alternatively, the content of each frame image of the mixed-picture stream can be recognized to determine the position of the timing data, the image at that position intercepted, and the timing data corresponding to each video data of the mixed-picture stream in each frame image obtained.
S304, the server calculates the mixed-picture delay of the first video data and the second video data using the timing data in the first video data and the timing data in the second video data.
The first video data and the second video data are any two video data in the mixed-picture stream; the mixed-picture stream may be formed of a plurality of video data.
It should be noted that this embodiment merely explains the method of calculating the mixed-picture delay; it is not limited to two pieces of video data, and the mixed-picture delay of more pieces of video data may be calculated at the same time.
It should be noted that steps S303 and S304 may be executed by the server 10, the live broadcast initiating terminal 20, or the live broadcast receiving terminal 30.
Optionally, in another embodiment of the present invention, as shown in fig. 8, an implementation manner of step S304 includes:
S801, the initial mixed-picture delay of the first video data and the second video data in each frame image is calculated using the timing data corresponding to the first video data and the second video data in that frame image.
Specifically, as shown in fig. 9, image data of the same frame of the first video data and the second video data is obtained, in which the timing data in the first video data reads 11.825 s and the timing data in the second video data reads 11.696 s. Taking the timing data in the first video data as the reference, the initial mixed-picture delay of the first video data and the second video data is 11.825 s - 11.696 s = 0.129 s.
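The initial-delay computation above is simply the difference of the two stopwatch readings visible in the same mixed-picture frame. A minimal sketch (the function name is an assumption):

```python
def initial_delay(reference_time, other_time):
    """Initial mixed-picture delay of one stream relative to the
    reference stream, from the stopwatch readings (in seconds)
    visible in the same mixed-picture frame."""
    return round(reference_time - other_time, 3)

# Readings as in fig. 9: 11.825 s in the first stream,
# 11.696 s in the second, giving a 0.129 s initial delay.
delay = initial_delay(11.825, 11.696)
```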
In a specific implementation of the embodiment of the present invention, multiple frames of the image data shown in fig. 9 may be obtained, in which the timing data changes over time, and a plurality of initial mixed-picture delays are obtained by the above method.
Specifically, when the mixed-picture stream contains more than two video data, taking three as an example as shown in fig. 10, the timing data of the first video data reads 52.181 s, the second 52.096 s, and the third 52.138 s. Since the images of the second and third video data contain the image of the first video data, their timing data is obtained through the first video data, so the initial mixed-picture delays are calculated with the timing data of the first video data as the reference. The initial mixed-picture delay is then calculated as in the above embodiment: that of the first and second video data is 52.181 s - 52.096 s = 0.085 s, and that of the first and third video data is 52.181 s - 52.138 s = 0.043 s.
In a specific implementation of the embodiment of the present invention, multiple frames of the image data shown in fig. 10 may be obtained, in which the timing data changes over time, and a plurality of initial mixed-picture delays are obtained by the above method.
It should be noted that calculating the initial mixed-picture delay is not limited to a simple difference; the results of the difference calculation may also be weighted according to preset weights, and the like, which is not limited herein.
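The text leaves the weighting scheme open, so the following is only one possible reading: weight each frame's difference result and normalize by the total weight. The function name and the particular scheme are assumptions.

```python
def weighted_delay(per_frame_delays, weights):
    """One possible weighting scheme (an assumption): a weighted
    average of the per-frame difference results, normalized by
    the total weight."""
    total = sum(weights)
    return sum(d * w for d, w in zip(per_frame_delays, weights)) / total

# Weight later frames more heavily than earlier ones:
w = weighted_delay([0.12, 0.13, 0.14], [1, 2, 3])
```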
S802, the initial mixed-picture delays of the first video data and the second video data across the frame images are averaged to obtain the mixed-picture delay of the first video data and the second video data.
It should be noted that, if only one frame image of the first and second video data is available, the initial mixed-picture delay is itself the mixed-picture delay of the first and second video data. If multiple frame images are available, averaging the initial mixed-picture delays of each frame makes the resulting mixed-picture delay more accurate; however, if the initial mixed-picture delays are found to fluctuate widely during the calculation, warning information is sent to technicians or an expert group so that the system can be analyzed after the mixed-picture delay is calculated. For example, suppose the preset mixed-picture delay is 0.6 s, the initial mixed-picture delay of the first frame image is 1 s, and that of the second frame image is 0.1 s. The mixed-picture delay obtained by averaging them is 0.55 s, which is smaller than the preset mixed-picture delay; nevertheless, because the two initial delays are 1 s and 0.1 s, the system still needs to be analyzed.
Therefore, in the implementation of the embodiment of the present invention, before the initial mixed-picture delays of the first and second video data in each frame image are averaged to obtain the mixed-picture delay, they must be preliminarily analyzed: the fluctuation of the initial mixed-picture delays across the frame images is analyzed and judged. When the fluctuation is smaller than a preset fluctuation value, the initial mixed-picture delays of each frame image may be averaged to finally obtain the mixed-picture delay of the first and second video data; otherwise, warning information is sent directly to a technician or expert group to analyze the system.
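The averaging-with-fluctuation-check procedure above can be sketched as follows. The 0.5 s default threshold and the max-minus-min spread measure are assumptions; the text only requires that some fluctuation check gate the averaging step.

```python
def mixed_picture_delay(initial_delays, max_fluctuation=0.5):
    """Average the per-frame initial mixed-picture delays, but flag
    the system for analysis when their spread exceeds a preset
    fluctuation value (threshold and spread measure are assumed)."""
    spread = max(initial_delays) - min(initial_delays)
    if spread >= max_fluctuation:
        return None, "warn: fluctuation too large, send for analysis"
    return sum(initial_delays) / len(initial_delays), "ok"

# Stable per-frame measurements average cleanly:
d, status = mixed_picture_delay([0.12, 0.13, 0.14])
# The example from the text: 1 s and 0.1 s average to 0.55 s, yet
# the 0.9 s spread still triggers a warning:
d2, status2 = mixed_picture_delay([1.0, 0.1])
```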
According to the above scheme, the method for measuring mixed-picture delay provided by the present invention acquires the timing data in each video data of the mixed-picture stream, where the mixed-picture stream comprises a plurality of synchronously timed video data, and calculates the mixed-picture delay of the first video data and the second video data using the timing data in each, where the first and second video data are any two video data in the mixed-picture stream. The aim of testing the mixed-picture delay of multi-user interactive live broadcast is thereby achieved.
Another embodiment of the present invention provides a device for measuring mixed-picture delay, as shown in fig. 11, including:
an acquisition unit 1101, configured to acquire the timing data in each video data of the mixed-picture stream.
The mixed-picture stream comprises a plurality of synchronously timed video data. The plurality of video data are sent by the plurality of live broadcast initiating terminals 20; in the application scenario, they are the content the anchors want to broadcast, which may come from screenshots, cameras, or media files.
a calculating unit 1102, configured to calculate the mixed-picture delay of the first video data and the second video data using the timing data in the first video data and the timing data in the second video data.
The first video data and the second video data are any two video data in the mixed-picture stream.
For the specific working process of the units disclosed in the above embodiment of the present invention, reference may be made to the corresponding method embodiment shown in fig. 3, which is not described again here.
Optionally, in another embodiment of the present invention, an implementation manner of the obtaining unit 1101, as shown in fig. 12, includes:
an intercepting unit 1201, configured to intercept multiple frames of images in the mixed-picture stream.
an identifying unit 1202, configured to identify each intercepted frame image of the mixed-picture stream and obtain the timing data corresponding to each video data of the mixed-picture stream in each frame image.
For the specific working process of the units disclosed in the above embodiment of the present invention, reference may be made to the corresponding method embodiment shown in fig. 7, which is not described again here.
Optionally, in another embodiment of the present invention, an implementation manner of the computing unit 1102 includes:
a calculating subunit, configured to calculate the initial mixed-picture delay of the first video data and the second video data in each frame image using the timing data corresponding to the first and second video data in that frame image.
The calculating subunit is further configured to average the initial mixed-picture delays of the first video data and the second video data across the frame images to obtain the mixed-picture delay of the first video data and the second video data.
For the specific working process of the units disclosed in the above embodiments of the present invention, reference may be made to the corresponding method embodiments shown in fig. 8, fig. 9, and fig. 10, which are not described again here.
According to the above scheme, the device for measuring mixed-picture delay provided by the present invention acquires, through the acquisition unit 1101, the timing data in each video data of the mixed-picture stream, where the mixed-picture stream comprises a plurality of synchronously timed video data; the calculating unit 1102 then calculates the mixed-picture delay of the first video data and the second video data from the timing data in each, where the first and second video data are any two video data in the mixed-picture stream. The aim of testing the mixed-picture delay of multi-user interactive live broadcast is thereby achieved.
Another embodiment of the present invention provides a client, as shown in fig. 13, including:
a generating unit 1301 is configured to generate first video data.
The first video data is used together with at least one second video data to generate a mixed-picture stream. The first video data carries timing data synchronized with each second video data, and the timing data of the first video data and of any second video data is used to calculate the mixed-picture delay between them.
For the specific working process of the unit disclosed in the above embodiment of the present invention, reference may be made to the content of the corresponding method embodiment, which is not described herein again.
Optionally, in another embodiment of the present invention, an implementation manner of the generating unit 1301, as shown in fig. 14, includes:
a second video data obtaining unit 1401, configured to obtain second video data generated by a second client.
Wherein the second video data carries timing data.
a video playing image capturing unit 1402, configured to capture, in real time, a video playing image of the display screen of the first client while the display screen of the first client plays the second video data.
a generating subunit 1403, configured to generate the first video data from the video playing images captured in real time.
For the specific working process of the units disclosed in the above embodiment of the present invention, reference may be made to the corresponding method embodiment shown in fig. 5, which is not described again here.
Optionally, in another embodiment of the present invention, as shown in fig. 15, another implementation manner of the generating unit 1301 includes:
a second video data acquisition unit 1501 is configured to acquire second video data generated by a second client.
Wherein the second video data carries timing data.
It should be noted that the functions of the second video data acquisition unit 1501 and the second video data obtaining unit 1401 are the same, and are not described again here.
a shooting unit 1502, configured to obtain video data shot by the front camera of the first client while the second video data plays on the display screen of the first client.
The front camera of the first client is used to shoot the second video data playing on the display screen of the first client.
For the specific working process of the units disclosed in the above embodiment of the present invention, reference may be made to the corresponding method embodiment shown in fig. 6, which is not described again here.
According to the above scheme, the client generates the first video data using the generating unit. The first video data is used together with at least one second video data to generate a mixed-picture stream; it carries timing data synchronized with each second video data, and the timing data of the first video data and of any second video data is subsequently sent to the mixed-picture delay measuring device to calculate the mixed-picture delay between them. The aim of testing the mixed-picture delay of multi-user interactive live broadcast is thereby achieved.
Another embodiment of the present invention provides a computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method of any one of the above embodiments.
Another embodiment of the present invention provides a system for measuring mixed-picture delay, including:
a device for measuring mixed-picture delay and a client.
The device for measuring mixed-picture delay is used to execute the method of any of the above embodiments shown in fig. 3, 7, and 8; the client is configured to perform the method of any of fig. 5 and 6 in the above embodiments.
In the above embodiments of the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus and method embodiments described above are illustrative only, as the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present disclosure may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part. The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a live broadcast device, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.