CN109714634B - Decoding synchronization method, device and equipment for live data stream

Info

Publication number
CN109714634B
Authority
CN
China
Prior art keywords
audio
frame
time
video
decoding
Prior art date
Legal status
Active
Application number
CN201811637340.XA
Other languages
Chinese (zh)
Other versions
CN109714634A (en)
Inventor
李斌
王玉伟
Current Assignee
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority to CN201811637340.XA priority Critical patent/CN109714634B/en
Publication of CN109714634A publication Critical patent/CN109714634A/en
Application granted granted Critical
Publication of CN109714634B publication Critical patent/CN109714634B/en

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

This specification provides a method, an apparatus, and a device for decoding and synchronizing a live data stream. The method includes: calculating a maximum timestamp according to specified parameters; comparing the timestamp of each non-first-frame audio frame to be added to the audio buffer queue with the maximum timestamp, storing the audio frame in the audio buffer queue if its timestamp is not greater than the maximum timestamp, and discarding it if its timestamp is greater than the maximum timestamp; and acquiring audio frames from the audio buffer queue and decoding them to obtain audio data. The method dynamically adjusts the upper time limit on the timestamps of the audio frames held in the audio buffer queue and discards audio frames whose timestamps exceed that limit, shrinking the audio buffer queue; the video data is then synchronized to the audio data, so that live video delay is controlled accurately.

Description

Decoding synchronization method, device and equipment for live data stream
Technical Field
The present disclosure relates to the field of internet live broadcast technologies, and in particular, to a method, an apparatus, and a device for synchronizing decoding of a live broadcast data stream.
Background
Live streaming technologies are increasingly popular but face growing challenges. The end-to-end live broadcast process comprises the following key steps: the anchor client pushes the locally captured video stream to the server; the server caches and forwards the video stream; and the viewer client pulls the cached video stream from the server and plays it.
After downloading the video stream from the network server, the viewer client must decode the audio and video data in the stream before playing it, so a certain network delay is inevitable. Network delay here refers to the time difference between capture at the anchor client and playback at the viewer client. It can be divided into the delay of the viewer client relative to the stream played at the anchor client, the delay of transmitting the stream between a client and a server, and the delay of transmitting the stream between the CDN (content delivery network) servers that act as the server side. The delay is caused mainly by data buffered at the viewer client. In the prior art, the viewer client generally adjusts the buffer size in real time during playback according to the continuously monitored network conditions to keep the delay low. However, because the player side is typically divided into several buffers, and the only buffer the player exposes to application control is the post-demultiplexing buffer, the total buffer size during playback is still difficult to control precisely; that is, live video delay is difficult to control accurately.
Disclosure of Invention
In order to overcome the problems in the related art, the present specification provides a method, an apparatus, and a device for decoding and synchronizing a live data stream.
According to a first aspect of embodiments of the present specification, there is provided a method for synchronizing decoding of a live data stream, the method including:
calculating a maximum time stamp according to a specified parameter, wherein the specified parameter comprises a decoding time delay of an audio frame and an output time delay of decoded audio data, and the maximum time stamp represents a maximum value of the time stamps of the audio frames which can be cached by the audio cache queue;
comparing the time stamp of the non-first frame audio frame to be added into the audio cache queue with the maximum time stamp, if the time stamp of the non-first frame audio frame is not greater than the maximum time stamp, storing the non-first frame audio frame in the audio cache queue, and if the time stamp of the non-first frame audio frame is greater than the maximum time stamp, discarding the non-first frame audio frame;
and acquiring audio frames from the audio buffer queue, and decoding to acquire audio data.
According to a second aspect of embodiments of the present specification, there is provided a decoding synchronization apparatus for a live data stream, including:
the audio buffer module comprises an audio buffer queue for buffering audio frames;
the computing module is used for computing a maximum timestamp according to a specified parameter, wherein the specified parameter comprises decoding delay of an audio frame and output delay of decoded audio data, and the maximum timestamp represents the maximum value of timestamps of the audio frames which can be cached by the audio cache queue;
a judging module, configured to compare a timestamp of a non-first frame audio frame to be added to the audio buffer queue with the maximum timestamp, if the timestamp of the non-first frame audio frame is not greater than the maximum timestamp, store the non-first frame audio frame in the audio buffer queue, and if the timestamp of the non-first frame audio frame is greater than the maximum timestamp, discard the non-first frame audio frame;
and the audio decoding module is used for acquiring the audio frames from the audio buffer queue and decoding the audio frames to acquire audio data.
According to a third aspect of embodiments of the present specification, there is provided a decoding synchronization apparatus for a live data stream, including: a processor and a memory;
the memory is to store executable computer instructions;
wherein the processor when executing the computer instructions implements the steps of:
calculating a maximum time stamp according to a specified parameter, wherein the specified parameter comprises a decoding time delay of an audio frame and an output time delay of decoded audio data, and the maximum time stamp represents a maximum value of the time stamps of the audio frames which can be cached by the audio cache queue;
comparing the time stamp of the non-first frame audio frame to be added into an audio buffer queue with the maximum time stamp, if the time stamp of the non-first frame audio frame is not greater than the maximum time stamp, storing the non-first frame audio frame in the audio buffer queue, and if the time stamp of the non-first frame audio frame is greater than the maximum time stamp, discarding the non-first frame audio frame;
and acquiring audio frames from the audio buffer queue, and decoding to acquire audio data.
The technical scheme provided by the embodiment of the specification can have the following beneficial effects:
In the embodiments of the present specification, a frame-dropping strategy for audio frames in a live data stream is provided, and a decoding synchronization method for the live data stream is designed around it. Compared with the prior art, the frame-dropping strategy takes into account the delay of the audio output module and the delay of the audio decoding module. To compensate for these two delays, the strategy dynamically adjusts the maximum timestamp the corresponding audio buffer queue will accept, that is, the upper time limit on the timestamps of the audio frames held in the queue. If the timestamp of an audio frame about to enter the audio buffer queue is greater than this maximum timestamp, admitting the frame would enlarge the live broadcast delay, so the frame is discarded; live video delay is thereby controlled accurately.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the specification.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present specification and together with the description, serve to explain the principles of the specification.
Fig. 1 is a connection diagram of modules in a client player according to an exemplary embodiment in the prior art.
Fig. 2 is a flow chart illustrating a method for decoding synchronization of a live data stream according to an exemplary embodiment.
Fig. 3 is a flow diagram illustrating an audio frame dropping strategy in a live data stream according to an example embodiment.
Fig. 4 is a schematic diagram of a synchronization strategy adopted by an audio-video synchronization module according to an exemplary embodiment.
Fig. 5 is a logic block diagram of a decoding synchronization apparatus for a live data stream according to an exemplary embodiment of the present specification.
Fig. 6 is a logic block diagram of a decoding synchronization apparatus for a live data stream according to an exemplary embodiment of the present specification.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the accompanying drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all embodiments consistent with the present specification; rather, they are merely examples of apparatus and methods consistent with certain aspects of the specification, as detailed in the appended claims.
The terminology used in the description herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the description. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information without departing from the scope of the present specification. Depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
The following provides a detailed description of examples of the present specification.
As shown in fig. 1, fig. 1 is a connection diagram of some constituent modules in a client player. As can be seen, the client player generally includes: a network protocol analyzing and downloading module 101, a download data buffer queue 102, a demultiplexing module 103, an audio buffer queue 1041 and a video buffer queue 1042, an audio decoding module 1051 and a video decoding module 1052, an audio and video synchronizing module 106, an audio output module 1071 and a video output module 1072.
The network protocol parsing and downloading module 101 is responsible for parsing the network protocol and downloading the live data stream, and it typically stores the downloaded, not-yet-demultiplexed live data stream in a download data buffer queue. A live data stream, however, can usually be parsed while it is being downloaded, and by matching the parsing and downloading speed of module 101 to the demultiplexing speed of the live data stream, the amount of data held in the download data buffer queue 102 can be reduced to a negligible level. In one embodiment of the present invention, the buffer size of the download data buffer queue 102 is accordingly treated as negligible.
The demultiplexing module 103 is responsible for decapsulating the downloaded live data stream into separated data such as audio frames, video frames, or subtitle streams. Because the demultiplexing speed of the live data stream and the audio or video decoding speed generally do not match exactly, at least one audio buffer queue 1041 or video buffer queue 1042 usually follows the demultiplexing module 103 to store frames that have not yet been decoded. In this embodiment, the audio buffer queue 1041 or the video buffer queue 1042 may be an independent module, or may belong to the download data buffer queue 102 or the demultiplexing module 103. In general, the audio buffer queue 1041 or the video buffer queue 1042 defines an upper limit on the number of buffered audio or video frames, on the buffer size, or on the timestamps of the buffered frames.
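As an illustration only (not part of the patent), a minimal Python sketch of such a bounded post-demultiplexing queue, capped both by frame count and by a timestamp upper limit, might look like this; all names are hypothetical:

```python
from collections import deque

class BoundedFrameQueue:
    """A demultiplexed-frame buffer with a frame-count cap and a timestamp cap."""

    def __init__(self, max_frames=100, max_pts=None):
        self.frames = deque()
        self.max_frames = max_frames  # upper limit on the number of buffered frames
        self.max_pts = max_pts        # upper limit on buffered timestamps (may be dynamic)

    def try_push(self, frame):
        """Accept a frame only while both caps are respected; otherwise reject it."""
        if len(self.frames) >= self.max_frames:
            return False
        if self.max_pts is not None and frame["pts"] > self.max_pts:
            return False
        self.frames.append(frame)
        return True

    def pop(self):
        """Hand the oldest buffered frame to the decoder."""
        return self.frames.popleft() if self.frames else None
```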
The audio decoding module 1051 and the video decoding module 1052 are responsible for decoding compressed audio frames and video frames, respectively. Because of the requirements of the decoding algorithm, the audio decoding module 1051 or the video decoding module 1052 usually keeps several frames of data buffered; a buffer queue inside the decoding module holds these frames, so the length of the buffer queue in the audio decoding module 1051 or the video decoding module 1052 represents the decoding delay.
The audio and video synchronization module 106 is configured to synchronize the decoded audio data and video data, which are then sent to the audio output module 1071 and the video output module 1072, respectively.
The audio output module 1071 and the video output module 1072 are responsible for outputting audio data and video data, respectively, at the live broadcast terminal, that is, the playing terminal. For technical reasons, the audio output module and the video output module may also buffer data: each needs a buffer queue to hold that data, and the length of the buffer queue in the audio output module 1071 represents the output delay of the audio data.
Taking a video output terminal supporting MEMC (Motion Estimation and Motion Compensation, a motion picture quality compensation technology used in LCD televisions) as an example, the MEMC algorithm must compute over multiple frames of video data, so the video data buffer cannot be removed. Similarly, the audio output module may also buffer data, depending on the specific solution provider.
Existing players usually consider only the data buffered in the post-demultiplexing audio buffer queue 1041 or video buffer queue 1042. But because the upper limit these queues place on the number of buffered frames, or on the timestamps of the buffered frames, is fixed, the live broadcast delay of the player varies widely, and the player still cannot control live video delay accurately.
In the invention, to solve the problem that live video delay control in the prior art is still not accurate enough, the output delay of the audio output module 1071 and the decoding delay of the audio decoding module 1051 are taken into account, and a frame-dropping strategy for audio frames in the live data stream is designed. Based on this strategy, a decoding synchronization method for audio frames in a live data stream is provided. As shown in fig. 2, the method includes steps S202 to S206:
s202, calculating a maximum timestamp according to specified parameters, wherein the specified parameters comprise decoding delay of an audio frame and output delay of decoded audio data, and the maximum timestamp represents the maximum value of timestamps of audio frames which can be cached in the audio cache queue;
s204, comparing the time stamp of the non-first frame audio frame to be added into the audio cache queue with the maximum time stamp, if the time stamp of the non-first frame audio frame is not greater than the maximum time stamp, storing the non-first frame audio frame in the audio cache queue, and if the time stamp of the non-first frame audio frame is greater than the maximum time stamp, discarding the non-first frame audio frame if the time stamp exceeds the maximum time stamp;
s206, obtaining the audio frame from the audio buffer queue, and decoding to obtain audio data.
In this embodiment, after the user starts the live player and a short delay passes, the user sees the video content corresponding to the first video frame of playback; the audio frame corresponding to this first video frame is the first-frame audio frame. The system time may be UTC time, also called universal time. Each audio or video frame carries a timestamp, and sequential playback is achieved according to these timestamps.
When live playback of the video content starts, the player establishes the internal audio buffer queue 1041 and video buffer queue 1042, the demultiplexed audio frames and video frames are stored in them respectively, and the audio decoding module 1051 then takes audio frames out of the audio buffer queue and decodes them. However, the live data stream is demultiplexed faster than the audio decoding module decodes, so audio frames not yet decoded accumulate in the audio buffer queue 1041. To prevent audio frames from accumulating continuously in the audio buffer queue 1041 and the delay from growing with them, an upper limit is usually set on the number of audio frames it may hold; once the queue reaches this limit, audio frames about to be added to the audio buffer queue 1041 are discarded to bound the delay. Another approach sets a fixed upper limit on the timestamps of the audio frames buffered in the audio buffer queue 1041: when the timestamp of an audio frame about to be added exceeds that limit, the frame is likewise discarded. For example, if the timestamp of the audio frame to be added is 1200 and the fixed upper limit of the queue is 1000, the frame is discarded to reduce delay accumulation.
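A minimal Python sketch of this fixed-threshold policy (the function name is hypothetical):

```python
def should_drop_fixed(frame_pts: int, fixed_limit: int) -> bool:
    """Fixed-threshold policy: drop any audio frame whose timestamp
    exceeds a static upper limit set on the audio buffer queue."""
    return frame_pts > fixed_limit

# The example above: a frame stamped 1200 against a fixed limit of 1000 is dropped.
assert should_drop_fixed(1200, 1000)
assert not should_drop_fixed(900, 1000)
```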
In this embodiment, because both the audio decoding module 1051 and the audio output module 1071 introduce delay, the fixed upper limit described above is not appropriate. This embodiment therefore calculates a maximum timestamp from specified parameters, including the decoding delay and the output delay; the maximum timestamp represents the largest timestamp of any audio frame the audio buffer queue may hold. The specified parameters include: the decoding delay of the audio frames, the output delay of the decoded audio data, the recorded system time and timestamp corresponding to the first-frame audio frame, and a preset maximum playing delay allowed by the user. The maximum playing delay allowed by the user covers the delay caused by the size of the audio buffer queue, the audio decoding delay, the audio output delay, and so on; that is, it represents the upper threshold of the total playing delay acceptable to the user. The first-frame audio frame corresponds to the video frame at the start of playback. Adjusting the audio buffer queue by a maximum timestamp set this way makes the timestamp upper limit of the audio buffer queue 1041 change dynamically; in other words, the size of the audio buffer queue is adjusted dynamically with the decoding delay of the audio decoding module and the output delay of the audio output module.
In this embodiment, the maximum playing delay allowed by the user, the timestamps and the maximum timestamp, the decoding delay, the output delay, and the system time all use the same unit, which may be milliseconds or microseconds.
In one embodiment, the maximum timestamp may be determined by the following steps:
acquiring the time difference between the system time at which the first-frame audio frame started playing and the current system time; determining from it the timestamp of the audio frame corresponding to the currently played video frame; adding to that timestamp the preset maximum playing delay allowed by the user; and subtracting the sum of the decoding delay and the output delay to obtain the maximum timestamp.
The overall logic of calculating the maximum timestamp through the above steps is as follows. The timestamp of the audio frame currently being played is computed from the time elapsed since playback began (the video playing time) plus the timestamp of the first audio frame at the start of playback. A live player generally limits the live delay it allows; to control this delay, a maximum playing delay allowed by the user is preset. If the decoding delay of the audio decoding module 1051 and the output delay of the audio output module 1071 were ignored, the maximum timestamp of the audio buffer queue could be obtained by simply adding the maximum playing delay allowed by the user to the timestamp of the audio frame being played. In this embodiment, however, to compensate for the decoding delay of the audio decoding module and the output delay of the audio output module, the sum of those two delays is further subtracted from that value to obtain the maximum timestamp.
In this embodiment, after the maximum timestamp is determined, each non-first-frame audio frame about to be added to the audio buffer queue can be evaluated to decide whether it enters the audio buffer queue 1041: according to the result, a non-first-frame audio frame whose timestamp does not exceed the maximum timestamp is stored in the audio buffer queue, and one whose timestamp exceeds the maximum timestamp is discarded. For example, if the timestamp of the audio frame to be added is 1010 while the maximum timestamp computed at that moment is 1000, the frame is discarded to reduce delay accumulation.
To illustrate the frame-dropping decision clearly, an embodiment provides a flowchart of the audio frame-dropping judgment, as shown in fig. 3. The specific steps are as follows:
step 301: judging whether the audio frame (audio_frame) to be added to the audio buffer queue is the first audio frame, corresponding to the first video frame at the start of playback; if yes, go to step 302; if not, go to step 303;
step 302: recording the system time (system_time_audio_start) and the timestamp (audio_start_time) corresponding to the first frame, then go to step 306;
step 303: acquiring the real-time buffer size (audio_decoder_buffer) of the audio decoding module and the real-time delay (audio_output_delay) of the audio output module, then go to step 304;
step 304: calculating the maximum timestamp (max_pts) of the audio buffer queue at the current system time, then go to step 305;
step 305: comparing the timestamp of the audio frame with the maximum timestamp; if the timestamp of the audio frame is greater than the maximum timestamp, go to step 307; if it is not greater than the maximum timestamp, go to step 306;
step 306: adding the audio frame (audio_frame) to the audio buffer queue;
step 307: discarding the audio frame (audio_frame).
The maximum timestamp is calculated according to the following formula:

max_pts = (current_system_time - system_time_audio_start) + audio_start_time + threshold_upper_limit - (audio_decoder_buffer + audio_output_delay)

where max_pts is the maximum timestamp; current_system_time is the system time at the current moment of playback; system_time_audio_start is the system time corresponding to the first audio frame at the start of playback; audio_start_time is the timestamp corresponding to the first audio frame at the start of playback; threshold_upper_limit is the maximum playing delay allowed by the user; audio_decoder_buffer is the decoding delay of the audio decoding module; and audio_output_delay is the output delay of the audio output module.
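Combining this formula with the flow of Fig. 3, a minimal Python sketch (all function and variable names are hypothetical; in practice audio_decoder_buffer and audio_output_delay would be read from the decoder and output modules in real time) might look like:

```python
import time

def compute_max_pts(current_system_time, system_time_audio_start, audio_start_time,
                    threshold_upper_limit, audio_decoder_buffer, audio_output_delay):
    """Direct transcription of the formula above; all quantities share one unit (ms)."""
    return ((current_system_time - system_time_audio_start)
            + audio_start_time
            + threshold_upper_limit
            - (audio_decoder_buffer + audio_output_delay))

def on_audio_frame(frame_pts, state, queue, threshold_upper_limit,
                   audio_decoder_buffer, audio_output_delay):
    """Steps 301-307 of Fig. 3: decide whether a demultiplexed audio frame
    enters the audio buffer queue. Returns True if the frame was queued."""
    now = time.monotonic() * 1000.0                  # current_system_time in ms
    if "audio_start_time" not in state:              # step 301 -> 302: first frame
        state["system_time_audio_start"] = now
        state["audio_start_time"] = frame_pts
        queue.append(frame_pts)                      # step 306
        return True
    limit = compute_max_pts(now,                     # steps 303 and 304
                            state["system_time_audio_start"],
                            state["audio_start_time"],
                            threshold_upper_limit,
                            audio_decoder_buffer,    # real-time decoder buffering delay
                            audio_output_delay)      # real-time output delay
    if frame_pts > limit:                            # step 305
        return False                                 # step 307: discard the frame
    queue.append(frame_pts)                          # step 306
    return True
```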
In an embodiment, after a non-first-frame audio frame has been stored in the audio buffer queue according to the above judgment, the audio frames already in the audio buffer queue may also be traversed, and any frame whose timestamp exceeds the maximum timestamp may be discarded. In this case, frame-dropping judgment is applied to audio frames about to enter the audio buffer queue, and frame-dropping screening is applied to audio frames already in it, shrinking the audio buffer queue a second time. The buffer size of the audio buffer queue is thus controlled dynamically by a maximum timestamp that changes over time, further reducing the delay.
To make the two frame-dropping situations and the calculation of the maximum timestamp easier to follow, suppose the preset maximum playing delay allowed by the user is 100 (timestamps and system times are in ms), the timestamp of the first audio frame at the start of playback is 100, and the system time elapsed since playback began is 900 (so the timestamp of the audio frame corresponding to the currently played video frame is 1000). If the sum of the decoding delay of the decoding module and the output delay of the output module is 40, the maximum timestamp is 1060. An audio frame with timestamp 1058 now arrives; since 1058 is less than the maximum timestamp, it is added to the audio buffer queue. As playback continues, the elapsed system time becomes 910 and the sum of the decoding delay and the output delay becomes 55, so the maximum timestamp becomes 1055. The audio frame arriving at this moment has timestamp 1053; since 1053 is less than 1055, it is added to the audio buffer queue. The queue is then traversed against the maximum timestamp 1055, the audio frame whose timestamp exceeds 1055 is found, namely the frame with timestamp 1058, and that frame is deleted.
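A sketch of this second-pass pruning in Python, reproducing the numbers of the example above (prune_queue is a hypothetical name):

```python
def prune_queue(queue, max_pts):
    """Traverse the audio buffer queue and discard every buffered frame
    whose timestamp exceeds the current maximum timestamp."""
    queue[:] = [pts for pts in queue if pts <= max_pts]

# The frame stamped 1058 entered while the maximum timestamp was 1060;
# after the maximum timestamp shrinks to 1055 and the frame stamped 1053
# is admitted, the traversal deletes the frame stamped 1058.
queue = [1058, 1053]
prune_queue(queue, 1055)
assert queue == [1053]
```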
In this embodiment, after the frame loss processing, the audio frames in the audio buffer queue are taken out and decoded to obtain audio data.
In one embodiment, the method for synchronizing decoding of the live data stream further includes: caching video frames into a video cache queue, and acquiring the video frames from the video cache queue for decoding to obtain video data; and carrying out synchronous processing on the audio data and the video data based on an audio and video synchronization strategy.
In one embodiment, video frames are buffered directly into the video buffer queue without a corresponding frame-dropping process. Thus only audio frames undergo frame dropping, so the timestamps of the audio frames behind the decoded audio data are discontinuous, while the video frames, which undergo no corresponding dropping, yield video data with comparatively continuous timestamps. For this situation, a strategy of synchronizing video to audio is adopted. The strategy specifically comprises:
updating to obtain an audio time axis according to the time stamp corresponding to the decoded audio frame;
judging whether time corresponding to the time stamp of the video frame exists on the audio time axis, and if so, synchronizing the video data corresponding to the video frame to the audio data corresponding to the time;
if the time corresponding to the timestamp of the video frame does not exist on the audio time axis, synchronizing that video data to the audio data corresponding to the nearest later time on the audio time axis.
In this embodiment, the synchronization strategy synchronizes video to audio as follows: the audio time axis is updated according to the timestamps of the decoded audio frames and their audio data; decoded video frames are arranged in timestamp order and placed in a queue; the video rendering process takes the video data of one video frame out of the queue; and whether that video frame needs to be rendered is decided according to whether audio data exists on the audio time axis at the time corresponding to the video frame's timestamp.
As shown in fig. 4, the timestamps of the first frame AFirst and the last frame ALast in the audio buffer queue are aF and aL, respectively, and the timestamps of the first frame VFirst and the last frame VLast in the video buffer queue are vF and vL, respectively. The audio buffer queue also holds the audio frames Audio1, Audio2, Audio3 and Audio4; correspondingly, the video buffer queue holds the video frames Video1, Video2, Video3 and Video4.
If none of the buffered audio frames is discarded after the judgment against the maximum timestamp calculated in this embodiment, the audio frames and video frames are decoded and played in one-to-one correspondence.
another situation is as shown in fig. 4, after the maximum timestamp calculated by this embodiment is determined, 3 Audio frames, i.e., Audio1, Audio2, and Audio3, in the Audio buffer queue are discarded, and after the Audio frames in the Audio buffer queue are decoded, the updated Audio time axis has a jump, i.e., after the playing of the end frame corresponding to the Audio is completed, the Audio data corresponding to the Audio frame Audio4 continues to be played, and the timestamp on the Audio time axis is directly jumped from aL to the timestamp a4 corresponding to the Audio frame Audio 4; however, after the end frame corresponding to the Video is decoded and played synchronously, the Video frame Video1 is also decoded and synchronized, but since the time corresponding to the Video1 does not exist on the Audio time axis, the Video frame Video1 can only be synchronized to the Audio frame Audio4, but since the time stamp v1 of the Video frame Video1 is smaller than the time stamp a4 of the Audio frame Audio4, that is, the Audio output speed on the Audio time axis is ahead of the Video playing speed, the Video data corresponding to the Video frame Video1 needs to be rendered quickly, similarly to Video2 and Video 3; the video frames are synchronized to the audio timeline by fast rendering of the video data of the video frames.
In one embodiment, another audio-video synchronization policy may be further included, where the audio-video synchronization policy includes:
updating an audio time axis according to the time stamp corresponding to the decoded audio frame;
judging whether time corresponding to the time stamp of the video frame exists on the audio time axis, and if the time corresponding to the time stamp of the video frame exists on the audio time axis, synchronizing the video data corresponding to the video frame to the audio data corresponding to the time;
and if the time corresponding to the timestamp of the video frame does not exist on the audio time axis, discarding the video data corresponding to that video frame. Continuing the example above, the three audio frames Audio1, Audio2 and Audio3 in the audio buffer queue are discarded, and the corresponding Video1, Video2 and Video3 in the video buffer queue are likewise discarded after decoding, which keeps the video synchronized to the audio time axis.
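The second strategy differs only in its fallback: a frame with no matching audio time is dropped rather than fast-rendered. A sketch under the same assumptions:

```python
def sync_video_frame_dropping(video_pts, audio_timeline):
    """Second synchronization strategy: render a video frame only when a
    matching time exists on the audio time axis; otherwise discard it."""
    if video_pts in audio_timeline:
        return ("render", video_pts)
    return ("drop", None)

# Same Fig. 4 scenario: Video1 (v1 = 1010) has no audio time left, so it is dropped.
print(sync_video_frame_dropping(1010, [980, 990, 1000, 1040]))  # -> ('drop', None)
```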
As the audio and video synchronization strategies show, the audio time axis is the reference time axis for playback; that is, the size of the audio buffer queue 1041 determines the live broadcast delay. Therefore, by controlling the timestamps of the audio frames about to enter, or already in, the audio buffer queue, the size of the audio buffer queue, and hence the duration of the live broadcast delay, can be controlled accurately.
The embodiments of the live data stream decoding and synchronization method described in this specification are not limited to decoding and synchronizing data streams when a mobile terminal or computer terminal plays a network live broadcast or a television terminal plays a live television broadcast; they can also be applied to any scenario in which audio and video data are buffered, decoded, and synchronized.
Corresponding to the foregoing embodiment of a method for decoding and synchronizing a live data stream, the present specification further provides a device for decoding and synchronizing a live data stream. As shown in fig. 5, the apparatus 500 includes:
the audio buffering module 501 includes an audio buffering queue 1041 for buffering audio frames;
a calculating module 502, configured to calculate a maximum timestamp according to a specified parameter, where the specified parameter includes a decoding delay of an audio frame and an output delay of decoded audio data, and the maximum timestamp represents a maximum value of timestamps of audio frames cacheable by the audio buffer queue;
a determining module 503, configured to compare the timestamp of a non-first-frame audio frame to be added to the audio buffer queue with the maximum timestamp, store the non-first-frame audio frame in the audio buffer queue if its timestamp is not greater than the maximum timestamp, and discard it if its timestamp is greater than the maximum timestamp;
and an audio decoding module 504, configured to obtain an audio frame from the audio buffer queue, and decode the audio frame to obtain audio data.
In this embodiment, the apparatus 500 further includes an obtaining module 505, where the obtaining module 505 is configured to obtain a decoding delay when the audio decoding module decodes the audio frame, and an output delay when the audio output module outputs the audio data after the decoding synchronization.
In this embodiment, the calculating module 502 extracts, from the obtaining module 505, the decoding delay of the audio decoding module 504 when decoding the audio frame and the output delay of the audio output module outputting the audio data after decoding synchronization.
In one embodiment, the apparatus 500 further includes a recording unit 509, which stores the specified parameters, including: the system time and the timestamp corresponding to the first-frame audio frame, and the preset maximum playing delay allowed by the user.
The maximum timestamp is then calculated according to the calculation method of the corresponding method embodiment.
The judgment module 503 obtains the maximum timestamp calculated by the calculation module 502, compares the timestamp corresponding to the non-leading frame audio frame with the maximum timestamp, and determines whether to store the audio frame in the audio buffer queue according to the comparison result.
In one embodiment, after the audio frame of the non-leading frame is stored in the audio buffer queue, the determining module 503 is further configured to traverse the audio frames in the audio buffer queue and discard the audio frames whose timestamps exceed the maximum timestamp.
In one embodiment, the apparatus 500 further comprises: a video cache module 506, including a video cache queue for caching video frames; a video decoding module 507, configured to obtain a video frame from the video buffer queue and decode the video frame to obtain video data; and an audio and video synchronization module 508, configured to perform synchronization processing on the audio data and the video data based on an audio and video synchronization policy.
In one embodiment, the audio and video synchronization module 508 includes:
the time axis updating unit is used for updating the audio time axis according to the time stamp corresponding to the decoded audio frame;
the synchronization unit is used for judging whether time corresponding to the timestamp of the video frame exists on the audio time axis, and if so, synchronizing the video data corresponding to the video frame to the audio data corresponding to that time; if the time corresponding to the timestamp of the video frame does not exist on the audio time axis, synchronizing that video data to the audio data corresponding to the nearest later time on the audio time axis.

In one embodiment, the specific steps by which the calculating module 502 calculates the maximum timestamp are: acquiring the time difference between the system time at which the first-frame audio frame started playing and the current system time; determining the timestamp of the audio frame corresponding to the currently played video frame; adding to that timestamp the preset maximum playing delay allowed by the user; and subtracting the sum of the decoding delay and the output delay to obtain the maximum timestamp.
In an embodiment, the audio and video synchronization module 508 further includes:
the time axis updating unit is used for updating the audio time axis according to the time stamp corresponding to the decoded audio frame;
the synchronization unit is used for judging whether time corresponding to the timestamp of the video frame exists on the audio time axis, and if so, synchronizing the video data corresponding to the video frame to the audio data corresponding to that time; if the time corresponding to the timestamp of the video frame does not exist on the audio time axis, discarding the video data corresponding to that video frame.
The implementation process of the functions and actions of each module in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
The apparatus embodiments of this specification can be applied to a computer device, such as a server or a terminal device. The apparatus embodiments may be implemented by software, by hardware, or by a combination of hardware and software.
In addition, the present specification further provides a decoding synchronization apparatus for live data stream, as shown in fig. 6, the decoding synchronization apparatus 600 includes: a processor 601 and a memory 602;
the memory 602 is used to store executable computer instructions;
the processor 601 is configured to implement the following steps when executing the computer instructions:
calculating a maximum time stamp according to a specified parameter, wherein the specified parameter comprises a decoding time delay of an audio frame and an output time delay of decoded audio data, and the maximum time stamp represents a maximum value of the time stamps of the audio frames which can be cached by the audio cache queue;
comparing the time stamp of the non-first frame audio frame to be added into the audio cache queue with the maximum time stamp, if the time stamp of the non-first frame audio frame is not greater than the maximum time stamp, storing the non-first frame audio frame in the audio cache queue, and if the time stamp of the non-first frame audio frame is greater than the maximum time stamp, discarding the non-first frame audio frame;
and acquiring audio frames from the audio buffer queue, and decoding to acquire audio data.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Other embodiments of the present description will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This specification is intended to cover any variations, uses, or adaptations of the specification following, in general, the principles of the specification and including such departures from the present disclosure as come within known or customary practice within the art to which the specification pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the specification being indicated by the following claims.
It will be understood that the present description is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present description is limited only by the appended claims.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (15)

1. A method for synchronizing decoding of a live data stream, comprising:
calculating a maximum time stamp according to a specified parameter, wherein the specified parameter comprises decoding time delay of an audio frame and output time delay of decoded audio data, and the maximum time stamp represents the maximum value of audio frame time stamps which can be cached in an audio cache queue;
comparing the time stamp of the non-first frame audio frame to be added into the audio cache queue with the maximum time stamp, if the time stamp of the non-first frame audio frame is not greater than the maximum time stamp, storing the non-first frame audio frame in the audio cache queue, and if the time stamp of the non-first frame audio frame is greater than the maximum time stamp, discarding the non-first frame audio frame;
and acquiring audio frames from the audio buffer queue, and decoding the audio frames to acquire audio data.
2. The method of claim 1, wherein the specified parameters further comprise: the recorded system time and timestamp corresponding to the first-frame audio frame, and a preset maximum playing delay allowed by the user, wherein the first-frame audio frame corresponds to the video frame at the start of playback.
3. The method of claim 1, wherein after the non-leading frame audio frame is stored in the audio buffer queue, the method further comprises: and traversing the audio frames in the audio buffer queue, and discarding the audio frames with the time stamps exceeding the maximum time stamp.
4. A method for synchronization of decoding of a live data stream according to claim 1 or 3, characterized in that the method further comprises:
caching a video frame into a video caching queue, and recording a timestamp of the video frame;
acquiring video frames from the video cache queue for decoding to obtain video data;
and carrying out synchronous processing on the audio data and the video data based on an audio and video synchronization strategy.
5. The method for decoding synchronization of a live data stream according to claim 4, wherein the audio-video synchronization policy comprises:
updating an audio time axis according to the time stamp corresponding to the decoded audio frame;
judging whether time corresponding to the time stamp of the video frame exists on the audio time axis;
if the time corresponding to the time stamp of the video frame exists on the audio time axis, synchronizing the video data corresponding to the video frame to the audio data corresponding to the time;
if the time corresponding to the time stamp of the video frame does not exist on the audio time axis, synchronizing that video data to the audio data corresponding to the nearest later time on the audio time axis.
6. The method for decoding synchronization of a live data stream according to claim 4, wherein the audio-video synchronization policy comprises:
updating an audio time axis according to the time stamp corresponding to the decoded audio frame;
judging whether time corresponding to the time stamp of the video frame exists on the audio time axis, and if the time corresponding to the time stamp of the video frame exists on the audio time axis, synchronizing the video data corresponding to the video frame to the audio data corresponding to the time;
and if the time corresponding to the time stamp of the video frame does not exist on the audio time axis, discarding the video data corresponding to the video frame.
7. The method for decoding synchronization of a live data stream as claimed in claim 2, wherein the specific step of calculating the maximum timestamp is:
acquiring the time difference from the system time corresponding to the playing of the first frame of audio frame to the current system time;
determining a time stamp of an audio frame corresponding to a currently played video frame;
and adding to the timestamp a preset maximum playing delay allowed by the user, and subtracting the sum of the decoding delay and the output delay to obtain the maximum timestamp.
8. A device for synchronization of decoding of a live data stream, comprising:
the audio buffer module comprises an audio buffer queue for buffering audio frames;
the computing module is used for computing a maximum timestamp according to a specified parameter, wherein the specified parameter comprises decoding delay of an audio frame and output delay of decoded audio data, and the maximum timestamp represents the maximum value of timestamps of the audio frames which can be cached by the audio cache queue;
a judging module, configured to compare a timestamp of a non-first frame audio frame to be added to the audio buffer queue with the maximum timestamp, if the timestamp of the non-first frame audio frame is not greater than the maximum timestamp, store the non-first frame audio frame in the audio buffer queue, and if the timestamp of the non-first frame audio frame is greater than the maximum timestamp, discard the non-first frame audio frame;
and the audio decoding module is used for acquiring the audio frames from the audio buffer queue and decoding the audio frames to acquire audio data.
9. The apparatus for decoding synchronization of a live data stream as claimed in claim 8, wherein the specified parameters further comprise: the system time and timestamp corresponding to the first-frame audio frame stored in the recording unit, and the preset maximum playing delay allowed by the user, wherein the first-frame audio frame corresponds to the video frame at the start of playback.
10. The apparatus as claimed in claim 8, wherein after the audio frame of the non-first frame is stored in the audio buffer queue, the determining module is further configured to traverse the audio frames in the audio buffer queue and discard the audio frames whose timestamps exceed the maximum timestamp.
11. A device for synchronization of decoding a live data stream according to claim 8 or 10, said device further comprising: the video caching module comprises a video caching queue for caching video frames;
the video decoding module is used for acquiring video frames from the video cache queue and decoding the video frames to obtain video data;
and the audio and video synchronization module is used for carrying out synchronization processing on the audio data and the video data based on an audio and video synchronization strategy.
12. The apparatus for decoding and synchronizing a live data stream according to claim 11, wherein the audio/video synchronization module comprises:
the time axis updating unit is used for updating the audio time axis according to the time stamp corresponding to the decoded audio frame;
the synchronization unit is used for judging whether time corresponding to the time stamp of the video frame exists on the audio time axis, and if so, synchronizing the video data corresponding to the video frame to the audio data corresponding to that time; if the time corresponding to the time stamp of the video frame does not exist on the audio time axis, synchronizing that video data to the audio data corresponding to the nearest later time on the audio time axis.
13. The apparatus for decoding and synchronizing a live data stream according to claim 11, wherein the audio/video synchronization module further comprises:
the time axis updating unit is used for updating the audio time axis according to the time stamp corresponding to the decoded audio frame;
the synchronization unit is used for comparing and judging whether time corresponding to the time stamp of the video frame exists on the audio time axis, and if the time corresponding to the time stamp of the video frame exists on the audio time axis, synchronizing the video data corresponding to the video frame to the audio data corresponding to the time; and if the time corresponding to the time stamp of the video frame does not exist on the audio time axis, discarding the video data corresponding to the video frame.
14. The device for decoding and synchronizing a live data stream according to claim 9, wherein the step of calculating the maximum timestamp by the calculation module comprises:
acquiring the time difference from the system time corresponding to the playing of the first frame of audio frame to the current system time;
determining a time stamp of an audio frame corresponding to a currently played video frame;
and adding to the timestamp a preset maximum playing delay allowed by the user, and subtracting the sum of the decoding delay and the output delay to obtain the maximum timestamp.
15. A decoding synchronization apparatus for a live data stream, the decoding synchronization apparatus comprising: a processor and a memory;
the memory is to store executable computer instructions;
the processor is configured to implement the steps of the method of any one of claims 1 to 7 when executing the computer instructions.
CN201811637340.XA 2018-12-29 2018-12-29 Decoding synchronization method, device and equipment for live data stream Active CN109714634B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811637340.XA CN109714634B (en) 2018-12-29 2018-12-29 Decoding synchronization method, device and equipment for live data stream

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811637340.XA CN109714634B (en) 2018-12-29 2018-12-29 Decoding synchronization method, device and equipment for live data stream

Publications (2)

Publication Number Publication Date
CN109714634A (en) 2019-05-03
CN109714634B (en) 2021-06-29

Family

ID=66259584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811637340.XA Active CN109714634B (en) 2018-12-29 2018-12-29 Decoding synchronization method, device and equipment for live data stream

Country Status (1)

Country Link
CN (1) CN109714634B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111093107A (en) * 2019-12-18 2020-05-01 深圳市麦谷科技有限公司 Method and device for playing real-time live stream
CN111010603A (en) * 2019-12-18 2020-04-14 浙江大华技术股份有限公司 Video caching and forwarding processing method and device
CN111601135B (en) * 2020-05-09 2022-02-25 青岛海信传媒网络技术有限公司 Method for synchronously injecting audio and video elementary streams and display equipment
CN112004030A (en) * 2020-07-08 2020-11-27 北京兰亭数字科技有限公司 Visual VR (virtual reality) director system for meeting place control
CN114095769B (en) * 2020-08-24 2024-05-14 海信视像科技股份有限公司 Live broadcast low-delay processing method of application-level player and display device
CN112235597B (en) * 2020-09-17 2022-07-29 深圳市捷视飞通科技股份有限公司 Method and device for synchronous protection of streaming media live broadcast audio and video and computer equipment
CN113014997B (en) * 2021-03-12 2023-04-07 上海哔哩哔哩科技有限公司 Cache updating method and device
CN113473229B (en) * 2021-06-25 2022-04-12 荣耀终端有限公司 Method for dynamically adjusting frame loss threshold and related equipment
CN113784118A (en) * 2021-09-14 2021-12-10 广州博冠信息科技有限公司 Video quality evaluation method and device, electronic equipment and storage medium
CN114025233B (en) * 2021-10-27 2023-07-14 网易(杭州)网络有限公司 Data processing method and device, electronic equipment and storage medium
CN114172605B (en) * 2021-11-18 2024-03-08 湖南康通电子股份有限公司 Synchronous playing method, system and storage medium
CN114257771B (en) * 2021-12-21 2023-12-01 杭州海康威视数字技术股份有限公司 Video playback method and device for multipath audio and video, storage medium and electronic equipment
CN114339381A (en) * 2021-12-28 2022-04-12 北京中交兴路信息科技有限公司 Audio and video synchronization method and device, electronic equipment and storage medium
CN114866830A (en) * 2022-03-30 2022-08-05 中国经济信息社有限公司 Audio and video synchronization method and device and computer readable storage medium
CN114512139B (en) * 2022-04-18 2022-09-20 杭州星犀科技有限公司 Processing method and system for multi-channel audio mixing, mixing processor and storage medium
CN114979712A (en) * 2022-05-13 2022-08-30 北京字节跳动网络技术有限公司 Video playing starting method, device, equipment and storage medium
CN115065860B (en) * 2022-07-01 2023-03-14 广州美录电子有限公司 Audio data processing method, device, equipment and medium suitable for stage
CN115484494B (en) * 2022-09-15 2024-04-02 云控智行科技有限公司 Digital twin video stream processing method, device and equipment
CN117376609A (en) * 2023-09-21 2024-01-09 北京国际云转播科技有限公司 Video synchronization method and device and video playing equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106330761B (en) * 2015-06-30 2020-09-15 中兴通讯股份有限公司 Congestion control method and device based on queue time delay
CN106454553A (en) * 2016-11-15 2017-02-22 深圳市视维科技有限公司 Precise-delay network transmission control method for live video
CN108462896B (en) * 2018-03-23 2020-10-02 北京潘达互娱科技有限公司 Live data stream processing method and device and electronic equipment

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1455408A (en) * 2002-05-04 2003-11-12 三星电子株式会社 Method and apparatus for controlling audio-frequency flow buffering
CN101778269A (en) * 2009-01-14 2010-07-14 扬智电子(上海)有限公司 Synchronization method of audio/video frames of set top box
CN101902625A (en) * 2009-05-27 2010-12-01 深圳市九洲电器有限公司 Interactive-type internet protocol television video data processing method and system as well as set top box
CN102572611A (en) * 2010-12-07 2012-07-11 中国电信股份有限公司 Method for watching network live stream synchronously with different users and system thereof
CN104394421A (en) * 2013-09-23 2015-03-04 贵阳朗玛信息技术股份有限公司 Video frame processing method and device
US10116989B1 (en) * 2016-09-12 2018-10-30 Twitch Interactive, Inc. Buffer reduction using frame dropping
CN108696773A (en) * 2017-04-11 2018-10-23 上海谦问万答吧云计算科技有限公司 Transmission method and device for real-time video
CN108769786A (en) * 2018-05-25 2018-11-06 网宿科技股份有限公司 Method and apparatus for synthesizing audio and video data streams

Also Published As

Publication number Publication date
CN109714634A (en) 2019-05-03

Similar Documents

Publication Publication Date Title
CN109714634B (en) Decoding synchronization method, device and equipment for live data stream
EP3520420B1 (en) Viewer importance adaptive bit rate delivery
CN107690073B (en) Video live broadcast method and video live broadcast server
CN113225598B (en) Method, device and equipment for synchronizing audio and video of mobile terminal and storage medium
US8743906B2 (en) Scalable seamless digital video stream splicing
JP5452495B2 (en) System and method for early start of audio / video rendering
CN107566918B (en) A low-delay stream-pulling method for video distribution scenarios
CN106470352B (en) Live channel playing method, device and system
US10638180B1 (en) Media timeline management
KR102469142B1 (en) Dynamic playback of transition frames while transitioning between media stream playbacks
EP3520421B1 (en) Viewer importance adaptive bit rate delivery
CN111316659A (en) Dynamically reducing playout of substitute content to help align the end of substitute content with the end of replaced content
CN107517400B (en) Streaming media playing method and streaming media player
US11128897B2 (en) Method for initiating a transmission of a streaming content delivered to a client device and access point for implementing this method
CN108259964B (en) Video playing rate adjusting method and system
US20090204842A1 (en) Streaming Media Player and Method
CN112640479B (en) Method and apparatus for switching media service channels
US20190387271A1 (en) Image processing apparatus, image processing method, and program
EP3970384B1 (en) Method and apparatus for playing multimedia streaming data
CN113409801A (en) Noise processing method, system, medium, and apparatus for real-time audio stream playback
CN112073823A (en) Frame loss processing method, video playing terminal and computer readable storage medium
CN111093107A (en) Method and device for playing real-time live stream
CN115278288A (en) Display processing method and device, computer equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 266555 Qingdao economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218
Applicant after: Hisense Video Technology Co., Ltd
Address before: 266555 Qingdao economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218
Applicant before: HISENSE ELECTRIC Co.,Ltd.

GR01 Patent grant