CN110213308A - Method and apparatus for decoding video data
Method and apparatus for decoding video data
- Publication number: CN110213308A
- Application number: CN201810166306.2A
- Authority: CN (China)
- Prior art keywords: video data, browser, data, current time, cache space
- Prior art date: 2018-02-28
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04L 67/02: Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
- H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching
- H04L 67/5682: Policies or rules for updating, deleting or replacing the stored data
- H04N 19/85: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N 21/433: Content storage operation, e.g. storage operation in response to a pause request, caching operations
Abstract
The present application relates to the field of communications and provides a method and an apparatus for decoding video data. The method includes: when a browser inputs video data received from a sender into a cache space, acquiring a first data volume of the cache space, the cache space being located in a code layer of the browser; when the first data volume exceeds a preset first threshold, discarding video data received by the browser after the current time, and/or notifying the sender to suspend sending video data; and reading video data from the cache space and decoding the read video data. The apparatus includes an obtaining module, a discarding module and a decoding module. The present application can avoid browser crashes.
Description
Technical Field
The present application relates to the field of display, and in particular, to a method and an apparatus for decoding video data.
Background
With the continuous progress of society, video monitoring systems are applied ever more widely, and users can watch videos shot by a video monitoring system using a browser. Currently, the newest browsers are Hypertext Markup Language 5 (HTML5) browsers, and an HTML5 browser plays videos shot by a video monitoring system as follows.
The HTML5 browser receives video data sent by a video monitoring system, caches the received video data, reads a frame of video data from the cached video data, decodes the frame of video data by using a decoder, and finally plays the decoded video data.
In the process of implementing the present application, the inventors found that the above manner has at least the following defects:
when the performance of the decoder is insufficient, the rate at which the decoder decodes video data is lower than the rate at which the HTML5 browser receives video data, so the amount of video data cached by the HTML5 browser grows larger and larger, a large amount of memory is occupied, and the browser eventually crashes.
Disclosure of Invention
In order to avoid a browser crash, embodiments of the present application provide a method and an apparatus for decoding video data. The technical solutions are as follows:
in one aspect, an embodiment of the present application provides a method for decoding video data, where the method includes:
when a browser inputs video data received by the browser and sent by a sender into a cache space, acquiring a first data volume of the cache space, wherein the cache space is located in a code layer of the browser;
when the first data volume exceeds a preset first threshold value, discarding video data received by the browser after the current time, and/or informing the sender to suspend sending the video data;
and reading the video data from the buffer space and decoding the read video data.
Optionally, the obtaining the first data volume in the cache space includes:
when the browser inputs the received video data into a cache space, increasing the recorded data volume of the cache space by the data volume of the received video data to obtain a first data volume.
Optionally, after reading a frame of video data from the buffer space, the method further includes:
reducing the data amount of the recorded buffer space by the data amount of the read video data.
Optionally, the discarding the video data received by the browser after the current time includes:
notifying the browser to discard video data received by the browser after a current time; or,
and when the browser caches the video data in the cache space after the current time, acquiring non-key frame video data from the cached video data after the current time, and discarding the non-key frame video data.
Optionally, after reading the video data from the buffer space and decoding the read video data, the method further includes:
acquiring a second data volume of the cache space; when the second data volume is lower than a preset second threshold and the browser is in a state of discarding video data, stopping discarding the video data received by the browser after the current time; and/or, when the second data volume is lower than the preset second threshold and the sender is in a state of suspending sending video data, notifying the sender to resume sending the video data, wherein the preset second threshold is smaller than or equal to the preset first threshold.
Optionally, the stopping discarding the video data received by the browser after the current time includes:
and informing the browser to stop discarding the video data received by the browser after the current time.
Optionally, the code layer of the browser is a JavaScript layer.
In another aspect, an embodiment of the present application provides an apparatus for decoding video data, where the apparatus includes:
an obtaining module, configured to acquire a first data volume of a cache space when the browser inputs video data received by the browser and sent by a sender into the cache space, wherein the cache space is located in a code layer of the browser;
the discarding module is used for discarding the video data received by the browser after the current time when the first data volume exceeds a preset first threshold value, and/or informing the sender to suspend sending the video data;
and the decoding module is used for reading the video data from the buffer space and decoding the read video data.
Optionally, the obtaining module is configured to:
when the browser inputs the received video data into a cache space, increasing the recorded data volume of the cache space by the data volume of the received video data to obtain a first data volume.
Optionally, the apparatus further comprises:
a reducing module for reducing the data amount of the recorded buffer space by the data amount of the read video data.
Optionally, the discarding module includes:
a notification unit, configured to notify the browser to discard video data received by the browser after a current time; or,
and the discarding unit is used for acquiring non-key frame video data from the video data cached after the current time and discarding the non-key frame video data when the browser caches the video data in the caching space after the current time.
Optionally, the apparatus further comprises:
the stopping module is used for acquiring a second data volume of the cache space, and stopping discarding the video data received by the browser after the current time when the second data volume is lower than a preset second threshold and the browser is in a video data discarding state; and/or informing the sender to resume sending the video data when the second data volume is lower than the preset second threshold and the sender is in a state of suspending sending video data, wherein the preset second threshold is smaller than or equal to the preset first threshold.
Optionally, the stopping module is configured to:
and informing the browser to stop discarding the video data received by the browser after the current time.
Optionally, the code layer of the browser is a JavaScript layer.
In another aspect, embodiments of the present application provide a computer-readable storage medium for storing computer program instructions, which can be executed by a processor to perform the following operations:
when a browser inputs video data received by the browser and sent by a sender into a cache space, acquiring a first data volume of the cache space, wherein the cache space is located in a code layer of the browser;
when the first data volume exceeds a preset first threshold value, discarding video data received by the browser after the current time, and/or informing the sender to suspend sending the video data;
and reading the video data from the buffer space and decoding the read video data.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
the method comprises the steps of setting a cache space in a code layer of the browser, so that a first data volume of the cache space can be obtained, discarding video data received by the browser after the current time when the first data volume exceeds a preset first threshold, reading and decoding the video data from the cache space, continuously reducing the data volume in the cache space, and avoiding browser crash.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1-1 is a schematic diagram of a network architecture provided in an embodiment of the present application;
fig. 1-2 are schematic diagrams of another network architecture provided by the embodiments of the present application;
fig. 2 is a flowchart of a method for decoding video data according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an apparatus for decoding video data according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of another apparatus for decoding video data according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of another apparatus for decoding video data according to an embodiment of the present application.
With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Referring to fig. 1-1, an embodiment of the present application provides a network architecture, including:
the system comprises a play terminal 1 and at least one shooting device 2, wherein the play terminal 1 can establish communication connection with part or all of the at least one shooting device 2. The at least one photographing apparatus 2 may constitute a monitoring system.
The playing terminal 1 may have a browser installed therein, and the browser may receive video data shot by any one or more shooting devices 2 of the at least one shooting device 2 and play the video data through the browser.
Optionally, referring to fig. 1-2, the monitoring system may further include a server 3, each of the at least one photographing apparatus 2 may establish a communication connection with the server 3, and each photographing apparatus 2 may transmit photographed video data to the server 3.
The playing terminal 1 may establish a communication connection with the server 3, may receive, through the browser, video data shot by one or more shooting devices 2 sent by the server 3, and may play the video data through the browser.
After receiving the video data, the playing terminal 1 may decode the video data first and then play the decoded video data, and the detailed decoding process may refer to the content of any of the following embodiments.
Optionally, the browser may be an HTML5 browser, and the playback terminal 1 may be a computer, a mobile phone, a notebook computer, or a tablet computer.
Referring to fig. 2, an embodiment of the present application provides a method for decoding video data, where the method includes:
step 201: when the browser inputs the received video data sent by the sender into a cache space, acquiring a first data volume of the cache space, wherein the cache space is located in a code layer of the browser.
Step 202: and when the first data amount exceeds a preset first threshold value, discarding the video data received by the browser after the current time, and/or informing the sender to suspend sending the video data.
After the sender is notified, network delay may cause the browser to keep receiving video data and buffering it into the buffer space for a period of time after the notification. Therefore, when the sender is notified to suspend sending video data, the video data received by the browser after the current time is also discarded; this effectively prevents the data volume in the cache space from continuing to grow during that period and mitigates the delay, caused by the network, before the notification takes effect.
Step 203: and reading the video data from the buffer space and decoding the read video data.
Steps 202 and 203 are not required to be performed in a particular order: they may be performed simultaneously, step 202 may be performed before step 203, or step 203 may be performed before step 202.
In this embodiment of the application, the cache space is set in the code layer of the browser, so that the first data volume of the cache space can be acquired. When the first data volume exceeds the preset first threshold, the video data received by the browser after the current time is discarded, and video data is read from the cache space and decoded, so that the data volume in the cache space is continuously reduced and the browser is prevented from crashing.
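For illustration only, the control flow of steps 201 to 203 can be sketched in TypeScript, a typical language for a browser code layer. The sketch is a minimal model under stated assumptions, not the claimed implementation: the chunk type, the example threshold value, the `pauseSender` callback and every other identifier are placeholders introduced here.

```typescript
// Illustrative sketch of steps 201-203; every name and value here is a placeholder.
interface VideoChunk {
  bytes: Uint8Array;
}

const FIRST_THRESHOLD = 8 * 1024 * 1024;   // preset first threshold (example value: 8 MB)

const cacheSpace: VideoChunk[] = [];       // cache space set in the browser's code layer
let recordedBytes = 0;                     // recorded data amount of the cache space (N)
let dropping = false;                      // whether data received after the current time is discarded

// Step 201: the browser inputs received video data into the cache space and the
// first data amount is obtained; step 202: it is compared with the first threshold.
function onVideoData(chunk: VideoChunk, pauseSender: () => void): void {
  if (dropping) return;                    // simple first-way discard (see step 302 below)
  cacheSpace.push(chunk);
  recordedBytes += chunk.bytes.length;     // N = N + m
  if (recordedBytes > FIRST_THRESHOLD) {
    dropping = true;                       // discard data received after the current time...
    pauseSender();                         // ...and/or notify the sender to suspend sending
  }
}

// Step 203: read video data from the cache space for decoding.
function readForDecoding(): VideoChunk | undefined {
  const chunk = cacheSpace.shift();
  if (chunk) recordedBytes -= chunk.bytes.length;   // N = N - M
  return chunk;
}
```

Combining the local discard with the pause notification, as discussed above, covers the interval during which in-flight data still arrives from the sender.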
Referring to fig. 3, an embodiment of the present application provides a method for decoding video data, including:
step 301: when the browser inputs the received video data sent by the sender into a cache space, acquiring a first data volume of the cache space, wherein the cache space is located in a code layer of the browser.
Before executing the step, a cache space is set in a code layer of the browser, and the code layer may be a JavaScript layer.
The browser can receive video data sent by the video monitoring system, and the video data are cached in the cache space every time the video data are received. That is, the sender may be a monitoring system, and optionally, the sender may be a shooting device or a server in the monitoring system.
In this embodiment, a processing module may be disposed in the browser, and the processing module may perform real-time monitoring on the cache space to obtain the first data amount of the cache space in real time.
When the browser caches the received video data in the cache space, the processing module can directly count the first data volume of the cache space; or when the browser caches the received video data in the cache space, increasing the recorded data volume of the cache space by the data volume of the received video data to obtain the first data volume.
The processing module may record the data amount in the cache space in real time, for example, the recorded data amount is N, which indicates that the current data amount of the cache space is N.
When the video data cached in the cache space changes, the processing module updates the value of N. For example, when the browser caches the received video data in the cache space, N is set to be N + m, where m is the data amount of the received video data; for another example, when the video data is read from the buffer space and sent to a decoder for decoding, N is set to N-M, where M is the data amount of the read video data.
It should be noted that if the cache space is empty before the browser caches the video data, that is, the recorded N is 0, this step is the first time the browser caches video data into the cache space. Assuming the data amount of the cached video data is m, after the browser buffers the video data into the cache space, N is set to m. The video data is then read from the cache space and sent to the decoder for decoding; assuming the data amount of the read video data is M, N is set to N - M when the video data is read.
If the cache space is not empty before the browser caches the video data, that is, the recorded N is not 0, then after the browser caches video data whose data amount is m, N is set to N + m. After the decoder outputs a decoded frame of video data, video data is read from the cache space and sent to the decoder for decoding; the data amount of the read video data is M, and N is set to N - M.
The decoder may also be located in the browser.
When video data is to be sent to the decoder, if the video data cached in the cache space by the browser is the first frame of video data, that is, the cache space was empty before this video data was cached, a segment of video data is read from the cache space and sent to the decoder for decoding.
If the cached video data is not the first frame of video data, that is, the cache space was not empty before this video data was cached, it is determined whether the video data previously sent to the decoder has been decoded into a video frame. If a video frame has been decoded, a segment of video data is read from the cache space and sent to the decoder for decoding; if not, decoding of the frame is awaited.
While waiting for a frame to be decoded, if the video data already sent to the decoder is not enough for the decoder to decode a frame, another segment of video data needs to be read from the cache space and sent to the decoder so that the decoder can decode one video frame.
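The feeding order just described can be sketched as follows. The `Decoder` interface and its `hasDecodedFrame` and `needsMoreData` members are assumptions made purely for illustration; a real decoder in the browser would expose its own notion of progress.

```typescript
// Illustrative sketch of when to send the next segment to the decoder; the Decoder
// interface and all names are assumptions, not an API defined by this application.
interface Segment {
  bytes: Uint8Array;
}

interface Decoder {
  feed(segment: Segment): void;    // send a segment of video data into the decoder
  hasDecodedFrame(): boolean;      // true once previously fed data has produced a video frame
  needsMoreData(): boolean;        // true if the fed data is not yet enough to decode one frame
}

class DecoderFeeder {
  private fedAnything = false;

  constructor(
    private decoder: Decoder,
    private readSegment: () => Segment | undefined,  // reads from the cache space (and reduces N)
  ) {}

  // Called whenever new video data has been cached or the decoder reports progress.
  maybeFeed(): void {
    if (!this.fedAnything) {
      this.feedOne();                        // cache space was empty before: feed the first segment
    } else if (this.decoder.hasDecodedFrame()) {
      this.feedOne();                        // previous data was decoded into a frame: feed the next
    } else if (this.decoder.needsMoreData()) {
      this.feedOne();                        // not enough data for one frame yet: feed another segment
    }                                        // otherwise keep waiting for the frame to be decoded
  }

  private feedOne(): void {
    const segment = this.readSegment();
    if (segment) {
      this.decoder.feed(segment);
      this.fedAnything = true;
    }
  }
}
```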
Among them, it should be further explained that: when the rate of decoding the video data by the decoder is greater than or equal to the rate of receiving the video data by the browser, the data amount in the buffer space is not increased. The amount of data in the buffer space may be continuously increased when the rate at which the decoder decodes the video data is less than the rate at which the browser receives the video data.
To avoid a crash caused by the data volume in the cache space continuously increasing and occupying a large amount of memory, the following steps may be adopted.
Step 302: and when the first data amount exceeds a preset first threshold value, discarding the video data received by the browser after the current time, and/or informing the sender to suspend sending the video data.
The video data received by the browser after the current time can be discarded in two ways, respectively:
for the first way, it may be: the browser is notified to discard the video data received by the browser after the current time.
In the first manner, a first notification message may be sent to the browser, where the first notification message carries a discard event instruction. On receiving the first notification message, the browser either discards each piece of video data as it is received, or stops receiving video data altogether, according to the indication of the discard event instruction carried by the first notification message.
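As an illustration of the first way, the discard event instruction could be modeled as a small control message handled in browser-side code; the message shape and the choice between dropping on receipt and stopping reception are assumptions made for the sketch, not a protocol defined by this application.

```typescript
// Illustrative sketch of the first way; the message shape and names are assumed.
interface DiscardEventMessage {
  instruction: "discard-event";
}

type ReceiveMode = "normal" | "drop-on-receipt" | "stop-receiving";
let receiveMode: ReceiveMode = "normal";

// The browser receives the first notification message and, per the discard event
// instruction it carries, either drops arriving data or stops receiving altogether.
function onFirstNotification(msg: DiscardEventMessage, preferStopReceiving: boolean): void {
  if (msg.instruction === "discard-event") {
    receiveMode = preferStopReceiving ? "stop-receiving" : "drop-on-receipt";
  }
}

function onIncomingVideoData(data: Uint8Array, cacheIt: (d: Uint8Array) => void): void {
  if (receiveMode === "drop-on-receipt") return;  // data is received and immediately discarded
  if (receiveMode === "stop-receiving") return;   // in practice the receive channel itself would be paused
  cacheIt(data);
}
```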
For the second way, it may be: and acquiring non-key frame video data from the video data cached in the cache space by the browser after the current time, and discarding the non-key frame video data.
After the first data amount is found to exceed the preset first threshold, the browser may continue to receive video data and buffer it in the cache space each time it is received. Correspondingly, each piece of video data cached by the browser is checked: if it is non-key-frame video data, it is read from the cache space and discarded; if it is key-frame video data, it is retained.
Optionally, when the browser caches the video data in the cache space, increasing the recorded value of N by the data amount of the cached video data; when video data is discarded from the buffer space, the recorded value of N is reduced by the data amount of the discarded video data.
The video data sent by the video monitoring system comprises key-frame video data and non-key-frame video data, and each piece of key-frame video data corresponds to at least one piece of non-key-frame video data. Key-frame video data is a complete frame of video data, whereas non-key-frame video data is an incomplete frame of video data. The decoder can decode key-frame video data on its own, but decoding non-key-frame video data requires the corresponding key-frame video data.
In the second mode, the key frame video data with complete data is retained in the buffer space, and the non-key frame video data with incomplete data is discarded, so that the video content can be retained to the maximum extent, and the data amount buffered in the buffer space is slowly reduced.
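A sketch of the second way is shown below: only non-key-frame data cached after the current time is removed, and the recorded data amount is reduced accordingly. The chunk type and helper names are placeholders, and how a key frame is identified depends on the stream format, which is assumed here.

```typescript
// Illustrative sketch of the second way; names and the key-frame flag are assumed.
interface VideoChunk {
  bytes: Uint8Array;
  isKeyFrame: boolean;
}

// Called for each chunk the browser caches after the current time while the
// first data amount still exceeds the preset first threshold.
function pruneCachedChunk(
  cacheSpace: VideoChunk[],
  cached: VideoChunk,
  reduceRecordedAmount: (bytes: number) => void,
): void {
  if (cached.isKeyFrame) return;                 // key-frame video data is retained
  const index = cacheSpace.indexOf(cached);
  if (index >= 0) {
    cacheSpace.splice(index, 1);                 // read the non-key-frame data out and discard it
    reduceRecordedAmount(cached.bytes.length);   // reduce the recorded value of N by its data amount
  }
}
```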
Optionally, when the sender is notified to suspend sending video data, the sender may temporarily cache the video data locally and, upon receiving an instruction from the browser to resume sending, send the locally cached video data to the browser for playing, so that content skipping caused by discarding video data can be avoided.
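On the sender side, the suspend-and-resume behavior described above could look like the following sketch; the class, the `transmit` callback and the in-memory queue are assumptions that stand in for whatever shooting device or server actually sends the stream.

```typescript
// Illustrative sketch of a sender that caches locally while paused; names are assumed.
class PausableSender {
  private paused = false;
  private localCache: Uint8Array[] = [];          // video data cached locally while paused

  constructor(private transmit: (data: Uint8Array) => void) {}

  onVideoData(data: Uint8Array): void {
    if (this.paused) {
      this.localCache.push(data);                 // keep the data instead of dropping it
    } else {
      this.transmit(data);
    }
  }

  pause(): void {                                 // browser notified the sender to suspend sending
    this.paused = true;
  }

  resume(): void {                                // browser instructed the sender to send again
    this.paused = false;
    for (const data of this.localCache) this.transmit(data);
    this.localCache.length = 0;                   // flushing avoids content skipping
  }
}
```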
Step 303: reading video data from the buffer space and decoding the read video data using a decoder.
Whether or not the first data amount exceeds the preset first threshold, as long as video data is cached in the cache space, video data is read from the cache space and decoded by the decoder.
Optionally, after the video data is read from the buffer space, the recorded value of N may be decreased by the data amount of the read video data.
After the decoder decodes a frame of video data, the browser can play the frame of video data.
After step 302 is executed, since the video data is continuously read from the buffer space and sent to the decoder for decoding, the data amount of the video data buffered in the buffer space is continuously reduced. When the data amount of the buffer space is lower than the preset second threshold, the following step 304 may be performed.
Step 304: and acquiring a second data volume of the cache space, and stopping discarding the video data received by the browser after the current time and/or informing the sender to send the video data when the second data volume is lower than a preset second threshold.
Optionally, when the second data amount is lower than a preset second threshold and the browser is in a video data discarding state, stopping discarding the video data received by the browser after the current time; and/or when the second data volume is lower than a preset second threshold and the sending party is in a state of suspending sending the video data, informing the sending party to resume sending the video data.
The second threshold is less than or equal to the first threshold.
In this step, the data cached in the cache space may be counted to obtain a second data volume; or, if the data volume N of the buffer space is recorded, directly reading the value of N, and using the value of N as the second data volume.
Optionally, if the video data received by the browser after the current time is discarded in the first manner, in this step, the browser may be notified to stop discarding the video data received by the browser after the current time.
In implementation, a second notification message may be sent to the browser, where the second notification message carries a stop event instruction. On receiving the second notification message, the browser follows the indication of the stop event instruction it carries and buffers each frame of video data it receives into the cache space.
Optionally, if the video data received by the browser after the current time is discarded in the second manner, in this step, the discarding of the non-key frame video data received by the browser after the current time is directly stopped.
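Step 304 can be sketched as a hysteresis check run after each decode. The threshold value, state flags and callback names below are placeholders chosen for illustration; the only constraint taken from the text is that the second threshold does not exceed the first.

```typescript
// Illustrative sketch of step 304; the value, state flags and callbacks are placeholders.
const SECOND_THRESHOLD = 2 * 1024 * 1024;   // preset second threshold (example value),
                                            // chosen to be <= the preset first threshold

interface FlowState {
  browserDiscarding: boolean;   // browser is in the state of discarding video data
  senderPaused: boolean;        // sender is in the state of suspending sending video data
}

// Called after video data has been read from the cache space and decoded.
function afterDecode(
  secondDataAmount: number,
  state: FlowState,
  stopDiscarding: () => void,   // e.g. send the second notification message (stop event instruction)
  resumeSender: () => void,     // notify the sender to resume sending video data
): void {
  if (secondDataAmount >= SECOND_THRESHOLD) return;
  if (state.browserDiscarding) {
    stopDiscarding();
    state.browserDiscarding = false;
  }
  if (state.senderPaused) {
    resumeSender();
    state.senderPaused = false;
  }
}
```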
This embodiment may be performed by the playing terminal mentioned in the above embodiments; specifically, the playing terminal may perform each step of this embodiment through a processing module provided in the browser. The browser may be an HTML5 browser, which cannot by itself monitor the cache it uses for caching video data. In order to grasp the caching status in real time, the cache space is set in a code layer of the browser and a processing module is provided in the browser, so that the caching status of the cache space can be monitored through the processing module and the browser can be controlled, according to that status, when caching video data into the cache space.
Alternatively, an instruction for instructing the video data to be retransmitted may be transmitted to the transmitting side, so that the transmitting side continues to transmit the video data to the browser upon receiving the instruction.
In this embodiment of the application, since the cache space is set in the code layer of the browser, the first data volume of the cache space can be acquired through the processing module provided in the browser. When the first data volume exceeds the preset first threshold, the video data received by the browser after the current time is discarded, and video data is read from the cache space and decoded, so that the data volume in the cache space is continuously reduced and the browser is prevented from crashing. The second data volume of the cache space then continues to be acquired, and when it falls below the preset second threshold, discarding of the video data received by the browser is stopped.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 4, an embodiment of the present application provides an apparatus 400 for decoding video data, where the apparatus 400 includes:
an obtaining module 401, configured to obtain a first data size of a cache space when a browser inputs video data sent by a sender and received by the browser into the cache space, where the cache space is located in a code layer of the browser;
a discarding module 402, configured to discard video data received by the browser after a current time when the first data amount exceeds a preset first threshold, and/or notify the sender to suspend sending video data;
a decoding module 403, configured to read video data from the buffer space and decode the read video data.
Optionally, the obtaining module 401 is configured to:
when the browser inputs the received video data into a cache space, increasing the recorded data volume of the cache space by the data volume of the received video data to obtain a first data volume.
Optionally, the apparatus 400 further includes:
a reducing module for reducing the data amount of the recorded buffer space by the data amount of the read video data.
Optionally, the discarding module 402 includes:
a notification unit, configured to notify the browser to discard video data received by the browser after a current time; or,
and the discarding unit is used for acquiring non-key frame video data from the video data cached after the current time and discarding the non-key frame video data when the browser caches the video data in the caching space after the current time.
Optionally, the apparatus 400 further includes:
the stopping module is used for acquiring a second data volume of the cache space, and stopping discarding the video data received by the browser after the current time when the second data volume is lower than a preset second threshold and the browser is in a video data discarding state; and/or when the second data volume is lower than a preset second threshold and the sender is in a state of suspending sending video data, informing the sender to resume sending the video data, wherein the preset second threshold is smaller than or equal to the preset first threshold.
Optionally, the stopping module is configured to:
and informing the browser to stop discarding the video data received by the browser after the current time.
Optionally, the code layer of the browser is a JavaScript layer.
In this embodiment of the application, the cache space is set in the code layer of the browser, so that the first data volume of the cache space can be acquired. When the first data volume exceeds the preset first threshold, the video data received by the browser after the current time is discarded, and video data is read from the cache space and decoded, so that the data volume in the cache space is continuously reduced and a browser crash is avoided. The second data volume of the cache space then continues to be acquired, and when it falls below the preset second threshold, discarding of the video data received by the browser is stopped.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Referring to fig. 5, fig. 5 is a block diagram illustrating a terminal 500 according to an exemplary embodiment of the present application. The terminal 500 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer or a desktop computer. The terminal 500 may also be referred to by other names such as user equipment, portable terminal, laptop terminal or desktop terminal.
In general, the terminal 500 includes: a processor 501 and a memory 502.
The processor 501 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 501 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 501 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, processor 501 may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
Memory 502 may include one or more computer-readable storage media, which may be non-transitory. Memory 502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 502 is used to store at least one instruction for execution by processor 501 to implement the method of decoding video data provided by the method embodiments herein.
In some embodiments, the terminal 500 may further optionally include: a peripheral interface 503 and at least one peripheral. The processor 501, memory 502 and peripheral interface 503 may be connected by a bus or signal lines. Each peripheral may be connected to the peripheral interface 503 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 504, touch screen display 505, camera 506, audio circuitry 507, positioning components 508, and power supply 509.
The peripheral interface 503 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 501 and the memory 502. In some embodiments, the processor 501, memory 502, and peripheral interface 503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 501, the memory 502, and the peripheral interface 503 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 504 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 504 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 504 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 504 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 504 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 504 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 505 is a touch display screen, the display screen 505 also has the ability to capture touch signals on or over the surface of the display screen 505. The touch signal may be input to the processor 501 as a control signal for processing. At this point, the display screen 505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 505 may be one, providing the front panel of the terminal 500; in other embodiments, the display screens 505 may be at least two, respectively disposed on different surfaces of the terminal 500 or in a folded design; in still other embodiments, the display 505 may be a flexible display disposed on a curved surface or on a folded surface of the terminal 500. Even more, the display screen 505 can be arranged in a non-rectangular irregular figure, i.e. a shaped screen. The Display screen 505 may be made of LCD (liquid crystal Display), OLED (Organic Light-Emitting Diode), and the like.
The camera assembly 506 is used to capture images or video. Optionally, camera assembly 506 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 506 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
Audio circuitry 507 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 501 for processing, or inputting the electric signals to the radio frequency circuit 504 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 500. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 501 or the radio frequency circuit 504 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 507 may also include a headphone jack.
The positioning component 508 is used to locate the current geographic position of the terminal 500 for navigation or LBS (Location Based Services). The positioning component 508 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
Power supply 509 is used to power the various components in terminal 500. The power source 509 may be alternating current, direct current, disposable or rechargeable. When power supply 509 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 500 also includes one or more sensors 510. The one or more sensors 510 include, but are not limited to: acceleration sensor 511, gyro sensor 512, pressure sensor 513, fingerprint sensor 514, optical sensor 515, and proximity sensor 516.
The acceleration sensor 511 may detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 500. For example, the acceleration sensor 511 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 501 may control the touch screen 505 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 511. The acceleration sensor 511 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 512 may detect a body direction and a rotation angle of the terminal 500, and the gyro sensor 512 may cooperate with the acceleration sensor 511 to acquire a 3D motion of the user on the terminal 500. The processor 501 may implement the following functions according to the data collected by the gyro sensor 512: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 513 may be disposed on a side bezel of the terminal 500 and/or an underlying layer of the touch display screen 505. When the pressure sensor 513 is disposed on the side frame of the terminal 500, a user's holding signal of the terminal 500 may be detected, and the processor 501 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 513. When the pressure sensor 513 is disposed at the lower layer of the touch display screen 505, the processor 501 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 505. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 514 is used for collecting a fingerprint of the user, and the processor 501 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 514, or the fingerprint sensor 514 identifies the identity of the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 501 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 514 may be provided on the front, back, or side of the terminal 500. When a physical button or a vendor Logo is provided on the terminal 500, the fingerprint sensor 514 may be integrated with the physical button or the vendor Logo.
The optical sensor 515 is used to collect the ambient light intensity. In one embodiment, the processor 501 may control the display brightness of the touch display screen 505 based on the ambient light intensity collected by the optical sensor 515. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 505 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 505 is turned down. In another embodiment, processor 501 may also dynamically adjust the shooting parameters of camera head assembly 506 based on the ambient light intensity collected by optical sensor 515.
A proximity sensor 516, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 500. The proximity sensor 516 is used to collect the distance between the user and the front surface of the terminal 500. In one embodiment, when the proximity sensor 516 detects that the distance between the user and the front surface of the terminal 500 gradually decreases, the processor 501 controls the touch display screen 505 to switch from the bright screen state to the dark screen state; when the proximity sensor 516 detects that the distance between the user and the front surface of the terminal 500 becomes gradually larger, the processor 501 controls the touch display screen 505 to switch from the screen-rest state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 5 is not intended to be limiting of terminal 500 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
Claims (15)
1. A method of decoding video data, the method comprising:
when a browser inputs video data received by the browser and sent by a sender into a cache space, acquiring a first data volume of the cache space, wherein the cache space is located in a code layer of the browser;
when the first data volume exceeds a preset first threshold value, discarding video data received by the browser after the current time, and/or informing the sender to suspend sending the video data;
and reading the video data from the buffer space and decoding the read video data.
2. The method of claim 1, wherein said obtaining the first amount of data in the buffer space comprises:
when the browser inputs the received video data into a cache space, increasing the recorded data volume of the cache space by the data volume of the received video data to obtain a first data volume.
3. The method of claim 2, wherein after reading the video data from the buffer space, further comprising:
reducing the data amount of the recorded buffer space by the data amount of the read video data.
4. The method of claim 1, wherein said discarding video data received by the browser after a current time comprises:
notifying the browser to discard video data received by the browser after a current time; or,
when the browser caches the video data in the cache space after the current time, acquiring non-key frame video data from the cached video data after the current time, and discarding the non-key frame video data.
5. The method of claim 1, wherein after reading the video data from the buffer space and decoding the read video data, further comprising:
acquiring a second data volume of the cache space, and stopping discarding the video data received by the browser after the current time when the second data volume is lower than a preset second threshold and the browser is in a video data discarding state; and/or when the second data volume is lower than a preset second threshold and the sender is in a state of suspending sending video data, informing the sender to resume sending the video data, wherein the preset second threshold is smaller than or equal to the preset first threshold.
6. The method of claim 5, wherein the ceasing to discard video data received by the browser after the current time comprises:
and informing the browser to stop discarding the video data received by the browser after the current time.
7. The method of any of claims 1 to 6, wherein the code layer of the browser is a JavaScript layer.
8. An apparatus for decoding video data, the apparatus comprising:
an obtaining module, configured to acquire a first data volume of a cache space when the browser inputs video data received by the browser and sent by a sender into the cache space, wherein the cache space is located in a code layer of the browser;
the discarding module is used for discarding the video data received by the browser after the current time when the first data volume exceeds a preset first threshold value, and/or informing the sender to suspend sending the video data;
and the decoding module is used for reading the video data from the buffer space and decoding the read video data.
9. The apparatus of claim 8, wherein the acquisition module is to:
when the browser inputs the received video data into a cache space, increasing the recorded data volume of the cache space by the data volume of the received video data to obtain a first data volume.
10. The apparatus of claim 9, wherein the apparatus further comprises:
a reducing module for reducing the data amount of the recorded buffer space by the data amount of the read video data.
11. The apparatus of claim 8, wherein the discard module comprises:
a notification unit, configured to notify the browser to discard video data received by the browser after a current time; or,
and the discarding unit is used for acquiring non-key frame video data from the video data cached after the current time and discarding the non-key frame video data when the browser caches the video data in the caching space after the current time.
12. The apparatus of claim 8, wherein the apparatus further comprises:
the stopping module is used for acquiring a second data volume of the cache space, and stopping discarding the video data received by the browser after the current time when the second data volume is lower than a preset second threshold and the browser is in a video data discarding state; and/or when the second data volume is lower than a preset second threshold and the sender is in a state of suspending sending video data, informing the sender to resume sending the video data, wherein the preset second threshold is smaller than or equal to the preset first threshold.
13. The apparatus of claim 12, wherein the stop module is to:
and informing the browser to stop discarding the video data received by the browser after the current time.
14. The apparatus of any of claims 8 to 13, wherein the code layer of the browser is a JavaScript layer.
15. A computer-readable storage medium storing computer program instructions for execution by a processor to perform operations of a method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810166306.2A CN110213308A (en) | 2018-02-28 | 2018-02-28 | A kind of method and device of decoding video data |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810166306.2A CN110213308A (en) | 2018-02-28 | 2018-02-28 | A kind of method and device of decoding video data |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110213308A true CN110213308A (en) | 2019-09-06 |
Family
ID=67778678
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810166306.2A Pending CN110213308A (en) | 2018-02-28 | 2018-02-28 | A kind of method and device of decoding video data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110213308A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110677715A (en) * | 2019-10-11 | 2020-01-10 | 北京达佳互联信息技术有限公司 | Decoding method, decoder, electronic device and storage medium |
CN110855645A (en) * | 2019-11-01 | 2020-02-28 | 腾讯科技(深圳)有限公司 | Streaming media data playing method and device |
CN113395302A (en) * | 2020-03-11 | 2021-09-14 | 杭州中天微系统有限公司 | Asynchronous data distributor, related apparatus and method |
CN113672293A (en) * | 2020-04-30 | 2021-11-19 | 华为技术有限公司 | Media data processing method based on cloud mobile phone and terminal equipment |
CN114071224A (en) * | 2020-07-31 | 2022-02-18 | 腾讯科技(深圳)有限公司 | Video data processing method and device, computer equipment and storage medium |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101946518A (en) * | 2007-12-28 | 2011-01-12 | 诺基亚公司 | Methods, apparatuses, and computer program products for adaptive synchronized decoding of digital video |
EP2192740A1 (en) * | 2008-11-26 | 2010-06-02 | Thomson Licensing | Method and apparatus for receiving content |
CN104427400A (en) * | 2013-08-22 | 2015-03-18 | 中国电信股份有限公司 | Streaming media transmission method and system, and streaming media server |
CN104702968A (en) * | 2015-02-17 | 2015-06-10 | 华为技术有限公司 | Frame loss method for video frame and video sending device |
CN106331835A (en) * | 2015-06-26 | 2017-01-11 | 成都鼎桥通信技术有限公司 | Method of dynamically adjusting data reception cache and video decoding device |
CN106488265A (en) * | 2016-10-12 | 2017-03-08 | 广州酷狗计算机科技有限公司 | A kind of method and apparatus sending Media Stream |
CN107147923A (en) * | 2017-05-05 | 2017-09-08 | 中广热点云科技有限公司 | A kind of time shift order method |
CN107465679A (en) * | 2017-08-07 | 2017-12-12 | 郑州仁峰软件开发有限公司 | A kind of streaming media control method |
CN107371061A (en) * | 2017-08-25 | 2017-11-21 | 普联技术有限公司 | A kind of video stream playing method, device and equipment |
Non-Patent Citations (2)
- 刘后铭, 洪福明 (Liu Houming, Hong Fuming): "Computer Communication Networks, Revised Edition", 31 December 1988
- 李浪, 谢新华, 刘先锋 (Li Lang, Xie Xinhua, Liu Xianfeng): "Computer Networks, 2nd Edition", 30 September 2017
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110677715A (en) * | 2019-10-11 | 2020-01-10 | 北京达佳互联信息技术有限公司 | Decoding method, decoder, electronic device and storage medium |
CN110677715B (en) * | 2019-10-11 | 2022-04-22 | 北京达佳互联信息技术有限公司 | Decoding method, decoder, electronic device and storage medium |
CN110855645A (en) * | 2019-11-01 | 2020-02-28 | 腾讯科技(深圳)有限公司 | Streaming media data playing method and device |
CN110855645B (en) * | 2019-11-01 | 2021-10-22 | 腾讯科技(深圳)有限公司 | Streaming media data playing method and device |
CN113395302A (en) * | 2020-03-11 | 2021-09-14 | 杭州中天微系统有限公司 | Asynchronous data distributor, related apparatus and method |
CN113672293A (en) * | 2020-04-30 | 2021-11-19 | 华为技术有限公司 | Media data processing method based on cloud mobile phone and terminal equipment |
CN113672293B (en) * | 2020-04-30 | 2024-04-09 | 华为云计算技术有限公司 | Media data processing method based on cloud mobile phone and terminal equipment |
CN114071224A (en) * | 2020-07-31 | 2022-02-18 | 腾讯科技(深圳)有限公司 | Video data processing method and device, computer equipment and storage medium |
CN114071224B (en) * | 2020-07-31 | 2023-08-25 | 腾讯科技(深圳)有限公司 | Video data processing method, device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190906 |