WO2013100754A1 - System and method for adaptive media content delivery - Google Patents

System and method for adaptive media content delivery Download PDF

Info

Publication number
WO2013100754A1
WO2013100754A1 (PCT/MY2012/000171)
Authority
WO
WIPO (PCT)
Prior art keywords
frame
component
bandwidth
video
frames
Prior art date
Application number
PCT/MY2012/000171
Other languages
English (en)
Inventor
Redika REMON
Woon Hon Hock
Hafriza Bin Zakaria KHAIRIL
Original Assignee
Mimos Berhad
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mimos Berhad filed Critical Mimos Berhad
Publication of WO2013100754A1

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/164Feedback from the receiver or from the transmission channel
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/132Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146Data rate or code amount at the encoder output
    • H04N19/15Data rate or code amount at the encoder output by monitoring actual compressed data size at the memory before deciding storage at the transmission buffer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/24Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H04N21/2402Monitoring of the downstream path of the transmission network, e.g. bandwidth available

Definitions

  • the present invention relates to a system and method for adaptive delivery of media content over a network having varying bandwidth capacities.
  • adaptive media content delivery adapts the quality of the media content to be delivered based on the client's bandwidth connection.
  • the quality of the media content can be scaled down for delivering to the client having a low-bandwidth connection.
  • an adaptive video transmission with variable frame rate includes: (a) measuring the transmission capacity to the receiver; (b) determining the rate of change of transmission capacity; and (c) adjusting the transmitted video frame rate as a function of the said rate of change.
  • a system for adaptively transporting video over networks comprises a video/audio codec that functions to compress, code, decode and decompress video streams that are transmitted over networks having available bandwidths that vary with time and location.
  • the system adjusts the compression ratio to accommodate a plurality of bandwidths ranging from 20 Kbps for POTS to several Mbps for switched LAN and ATM environments.
  • Bandwidth adjustability is provided by offering a trade-off between video resolution, frame rate and individual frame quality.
  • the system generates a video data stream comprised of Key, P and B frames from a raw source of video. Each frame type is further comprised of multiple levels of data representing varying degrees of quality.
  • several video server platforms can be utilized in tandem to transmit video/audio information with each video server platform transmitting information for a single compression/resolution level.
  • a system for adaptive media content delivery comprises at least one media source (110), a server (130), and a client device (150).
  • the server (130) includes at least one video analytic component (132), wherein the at least one video analytic component (132) is used to analyse events in images captured by the at least one media source (110); a client bandwidth rate receiver component (135), wherein the client bandwidth rate receiver component (135) is used to estimate the bandwidth of the client device (150) connection; a quality decision component (136), wherein the quality decision component (136) is used to perform image compression for each frame and selective temporal buffer frames based on the bandwidth of the client device (150) connection and events captured in the images; and a video encoder (137), wherein the video encoder (137) is used to encode the compressed and selected temporal buffer frames from the quality decision component (136) into a video.
  • a method for delivering media content by using the system for adaptive media content delivery (100) comprises the steps of receiving a plurality of captured images or video frames from at least one media source (110) by a video acquisition component (131); analysing each frame by at least one video analytic component (132) to determine whether an event is detected in each frame; buffering the frames in a video buffering component (134); estimating the bandwidth of a client device (150) connection by the client bandwidth rate receiver component (135); performing image compression on each frame and selective temporal buffer frames by a quality decision component (136), wherein the image compression and selective temporal buffer frames are performed based on the bandwidth of the client device (150) connection and the event detected in each frame; encoding the compressed and selected temporal buffer frames into video by a video encoder (137); and transmitting the video to the client device (150).
  • the bandwidth estimation includes the steps of sending a test frame from the client device (150) to the server (130); recording arrival time of the first bit and the last bit of the test frame; and calculating the bandwidth of the client device (150) connection based on data size of the test frame, arrival time of the last bit of the test frame, and the arrival time of the first bit of the test frame.
  • the step of performing image compression on each frame and selective temporal buffer frames by a quality decision component (136) includes the steps of: (a) selecting and extracting a frame from the video buffering component (134); (b) determining whether an event is detected in the frame, wherein if an event is detected in the frame, selecting a lower image compression ratio and more frequent frame sequence for the following frame by the quality decision component (136), and wherein if there is no event detected in the frame, selecting a higher image compression ratio and less frequent frame sequence for the following frame by the quality decision component (136); (c) compressing the frame based on the selected compression ratio; (d) packetizing the compressed frame with previous selected frames; (e) comparing total size of the packetized frames with the bandwidth of the client device (150) connection, wherein if the total size of the packetized frames is lower than the bandwidth estimated by the client bandwidth rate receiver component (135), repeating steps (a) to (d) for the following frame selected from the video buffering component (134), and wherein if the total size of the packetized frames exceeds the estimated bandwidth, determining whether the packetized frames can be compressed to a size lower than or equal to the estimated bandwidth; if so, compressing the packetized frames and sending the packetized frames to the video encoder (137), and if not, discarding the frame from the packetized frames, sending the packetized frames to the video encoder (137), and including the discarded frame in the following packetized frames.
  • a server (130) for delivering media content to a client device (150) comprises a video acquisition component (131), a database (133), a video buffering component (134), and a socket listener component (138).
  • the server (130) further includes at least one video analytic component (132), wherein the at least one video analytic component (132) is used to analyse events in images captured by the at least one media source (110); a client bandwidth rate receiver component (135), wherein the client bandwidth rate receiver component (135) is used to estimate the bandwidth of the client device (150) connection; a quality decision component (136), wherein the quality decision component (136) is used to perform image compression for each frame and selective temporal buffer frames based on the bandwidth of the client device (150) connection and events captured in the images; and a video encoder (137), wherein the video encoder (137) is used to encode the compressed and selected temporal buffer frames from the quality decision component (136) into a video.
  • a method for delivering media content by using the server (130) comprises the steps of receiving a plurality of captured images or video frames by a video acquisition component (131); analysing each frame by at least one video analytic component (132) to determine whether an event is detected in each frame; buffering the frames in a video buffering component (134); estimating the bandwidth of a client device (150) connection by the client bandwidth rate receiver component (135); performing image compression on each frame and selective temporal buffer frames by a quality decision component (136), wherein the image compression and selective temporal buffer frames are performed based on the bandwidth of the client device (150) connection and the event detected in each frame; encoding the compressed and selected temporal buffer frames into video by a video encoder (137); and transmitting the video to the client device (150).
  • the bandwidth estimation includes the steps of sending a test frame from the client device (150) to the server (130); recording arrival time of the first bit and the last bit of the test frame; and calculating the bandwidth of the client device (150) connection based on data size of the test frame, arrival time of the last bit of the test frame, and the arrival time of the first bit of the test frame.
  • the step of performing image compression on each frame and selective temporal buffer frames by a quality decision component (136) includes the steps of: (a) selecting and extracting a frame from the video buffering component (134); (b) determining whether an event is detected in the frame, wherein if an event is detected in the frame, selecting a lower image compression ratio and more frequent frame sequence for the following frame by the quality decision component (136), and wherein if there is no event detected in the frame, selecting a higher image compression ratio and less frequent frame sequence for the following frame by the quality decision component (136); (c) compressing the frame based on the selected compression ratio; (d) packetizing the compressed frame with previous selected frames; (e) comparing total size of the packetized frames with the bandwidth of the client device (150) connection, wherein if the total size of the packetized frames is lower than the bandwidth estimated by the client bandwidth rate receiver component (135), repeating steps (a) to (d) for the following frame selected from the video buffering component (134), and wherein if the total size of the packetized frames exceeds the estimated bandwidth, determining whether the packetized frames can be compressed to a size lower than or equal to the estimated bandwidth; if so, compressing the packetized frames and sending the packetized frames to the video encoder (137), and if not, discarding the frame from the packetized frames, sending the packetized frames to the video encoder (137), and including the discarded frame in the following packetized frames.
  • FIG. 1 shows a block diagram of a system for adaptive media content delivery (100) according to an embodiment of the present invention.
  • FIG. 2 shows a flowchart of a method for adaptive delivery of media content using the system (100) of FIG. 1 according to an embodiment of the present invention.
  • FIG. 3 shows a flowchart of a method for performing image compression and selective temporal buffer frames by using the quality decision component (136) of the system (100) of FIG. 1 according to an embodiment of the present invention.
  • FIGS. 4(a-c) show a set of video frames undergoing the method of FIG. 3.
  • Referring to FIG. 1, there is shown a block diagram of a system for adaptive media content delivery (100) according to an embodiment of the present invention.
  • the system (100) adapts the quality of media content to be delivered based on bandwidth of a client device (150) connection and event detected in the media content. This allows the system (100) to adaptively send a media content while preserving the quality of frames capturing an event.
  • the term "event" used in the description and in the appended claims refers to a behaviour of an object captured in a media content that is detected and deemed suspicious by a surveillance system, such as, but not limited to, intrusion, loitering, slip and fall, and unattended objects.
  • the system (100) generally comprises a media source (110), a server (130), and a client device (150).
  • the media source (110) is used for capturing a sequence of images of a location area under surveillance.
  • the media source (110) is a video camera or any other image sensor device.
  • the images captured by the media source (110) are pulled by the server (130) as video through either a wired or wireless connection.
  • the server (130) is connected to the client device (150) through a network such as an Internet Protocol network.
  • the server (130) comprises a video acquisition component (131), a video analytic component (132), a database (133), a video buffering component (134), a client bandwidth rate receiver component (135), a quality decision component (136), a video encoder (137), and a socket listener component (138).
  • the video acquisition component (131) is used to acquire the video from the media source (110).
  • the video acquisition component (131) decodes the video into a plurality of frames and thereon, it sends each frame to video analytic component (132) for event analysis.
  • the video analytic component (132) performs event analysis on each frame of the video to detect an event captured in the video.
  • the event analysis is performed by analysing each pixel in each frame to identify motion blobs, filtering noise from the frames, and applying event rules such as object tracking, intrusion detection and loitering detection. Based on the event analysis performed by the video analytic component (132), each frame is tagged with event information. The frames tagged with event information are stored in a database (133) and temporarily stored in the video buffering component (134).
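  • For illustration only (not part of the disclosure), the following is a minimal sketch of the kind of per-frame event analysis described above: frame differencing to find motion pixels, a crude noise filter, and a simple threshold rule standing in for the event rules. The function name, the threshold values and the use of NumPy are assumptions.

      import numpy as np

      def tag_frame_with_event(prev_gray, curr_gray,
                               diff_threshold=25, min_motion_pixels=500):
          """Tag a frame with event information using simple frame differencing.

          prev_gray, curr_gray: 2-D uint8 arrays (grayscale frames).
          Returns True when enough moving pixels survive the noise filter.
          """
          # Pixel-wise absolute difference between consecutive frames.
          diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
          motion_mask = diff > diff_threshold

          # Crude noise filtering: keep a motion pixel only if at least one
          # of its 4-neighbours is also a motion pixel.
          up = np.roll(motion_mask, 1, axis=0)
          down = np.roll(motion_mask, -1, axis=0)
          left = np.roll(motion_mask, 1, axis=1)
          right = np.roll(motion_mask, -1, axis=1)
          filtered = motion_mask & (up | down | left | right)

          # Event-rule stand-in: enough motion pixels counts as an event.
          return int(filtered.sum()) >= min_motion_pixels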
  • the video buffering component (134) is used to temporarily store each frame tagged with event information based on the event analysis performed by the video analytic component (132) before transmitting those images to the quality decision component (136).
  • the video buffering component (134) is connected to the quality decision component (136).
  • the quality decision component (136) is used to select an image compression ratio for each frame, and to select temporal buffer frames. The selection of the image compression ratio and temporal buffer frames is based on the bandwidth of the client device (150) connection and the event information tagged with each frame.
  • the selected image compression ratio is used by the quality decision component (136) to compress each frame, wherein a frame tagged with an event is compressed with a lower image compression ratio than a frame without an event.
  • the image compression used to reduce the image bit rate includes any intra-codec, lossy or lossless compression.
  • the quality decision component (136) is further connected to the video encoder (137) and the client bandwidth rate receiver component (135).
  • the client bandwidth rate receiver component (135) is used to estimate the bandwidth of the client device (150) connection. More specifically, the client bandwidth rate receiver component (135) receives a bit rate value based on the transmission rate and packet size obtained from the client device (150).
  • the video encoder (137) is used to encode the compressed and selected temporal buffer frames from the quality decision component (136) into a video.
  • the video encoder (137) is further connected to the socket listener component (138).
  • the socket listener component (138) is an interface component to receive a request message from the client device (150) and to deliver the video encoded by the video encoder (137) to the client device (150) over a wired or wireless network.
  • the socket listener component (138) communicates with the client device (150) either through TCP protocol, UDP protocol, HTTP protocol or any other communication protocol.
  • the client device (150) is used for downloading and playing the media content from the server (130).
  • the client device (150) includes a socket downloader component (151) and a video player (152).
  • the socket downloader component (151) is an interface component which is used to send a request message to the server (130) and to receive the video from the server (130).
  • the socket downloader component (151) is connected to the video player (152).
  • the socket downloader component (151) communicates with the server (130) either through TCP protocol, UDP protocol, HTTP protocol or any other communication protocol.
  • the video player (152) is used to decode the received video and then to play and display the decoded video on a monitor connected to the client device (150).
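  • For illustration only (not part of the disclosure), a minimal sketch of the request/response exchange between the socket downloader component (151) and the socket listener component (138) over TCP; the server address, request message format and chunk size are assumptions.

      import socket

      SERVER_ADDR = ("192.168.0.10", 9000)  # assumed address of the server (130)

      def request_video(output_path="received_video.bin"):
          """Client-side sketch: send a request message, then stream the encoded video to disk."""
          with socket.create_connection(SERVER_ADDR) as sock:
              sock.sendall(b"GET_VIDEO\n")      # assumed request message format
              with open(output_path, "wb") as out:
                  while True:
                      chunk = sock.recv(4096)   # read until the server closes the connection
                      if not chunk:
                          break
                      out.write(chunk)
          return output_path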
  • Referring to FIG. 2, there is shown a flowchart of a method for adaptive delivery of media content by using the system (100) of FIG. 1.
  • the video acquisition component (131) receives a video from the media source (110) and thereon, decodes the video into a plurality of frames (1 - 10) as illustrated in FIG. 4a.
  • each frame of the video is analysed by the video analytic component (132) to detect any event captured in the frames as in step 202.
  • Each frame is tagged with event information based on the event analysis performed by the video analytic component (132).
  • FIG. 4b illustrates each frame of the video of FIG. 4a tagged with event information.
  • in step 203, the frames tagged with event information are buffered in the video buffering component (134).
  • in step 204, the bandwidth of the client device (150) connection is estimated by the client bandwidth rate receiver component (135).
  • the bandwidth estimation includes the steps of sending a test frame from the client device (150) to the server (130), recording the arrival time of the first bit and the last bit of the test frame, and calculating the bandwidth based on the equation below:

      Bandwidth = S / (T - T0)

    where S is the data size of the test frame, T is the arrival time of the last bit of the test frame, and T0 is the arrival time of the first bit of the test frame.
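  • For illustration only (not part of the disclosure), a minimal sketch of this bandwidth estimation; the function and parameter names are assumptions.

      def estimate_bandwidth(test_frame_bits, first_bit_arrival_s, last_bit_arrival_s):
          """Estimate the connection bandwidth in bit/s from a single test frame.

          test_frame_bits:     data size S of the test frame, in bits
          first_bit_arrival_s: arrival time T0 of the first bit, in seconds
          last_bit_arrival_s:  arrival time T of the last bit, in seconds
          """
          duration = last_bit_arrival_s - first_bit_arrival_s
          if duration <= 0:
              raise ValueError("last bit must arrive after the first bit")
          return test_frame_bits / duration

      # Example: a 1.5 Mbit test frame received over 2 s gives 750 kbit/s.
      print(estimate_bandwidth(1_500_000, 0.0, 2.0))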
  • the quality decision component (136) performs image compression for each frame and selective temporal buffer frames based on the bandwidth of the client device (150) connection and event information tagged with each frame (step 205).
  • the image compression used to reduce the image bit rate includes any intra-codec, lossy or lossless compression.
  • the compressed and selected temporal buffer frames are then transmitted to the video encoder (137).
  • the video encoder (137) encodes the compressed and selected temporal buffer frames into video.
  • FIG. 4c illustrates the frames of FIG. 4b encoded by the video encoder (137) based on the selected temporal buffer frames, wherein the encoded video comprises only the first frame (1), the third frame (3), the fourth frame (4), the fifth frame (5), and the seventh frame (7).
  • the encoded video is then transmitted to the client device (150) via the socket listener component (138) as in step 206.
  • the video received by the socket downloader component (151) are then decoded and played by video player (152) as in step 207.
  • in step 301, the quality decision component (136) selects and extracts a frame from the video buffering component (134) and thereon, checks whether the frame is tagged with an event. If the frame is tagged with an event, the quality decision component (136) selects a lower image compression ratio and a more frequent frame sequence for the following frame.
  • if the frame is not tagged with an event, the quality decision component (136) selects a higher image compression ratio and a less frequent frame sequence for the following frame (decision 302 and steps 305 to 307).
  • the selected image compression ratio reduces the image quality of the frame.
  • the frame is compressed according to the selected image compression ratio and packetized with other selected frames (step 308).
  • the total size of the packetized frames is compared with the bandwidth of the client device (150) connection.
  • the quality decision component (136) repeats step 301 to decision 309, wherein the quality decision component (136) extracts the following frame from the video buffering component (134) to check whether the frame is tagged with an event. The following frame is selected and extracted based on the previous frame packetized.
  • the quality decision component (136) determines whether the packetized frames can be compressed to reduce them to a size lower than or equal to the estimated bandwidth as in decision 310. If the packetized frames can be reduced, the quality decision component (136) compresses the packetized frames and thereon, sends the packetized frames to the video encoder (137) as in steps 311 and 313. Otherwise, the quality decision component (136) discards the frame from the packetized frames and sends the packetized frames to the video encoder (137) as in steps 312 and 313. The frame which is discarded from the packetized frames is included in the following packetized frames to be sent to the video encoder (137).
  • the quality decision component (136) sends the packetized frames to the video encoder (137) as in step 313.
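  • For illustration only (not part of the disclosure), a minimal sketch of the quality decision loop of steps 301 to 313. The compression-ratio values, the frame-skip counts and the compress() helper are assumptions, and for brevity the sketch simply defers an overflowing frame to the next packet instead of first attempting to re-compress the packetized frames (decision 310).

      def quality_decision(buffered_frames, bandwidth_bits, compress):
          """Select, compress and packetize buffered frames for one transmission window.

          buffered_frames: list of dicts such as {"index": 3, "event": True, "data": b"..."}
          bandwidth_bits:  estimated client bandwidth for the window, in bits
          compress:        callable(frame_data, ratio) -> compressed bytes-like object
          Returns (packet, carry_over), where carry_over is a frame deferred to the
          following packetized frames.
          """
          packet, packet_bits = [], 0
          i = 0
          while i < len(buffered_frames):
              frame = buffered_frames[i]
              if frame["event"]:
                  ratio, skip = 0.5, 0   # lower compression ratio, more frequent frames
              else:
                  ratio, skip = 0.8, 1   # higher compression ratio, less frequent frames
              compressed = compress(frame["data"], ratio)
              size_bits = len(compressed) * 8
              if packet_bits + size_bits <= bandwidth_bits:
                  packet.append((frame["index"], compressed))
                  packet_bits += size_bits
                  i += 1 + skip          # step to the selected following frame
              else:
                  return packet, frame   # frame does not fit: defer it
          return packet, None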
  • FIG. 4b and FIG. 4c are referred to illustrate an example of the frames stored in the video buffering component (134) and the packetized frames.
  • in this example, the size of each frame is 180kbit and the estimated bandwidth is 750kbit/s.
  • the quality decision component (136) selects and extracts the first frame (1) stored in the video buffering component (134). Since the first frame (1) is tagged with no event, the quality decision component (136) selects a higher image compression ratio. Moreover, the quality decision component (136) selects a less frequent frame sequence for the following frame, wherein the quality decision component (136) skips a frame to select the third frame (3) as the following frame. Thereon, the first frame (1) is compressed to a size of 120kbit and the compressed first frame (1) is packetized. The total size of the packetized frames is compared with the bandwidth of the client device (150) connection. Since the total size of the packetized frames of 120kbit is lower than the estimated bandwidth of 750kbit/s, the quality decision component (136) selects and extracts the following frame which is the third frame (3) stored in the video buffering component (134).
  • the third frame (3) is tagged with an event and thus, the quality decision component (136) selects a lower image compression ratio. Moreover, the quality decision component (136) selects a more frequent frame sequence for the following frame, wherein the quality decision component (136) selects the fourth frame (4) as the following frame. Thereon, the third frame (3) is compressed to a size of 180kbit and then, packetized with the first frame (1). The total size of the packetized frames is compared with the bandwidth of the client device (150) connection. Since the total size of the packetized frames of 300kbit is lower than the estimated bandwidth of 750kbit/s, the quality decision component (136) selects and extracts the following frame which is the fourth frame (4) stored in the video buffering component (134).
  • the fourth frame (4) is tagged with an event and thus, the quality decision component (136) selects a lower image compression ratio. Moreover, the quality decision component (136) selects a more frequent frame sequence for the following frame, wherein the quality decision component (136) selects the fifth frame (5) as the following frame. Thereon, the fourth frame (4) is compressed to a size of 180kbit and then, packetized with the first frame (1) and the third frame (3). The total size of the packetized frames is compared with the bandwidth of the client device (150) connection. Since the total size of the packetized frames of 480kbit is still lower than the estimated bandwidth of 750kbit/s, the quality decision component (136) selects and extracts the following frame which is the fifth frame (5) stored in the video buffering component (134).
  • the fifth frame (5) is tagged with no event and thus, the quality decision component (136) selects a higher image compression ratio. Moreover, the quality decision component (136) selects a less frequent frame sequence for the following frame, wherein the quality decision component (136) skips a frame to select the seventh frame (7) as the following frame. Thereon, the fifth frame (5) is compressed to a size of 120kbit and then, packetized with the first frame (1), the third frame (3) and the fourth frame (4). The total size of the packetized frames is compared with the bandwidth of the client device (150) connection. Since the total size of the packetized frames of 600kbit is lower than the estimated bandwidth of 750kbit/s, the quality decision component (136) selects and extracts the following frame which is the seventh frame (7) stored in the video buffering component (134).
  • the seventh frame (7) is tagged with no event and thus, the quality decision component (136) selects a higher image compression ratio. Moreover, the quality decision component (136) selects a less frequent frame sequence for the following frame, wherein the quality decision component (136) skips two frames to select the tenth frame (10) as the following frame. Thereon, the seventh frame (7) is compressed to a size of 100kbit and then, packetized with the first frame (1), the third frame (3), the fourth frame (4) and the fifth frame (5). The total size of the packetized frames is compared with the bandwidth of the client device (150) connection. Since the total size of the packetized frames of 700kbit is still lower than the estimated bandwidth of 750kbit/s, the quality decision component (136) selects and extracts the following frame which is the tenth frame (10) stored in the video buffering component (134).
  • the tenth frame (10) is tagged with an event and thus, the quality decision component (136) selects a lower image compression ratio. Moreover, the quality decision component (136) selects a more frequent frame sequence for the following frame. Thereon, the tenth frame (10) is compressed to a size of 180kbit and then, packetized with the first frame (1), the third frame (3), the fourth frame (4), the fifth frame (5) and the seventh frame (7). The total size of the packetized frames is compared with the bandwidth of the client device (150) connection.
  • since the total size of the packetized frames of 880kbit exceeds the estimated bandwidth of 750kbit/s, the quality decision component (136) discards the tenth frame (10) from the packetized frames.
  • the packetized frames comprising the first frame (1), the third frame (3), the fourth frame (4), the fifth frame (5) and the seventh frame (7) as shown in FIG. 4c are then sent to the video encoder (137).
  • the tenth frame (10) will be included for the following packetized frames to be sent to the video encoder (137).
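  • For illustration only (not part of the disclosure), the running packet size in the worked example above can be checked with a few lines of arithmetic; the frame sizes are those given in the description.

      BANDWIDTH_KBIT = 750
      # (frame number, compressed size in kbit) in selection order, as in FIGS. 4b-4c
      selected = [(1, 120), (3, 180), (4, 180), (5, 120), (7, 100), (10, 180)]

      total = 0
      for frame_no, size_kbit in selected:
          if total + size_kbit <= BANDWIDTH_KBIT:
              total += size_kbit
              print(f"frame {frame_no}: packet size now {total} kbit")
          else:
              print(f"frame {frame_no}: {total + size_kbit} kbit exceeds {BANDWIDTH_KBIT} kbit, deferred")
      # -> frames 1, 3, 4, 5 and 7 fit (700 kbit); frame 10 is deferred to the next packet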

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention relates to a system for adaptive media content delivery (100). The system (100) adapts the quality of the media content to be delivered based on the bandwidth of a client device (150) connection and the events detected in the media content. This allows the system (100) to adaptively send media content while preserving the quality of the frames capturing an event. The system (100) comprises a media source (110), a server (130), and a client device (150). The server (130) includes a video acquisition component (131), a video analytic component (132), a database (133), a video buffering component (134), a bandwidth rate receiver component (135), a quality decision component (136), a video encoder (137), and a socket listener component (138). The client device (150) includes a socket downloader component (151) and a video player (152).
PCT/MY2012/000171 2011-12-28 2012-06-29 System and method for adaptive media content delivery WO2013100754A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
MYPI2011700206A MY173712A (en) 2011-12-28 2011-12-28 A system and method for adaptive media content delivery
MYPI2011700206 2011-12-28

Publications (1)

Publication Number Publication Date
WO2013100754A1 true WO2013100754A1 (fr) 2013-07-04

Family

ID=46754742

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/MY2012/000171 WO2013100754A1 (fr) 2011-12-28 2012-06-29 Système et procédé de livraison de contenu de média adaptatif

Country Status (2)

Country Link
MY (1) MY173712A (fr)
WO (1) WO2013100754A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104735485A (zh) * 2015-03-05 2015-06-24 上海小蚁科技有限公司 Method and device for playing video
US10178203B1 (en) 2014-09-23 2019-01-08 Vecima Networks Inc. Methods and systems for adaptively directing client requests to device specific resource locators

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6014694A (en) 1997-06-26 2000-01-11 Citrix Systems, Inc. System for adaptive video/audio transport over a network
WO2001027763A1 (fr) * 1999-10-08 2001-04-19 Ivex Corporation Networked digital security system and methods
EP1777969A1 (fr) 2005-10-10 2007-04-25 BRITISH TELECOMMUNICATIONS public limited company Adaptive video transmission with variable frame rate
WO2008119043A1 (fr) * 2007-03-27 2008-10-02 Armida Technologies Wireless integrated security controller
US20110119716A1 (en) * 2009-03-12 2011-05-19 Mist Technology Holdings, Inc. System and Method for Video Distribution Management with Mobile Services


Also Published As

Publication number Publication date
MY173712A (en) 2020-02-17

Similar Documents

Publication Publication Date Title
US8238441B2 (en) Method and system for scalable representation, storage, transmission and reconstruction of media streams
KR101734835B1 (ko) Apparatus and method for retransmission decision
US20080259796A1 (en) Method and apparatus for network-adaptive video coding
JP4712238B2 (ja) Video signal encoding device, video signal transmitting device, and video signal encoding method
US20090190652A1 (en) System and method for controlling transmission of moving image data over network
EP2919453A1 (fr) Commutation de flux vidéo
WO2008094092A9 (fr) Method and arrangement for video telephony quality assessment
KR102059222B1 (ko) Content-dependent video quality model for video streaming services
RU2009116472A (ru) Dynamic modification of video properties
WO2016018543A1 (fr) Automatic and adaptive selection of profiles for adaptive bitrate streaming
CN106162199B (zh) Method and system for video processing with back channel message management
EP1679895A1 (fr) Media signal transmission method, reception method, transmission/reception method, and device
CN1605075A (zh) System and method for adjusting a video stream based on client or network environment
EP1187460A2 (fr) Méthode et appareil de transmission des images et méthode et appareil de réception des images
CN111093083A (zh) Data transmission method and device
US20020184645A1 (en) Measurement of quality of service
CN110099250B (zh) Surveillance video quality assessment method and playback control device
CN107580781B (zh) Video encoder
US20070110168A1 (en) Method for generating high quality, low delay video streaming
WO2013100754A1 (fr) System and method for adaptive media content delivery
TWI566550B (zh) Image information transmission method and packet communication system
Seeling et al. Video quality evaluation for wireless transmission with robust header compression
WO2017075692A1 (fr) Method and system for rate control in a content-controlled streaming network
WO2017041163A1 (fr) Method and system for panoramic multimedia streaming
KR100701032B1 (ko) System and method for controlling transmission of video data over a network

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12751379

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12751379

Country of ref document: EP

Kind code of ref document: A1