TW201145981A - Cloud video and event processing sub-system and source end and player - Google Patents
- Publication number
- TW201145981A
- Authority
- TW
- Taiwan
- Prior art keywords
- module
- video
- event
- data
- audio
- Prior art date
Landscapes
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Description
201145981 VI. Description of the Invention

[Technical Field]
The present invention relates to an audio-visual and event processing system that receives MPEG-TS audio-visual and event data generated by a source end over the Internet using the HTTP network protocol, and that can transmit the MPEG-TS data over the Internet to a receiving end using the same HTTP network protocol.
[Prior Art]
Digital surveillance systems are built on either a closed or a networked architecture. The former uses a conventional DVR (Digital Video Recorder) for local stand-alone recording; the latter uses a remote network server, an NVSR (Network Video Streaming Recorder, also called NVR), to record from network-capable cameras (IP cameras) and to let the cameras' users browse audio and video remotely. In general, the camera produces image and sound data and compresses it to ease transmission over the network.

Network transmission is either passive or active. In the passive mode the NVSR polls the IP camera for its data, so the camera must be deployed at a fixed, publicly visible IP address, which restricts where it can be used. In the active mode the IP camera writes its data to the NVSR, so the camera can be installed at any location with Internet access, as long as the NVSR itself sits at a publicly visible address. An NVSR is typically a group of network servers and can serve thousands of cameras. The active mode is convenient for audio-visual service operators: a user only needs to install an Internet-capable IP camera, and the operator hosts the recording. Through the NVSR, the user can then watch the corresponding camera live in near real time, or play back historical audio and video.

Audio-visual data is delivered over the network in one of two ways: file transfer or streaming. In file transfer, the data is sent as a video file; once the transfer succeeds, the viewer's playback device (a player on a computer, PDA, or mobile phone) plays the file. In streaming, the data is sent as many small network packets; the viewer's playback device collects them and decides when to play.
File transfer is a Web (HTTP protocol) application. It is very common on the Internet and well suited to playback, but because audio-visual data is usually large, it serves real-time viewing poorly. Audio-visual data is therefore generally streamed (over RTSP or SIP) in RTP (Real-time Transport Protocol) packets, which achieves lower latency and suits real-time viewing.

Streaming in the RTP/RTSP (Real-Time Streaming Protocol) packet format, however, has several well-known problems:

Network devices such as NAT routers, firewalls, and proxies must be specially configured or designed to let RTP/RTSP packets through, which complicates deployment. Applications that use the HTTP (HyperText Transfer Protocol) network protocol have no such trouble, because these devices are usually designed with extensive support for Web applications.

RTP packetization is done at the network application layer and cannot fully account for the characteristics of the transport network. To reduce packet-header overhead, relatively long packets are commonly used. On some networks, such as wireless links (Wireless LAN, WiMAX) whose transmission-channel quality varies with time and place, long packets sharply reduce the data delivery success rate. For audio-visual browsing this at worst causes choppy playback, with no serious consequence; but when an IP camera uploads recordings to an NVSR, guaranteed delivery demands a TCP (Transmission Control Protocol) connection, and transmission errors in long packets force packet retransmissions that place a serious burden on network bandwidth.

Moreover, in a network architecture where RTP/RTSP uploads audio-visual data to an NVSR, it is difficult to encrypt that data end to end, where "end to end" means from the audio-visual source to the user's viewing end. Such privacy is essential for a camera installed in a private home, yet when the NVSR hosts the recording, the NVSR can neither manage encrypted audio-visual data nor let the viewing end play it.

In addition, RTP/RTSP browsing cannot react properly to a network whose bandwidth varies. When bandwidth is ample, high-quality video can be enjoyed at high fidelity; but when bandwidth shrinks or degrades, high-quality video implies a high bandwidth demand, and large numbers of transfers fail.

For these reasons, on October 5, 2009, Apple Computer proposed a new live-streaming network specification based on the HTTP protocol and implemented it in its then-current network products, including the MacBook, iMac, iPhone, iPod, and iPad.
Apple's HTTP live-streaming specification lets a user play audio-visual data by downloading "small files" the way a Web page is browsed. Its details include the following:

The audio-visual data is encapsulated by a server into short MPEG-TS packets of 188 bytes, and many such short packets make up a video file a few seconds long with the extension .ts. Compressed video and compressed audio are each packed into successive short packets in order of generation time. This audio-video format is the well-known synchronized MPEG-2 TS (Transport Stream) format, familiar to most playback devices. For example, the DVB (European) and ATSC (American) digital-television broadcast standards use MPEG-TS packets to carry digital TV audio and video; besides video and audio, certain control data, such as broadcast program guides, can also be encapsulated in MPEG-TS.

The server also produces a playlist, with the extension .m3u8, that records the sequence of .ts video files, so that a viewer can fetch the playlist over the HTTP protocol and decide which .ts files in the list to read and play.
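As a concrete illustration of such a playlist, the sketch below emits an .m3u8-style index for a run of .ts segments. The segment names and the 10-second duration are illustrative assumptions in line with Apple's recommendation, not values mandated by this specification:

```python
def make_media_playlist(segment_uris, seg_seconds=10):
    """Emit an .m3u8-style playlist listing .ts segments in play order."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{seg_seconds}",
    ]
    for uri in segment_uris:
        lines.append(f"#EXTINF:{seg_seconds},")  # per-segment duration
        lines.append(uri)
    lines.append("#EXT-X-ENDLIST")  # marks a finished (non-live) playlist
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print(make_media_playlist(["seg000.ts", "seg001.ts", "seg002.ts"]))
```

A live playlist would omit `#EXT-X-ENDLIST` and keep appending new segment entries as the server produces them.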
This streaming playback is in effect download-then-play (over the HTTP network protocol). Apple recommends 10 seconds of video per file, so live playback delay can be around 30 seconds.

When the server generates the playlist, it can also configure data encryption for each .ts video file. This serves the privacy needs of video transfer, recording, and viewing, and suits billing mechanisms; for the user, end-to-end encryption becomes feasible. The encryption method can be one that is computationally light yet highly secure, such as the AES-128 (128-bit Advanced Encryption Standard) or AES-256 algorithm.

The server can simultaneously produce .ts files of the same video at several audio-visual quality levels. A viewer's player can then, at any moment, pick "small files" of a different bandwidth quality to match its own network conditions. For example, when network bandwidth suddenly degrades, the player may play a high-quality file for the first 10 seconds, a medium-quality file for the next 10 seconds, and a low-quality file for the 10 seconds after that, achieving seamless audio-visual playback that reacts appropriately to bandwidth quality.

The server can store the "small file" videos at different URIs. This matters for recording devices, and it lets a viewer locate a playable "small file" according to network conditions.

Because authentication and billing for Web pages are already very mature on the Internet, Apple's HTTP live-streaming specification gains many administrative benefits from Web-page management: user identity can be confirmed by authentication, and billing can use the AAA (Authentication, Authorization and Accounting) mechanism.
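The multi-quality arrangement above corresponds to what HTTP live streaming calls a master playlist, which points the player at one media playlist per quality level. The bandwidth figures and URIs below are illustrative assumptions, not values from the specification:

```python
def make_master_playlist(variants):
    """Emit a master playlist from (bits_per_second, media_playlist_uri) pairs."""
    lines = ["#EXTM3U"]
    for bps, uri in sorted(variants, reverse=True):  # list best quality first
        lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bps}")
        lines.append(uri)
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print(make_master_playlist([
        (300_000, "low/index.m3u8"),
        (2_000_000, "high/index.m3u8"),
        (800_000, "mid/index.m3u8"),
    ]))
```

The player refreshes its bandwidth estimate between downloads and may switch to a different variant at the next segment boundary, which matches the 10-second-granularity quality switching described above.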
The main drawback of Apple's HTTP live-streaming specification is that it may not satisfy the real-time requirements of audio-visual delivery. If a traditional surveillance system adopted Apple's specification it would resolve some of the RTP/RTSP drawbacks, but its real-time playback would be problematic; Apple's specification is better suited to delayed playback or to playback of past audio and video.

Nor does the Apple specification cover the network transmission normally performed in surveillance recording by the camera, the producer of the audio-visual data. When an IP camera is positioned as a producer of recording information, rather than as the audio-visual browsing application Apple had in mind, the server end needs certain strengthened functions. When the IP camera generates audio-visual data and uploads it to the NVSR server, it faces the same RTP/RTSP problems with network transmission equipment and environments, and the same problems of audio-visual privacy; the IP camera and the server end must change as a whole. Furthermore, a surveillance camera is often accompanied by trigger events from burglar alarms or sensors, and synchronization between the audio-visual data and the event data requires further consideration.

Traditional surveillance systems do handle events: if the door of a home is opened, a "door opened" event is recorded. But there is no way to transmit the "door opened" event in synchrony with the audio-visual data, because conventionally the audio-visual data and the event data travel over separate, independent network connections (TCP or UDP). Data generated earlier may then arrive later: the video at the moment of "door opened" may arrive one step before or after the event record, so a user who calls up the video at the event time may not see who actually opened the door.

In addition, different viewers often sit on the same local network; when they browse the same video, Apple's specification cannot keep the playback synchronized. Neighboring computers in a company office or a school classroom watching the same video may show it with a time offset. In fact, any two computers on the same local network present the same external IP and use different port numbers to open two independent connections to the video; besides wasting network bandwidth, this cannot guarantee synchronized viewing of the same video.

In summary, with traditional surveillance systems, protocol issues cause network devices to mishandle packets in transit and sometimes place a severe burden on those devices. Surveillance services also often serve many users, which is very inconvenient to manage. These problems call for a solution.

[Summary of the Invention]
The main object of the present invention is to remedy the above deficiencies of the prior art: to transmit audio-visual and event data together, providing not only real-time viewing of audio, video, and events, but also after-the-fact or delayed viewing of audio-visual and event data.
To achieve the above object, the audio-visual and event processing system of the present invention comprises three main parts.

A source end produces the audio-visual and event data. The audio-visual and event data are encapsulated in the MPEG-TS packet format, in order of generation time, over a single TCP connection of the same HTTP network protocol, and sent over the network to a cloud audio-visual and event processing subsystem.

The cloud audio-visual and event processing subsystem comprises a receiving processing module, a file segmentation module, a file management module, an instant play module, an event processing module, and a webpage processing module.

A receiving end, including a receiving-end player, uses Web pages to browse the audio, video, and events held by the cloud audio-visual and event processing subsystem, either as consecutive short-duration files or in MPEG-TS packet form.

The modules of the cloud audio-visual and event processing subsystem are as follows:

The receiving processing module receives the combined audio-visual and event MPEG-TS packet data produced by a source end using the HTTP protocol. It forwards the audio-visual and event data to the file segmentation module and the instant play module, and forwards the event data to the event processing module.

The file segmentation module combines and divides the audio-visual data and event data into a plurality of small audio-visual and event files and an index file, where the index file records the order of the small files. Dividing the data into files for HTTP download means that, as the audio-visual and event data cross the network, network devices give them better application support.

The file management module stores and manages the plurality of small audio-visual and event files and the index file produced by the file segmentation module, so that the source end's audio-visual and event data can later be retrieved.

The instant play module, connected to the receiving processing module, receives the audio-visual data and event data and relays them immediately to the receiving end using the HTTP protocol over a TCP connection.

The event processing module, connected to the receiving processing module, receives the event data and stores it in an event database.

The webpage processing module, connected to the file management module, the instant play module, and the event processing module, provides the receiving end with real-time audio-visual and event MPEG-TS packet data, with files of historical audio and video and events, and with Web pages (HTTP applications) of collated events. The webpage processing module also notifies the different receiving ends that share one IP and browse the same video to use multicast: one player at the receiving side then actively multicasts to the other receiving ends, saving browsing bandwidth and letting those same-IP receiving ends view the video in synchrony.

In an embodiment, to deliver audio, video, and events most effectively, the data travel in packets conforming to the HTTP protocol over a TCP connection, because network devices are usually designed to give HTTP and TCP traffic better application support, and a TCP connection guarantees delivery. The audio-visual and event data are also carried in the short MPEG-TS packet format, which reduces the chance of retransmission after packet errors when transmitting over wireless or otherwise unstable networks.

[Detailed Description of the Preferred Embodiments]
To help the examiners better understand the technical content of the present invention, a preferred embodiment is described below.

Please refer to Figures 1 and 2 for the architecture of the audio-visual and event processing system 1 of the present invention. The audio-visual data 23 and event data 24 produced by the source end 2 are encapsulated in the MPEG-TS format, in order of generation time, over a single TCP connection of the same HTTP network protocol, and transmitted over the network 3 to the cloud audio-visual and event processing subsystem 4, which stores and organizes the data as segmented files and in a database. The receiving end/player 5 (used mainly to receive and play out the audio-visual data) can in turn browse the audio, video, and events in HTTP fashion over the network 3, through the webpage processing module 46 of the cloud processing subsystem 4.

When the audio-visual and event processing system 1 is applied to surveillance, the source end 2 of the surveillance system comprises an audio-visual source end 21 and an event source end 22. The audio-visual source end 21 includes an image generation module 21a, an image compression module 21b, a sound generation module 21c, and a sound compression module 21d. For example, a specially designed IP camera installed in a home is the audio-visual source end 21: besides producing its own audio-visual data 23, it simultaneously collects, by wire or wirelessly, the event data 24 produced by event source ends 22 (event generation modules 22a such as burglar alarms and sensors), encapsulates the data in the MPEG-TS packet format in order of generation, and transmits it with its network transmission device to the cloud audio-visual and event processing subsystem 4. The receiving end/player 5 may be at a workplace, a hotel, or outdoors; the user receives the data organized by the cloud audio-visual and event processing subsystem 4 through a network-connected device such as a computer or an Internet-capable mobile phone, and can therefore watch over the home through the services of the audio-visual and event processing system 1.

The audio-visual and event processing system 1 can also be applied to teaching, supplying instructional audio-visual data 23; the source end 2 might then be a remote DVD player and the receiving end/player 5 the many computers in a classroom.

Take as an example an audio-visual source end 21 that is a network-capable camera (it could equally be an IP-Cam, a DVR, a DVS "Digital Video Server", or any other video generator). It typically applies some audio-visual compression: the image compression module 21b compresses video with, say, H.264, and the sound compression module 21d compresses audio with AAC (Advanced Audio Coding) to produce the audio-visual data 23. Whatever device the source end 2 is, the point is that it supplies static or dynamic audio-visual data 23; note that the audio-visual data 23 contains image, sound, and text together, where the text data includes ordinary characters, letters, codes, or symbols. Audio-visual data 23 may contain text for the same reason a film carries subtitles.

Consider further the home-monitoring example. Besides the camera, the source end 2 includes devices such as burglar alarms (door or window detectors) and sensors (such as smoke detectors) that produce event data 24 when triggered. The event data 24 produced by these event source ends 22 can be signalled to and collected by a home gateway for subsequent network transmission. In home-monitoring applications, event data 24 is often reviewed together with the audio-visual data 23, yet conventional surveillance systems cannot send the audio-visual data 23 and event data 24 in order over the same network connection. As a result there are times when, say, a door-opened event occurs and the user immediately browses the audio-visual data 23 but cannot tell what happened, because the video for that moment may not yet have arrived, or may have arrived long before.
In this embodiment, the event data 24 collected by a home gateway is passed to the audio-visual source end 21, which therefore gathers the audio-visual data 23 and event data 24 together; its packet generation module 26 produces 188-byte MPEG-TS packets in sequence and sends them over a single TCP connection of the same HTTP network protocol. This guarantees that the two kinds of data stay synchronized in time.

Image, sound, and text data are already clearly defined in the standard MPEG-TS format; the event data 24 of this embodiment is instead represented with the MPEG-TS null packet. The null packet was originally defined so that an MPEG-TS stream could maintain a constant bit rate: a receiver that gets a null packet gives it no further interpretation and simply discards it. This embodiment repurposes it to encapsulate the event data 24. The image, sound, text, and event data carried over the network are thus sent to the cloud audio-visual and event processing subsystem 4 in the same packet format over the same network connection.
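The null-packet technique can be sketched end to end. In a 188-byte MPEG-TS packet the first byte is the 0x47 sync byte and the 13-bit PID of a null packet is 0x1FFF; the layout of the event record inside the null packet's payload is not fixed by the patent, so the plain-bytes payload below is a hypothetical choice for illustration:

```python
TS_PACKET_SIZE = 188
NULL_PID = 0x1FFF  # PID reserved for null packets in MPEG-TS

def make_event_packet(event: bytes) -> bytes:
    """Pack one event record into a 188-byte TS null packet."""
    if len(event) > TS_PACKET_SIZE - 4:
        raise ValueError("event record too large for a single TS packet")
    header = bytes([
        0x47,                    # sync byte
        (NULL_PID >> 8) & 0x1F,  # no error/PUSI/priority flags; PID high bits
        NULL_PID & 0xFF,         # PID low bits
        0x10,                    # payload only, continuity counter 0
    ])
    return header + event.ljust(TS_PACKET_SIZE - 4, b"\xff")

def demux(stream: bytes):
    """Split a TS byte stream into (audio/video packets, event records)."""
    av, events = [], []
    for off in range(0, len(stream), TS_PACKET_SIZE):
        pkt = stream[off:off + TS_PACKET_SIZE]
        if len(pkt) < TS_PACKET_SIZE or pkt[0] != 0x47:
            continue  # a real demuxer would resynchronise on the 0x47 byte
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        if pid == NULL_PID:
            events.append(pkt[4:].rstrip(b"\xff"))  # recover the event record
        else:
            av.append(pkt)
    return av, events
```

Because every packet, whether audio, video, or event, travels in one TCP byte stream in generation order, the receiver sees a "door opened" record at exactly its place between the surrounding video packets, while a stock player that does not know this convention simply discards the null packets, as the standard prescribes.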
Whether long or short packets are used, the audio-visual data 23 and event data 24 travel as packets in transit. In this embodiment, the audio-visual data 23 and event data 24 are packed in sequence into MPEG-TS application-layer packets and carried in the packets of an HTTP-protocol TCP transport-layer connection; with the headers added, each short packet finally comes to about 228 bytes (188 + 40). Such short packets produce considerable header overhead, but in a wireless network environment short packets give a guaranteed-delivery TCP connection comparatively high transmission efficiency: random errors are relatively frequent on data transmitted over a wireless link, and since TCP guarantees delivery by resending any packet whose data arrives in error, a short packet's chance of being resent is greatly reduced compared with a long packet. Relative to the roughly 1500-byte RTP packets (plus UDP and IP headers) in common use, the 188-byte MPEG-TS packet wastes more on packet headers, but it makes the packets well suited to transmission in a wireless network environment.
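A simple model makes this trade-off concrete. Assuming, purely for illustration, that each byte is corrupted independently with some fixed probability, the chance that a whole packet arrives intact shrinks geometrically with its length:

```python
def packet_success_prob(packet_bytes: int, byte_error_rate: float) -> float:
    """Probability that a packet arrives intact under independent byte errors."""
    return (1.0 - byte_error_rate) ** packet_bytes

if __name__ == "__main__":
    ber = 1e-3  # illustrative per-byte corruption rate on a poor wireless link
    short = packet_success_prob(188 + 40, ber)  # TS packet plus TCP/IP headers
    long_ = packet_success_prob(1500, ber)      # MTU-sized RTP-style packet
    print(f"228-byte packet: {short:.2%} arrive intact")
    print(f"1500-byte packet: {long_:.2%} arrive intact")
```

Under this toy model roughly four out of five short packets arrive intact while fewer than one in four of the long packets do, so a guaranteed-delivery TCP stream of short packets triggers far fewer retransmissions; the extra header bytes are the price paid for that.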
The cloud audio-visual and event processing subsystem 4 of the present invention consists chiefly of a receiving processing module 41, a file segmentation module 42, a file management module 43, an instant play module 44, an event processing module 45, and a webpage processing module 46. Note that a "module" in this specification is one or more electronic devices, such as circuit boards, storage devices holding software, servers, or computer systems.

The receiving processing module 41 receives the audio-visual data 23 and event data 24 produced by the source end 2. It passes the MPEG-TS data on to the file segmentation module 42, the file management module 43, and the instant play module 44, but sends only the event data 24 to the event processing module 45. The file segmentation module 42 and file management module 43 can be modules implemented to the Apple specification. A feature of the present invention is that, because the event data 24 takes the form of null packets within the MPEG-TS stream (of the HTTP protocol), it can be pulled straight out of the stream; and since the event data 24 carries what a burglar alarm (door or window detector) or sensor actually reported, the substance of the event is known at once, and the audio-visual data 23 stays synchronized with the event data 24. The event processing module 45, connected to the receiving processing module 41, not only stores the event data 24 but also interprets it.

The player of the receiving end/player 5 includes a screen processing module 51, an image decompression module 52, a sound decompression module 53, a packet receiving module 54, and an event control module 55. The packet receiving module 54 receives the audio-visual data 23 and event data 24; the image decompression module 52 and sound decompression module 53 decompress the compressed audio-visual data 23; and the screen processing module 51 then plays the audio-visual data 23 together with the event data 24. The event control module 55 is used to generate event control data 25.
The event processing module 45, besides receiving the event data 24 from the receiving/processing module 41, also parses the event data 24. The player of the receiving end/player 5 comprises a screen processing module 51, an image decompression module 52, a sound decompression module 53, a packet receiving module 54, and an event control module 55. The packet receiving module 54 receives the audio-visual data 23 and the event data 24; the image decompression module 52 and the sound decompression module 53 decompress the compressed audio-visual data 23; the screen processing module 51 then plays the audio-visual data 23 together with the event data 24. The event control module 55 is used to generate event control data 25.
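The parsing step in the event processing path can be sketched as follows. The payload layout (UTF-8 event text padded with 0xFF inside the null packet) is our assumption for illustration; the patent only fixes the null-packet transport, not the payload encoding.

```python
# Sketch of recovering the event text carried in an MPEG-TS null packet so
# the screen processing module can display it like a subtitle.
# Assumption: the 184-byte payload is UTF-8 text padded with 0xFF bytes.
def extract_event_text(packet: bytes) -> str:
    if len(packet) != 188 or packet[0] != 0x47:
        raise ValueError("not a valid 188-byte TS packet")
    payload = packet[4:].rstrip(b"\xff")  # drop 4-byte header and padding
    return payload.decode("utf-8")

def as_subtitle(event_text: str, hhmmss: str) -> str:
    """Format an event the way Fig. 3 shows it, below the playing video."""
    return f"[{hhmmss}] {event_text}"
```

A player built this way could feed the returned string straight into its caption renderer alongside the decoded video frames.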
The event control data 25 is transmitted through the event processing subsystem 4 to the source end 2. One feature of the present invention is that the instant playback module 44 supports the receiving end/player 5 in viewing the audio-visual data 23 in real time: the instant playback module 44 directly relays the audio-visual data 23 to the receiving end/player 5 in a manner conforming to the HTTP protocol over a TCP connection. The web processing module 46 lets the receiving end/player 5 browse the audio-visual data 23 through web pages, and likewise lets the receiving end/player 5 browse the events. The audio-visual and event processing system 1 of the present invention further covers the case in which the receiving end/player 5 does not need to view the audio-visual data 23 and event data 24 produced by the source end 2 in real time (for example, when browsing historical audio-visual data 23, or when the receiving end/player 5 device cannot view in real time and instead views the audio-visual data 23 with a delay of, say, 30 seconds). The file segmentation module 42, electrically connected to the receiving/processing module 41, combines and divides the audio-visual data 23 and the event data 24 into a plurality of small audio-visual and event files 48 and a corresponding index file 47, where the index file 47 describes the playback order of the plurality of small audio-visual and event files 48. The file management module 43 stores and manages the plurality of small audio-visual and event files 48 and the index file 47, so that the receiving end/player 5 can download the index file 47 and the plurality of small audio-visual and event files 48. The playback length of each small audio-visual and event file 48 may be between 3 and 300 seconds.
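How the file segmentation module 42 might emit an index file 47 in playlist form can be sketched briefly. This is a hedged sketch: the segment filenames, durations, and tag choices below are illustrative, not taken from the patent, though the tags themselves are standard playlist tags of the Apple-originated HLS format the text refers to.

```python
# Sketch of producing an index file 47 (.m3u8) for the small audio-visual
# and event files 48. Segment names and durations are made up for the example.
def build_m3u8(segments, target_duration):
    """segments: list of (filename, duration_seconds) in playback order."""
    lines = ["#EXTM3U",
             "#EXT-X-VERSION:3",
             f"#EXT-X-TARGETDURATION:{target_duration}",
             "#EXT-X-MEDIA-SEQUENCE:0"]
    for name, dur in segments:
        lines.append(f"#EXTINF:{dur:.1f},")  # per-segment duration
        lines.append(name)
    lines.append("#EXT-X-ENDLIST")  # marks a finished (non-live) recording
    return "\n".join(lines) + "\n"

index = build_m3u8([("cam1_000.ts", 10.0), ("cam1_001.ts", 10.0)], 10)
```

For a live view the `#EXT-X-ENDLIST` line would be omitted and the playlist re-fetched periodically, which is how the same file format serves both the real-time and the delayed/historical viewing modes described above.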
It is recommended that the small audio-visual and event files 48 follow the MPEG-TS format of Apple's .ts specification, and that the index file 47 follow Apple's .m3u8 format specification; such formats are directly accepted by currently popular Apple audio-visual players such as those on the iMac, iPhone, and iPad. Note that, for players on Apple devices, the event data 24 in the MPEG-TS stream uses empty packets (Null Packets), so the player does not attempt to interpret that data. For receiving ends that use Microsoft Windows or another operating system, the receiving end/player 5 must be designed to play the MPEG-TS packets of the files indexed by the .m3u8 file. Referring also to the schematic diagram of Fig. 3, the screen processing module 51 of the player 5 can combine the audio-visual data 23 with the event data 24 on screen. The event data 24 is information produced by an anti-theft device or a user (user-definable terms such as "door opened" or "someone entered"), and can be attached below the playing video like film subtitles. The receiving end/player 5 can thus view the audio-visual data 23 produced by the source end 2 in real time, displayed together with the events, achieving true synchronization. Another reason for transmitting the audio-visual data 23 and the event data 24 as a plurality of small audio-visual and event files 48 is that the receiving end/player 5 can download them over HTTP and TCP connections, so that while the data travels over the network 3, network devices provide better application support and the transmission proceeds more smoothly. Network devices such as NAT routers, firewalls, and proxies therefore need no special configuration or design to let such packets pass.
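For a player that must be "designed to play" the indexed segments on Windows or another OS, the first step is reading the index file 47; a minimal parser sketch follows. The sample playlist and segment names are hypothetical, and the sketch handles only the basic `#EXTINF`/URI pairs, not every playlist tag.

```python
# Sketch of the player-side step: read an index file 47 (.m3u8) and recover
# the ordered list of small audio-visual and event files 48 to fetch over HTTP.
def parse_m3u8(text: str):
    """Return (duration_seconds, uri) pairs in playback order."""
    entries, pending = [], None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("#EXTINF:"):
            # "#EXTINF:10.0,optional title" -> 10.0
            pending = float(line[len("#EXTINF:"):].split(",")[0])
        elif line and not line.startswith("#"):
            entries.append((pending, line))  # URI line follows its #EXTINF
            pending = None
    return entries
```

The player would then download each URI in order over the same HTTP/TCP path described above and feed the concatenated MPEG-TS packets to its decoders.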
Another feature of the present invention: because the event data 24 is sent in MPEG-TS empty packets (Null Packets) synchronously with the audio-visual data 23 at any time, when the audio-visual source end 21 and the event source end 22 wish to start or stop data encryption, they can generate an encryption-state event data 24 that tells the subsequent processing units whether the data has been encrypted and which encryption parameters are in use. This feature therefore lets users apply point-to-point encryption to the audio-visual data 23 or event data 24 produced in their own homes or other monitored locations, preserving a high level of privacy. Similarly, the audio-visual source end 21 can at any time be set to start or stop producing the audio-visual data 23 at high, medium, or low compression quality. The three compression qualities can be produced simultaneously or singly. The audio-visual source end 21 generates an event data 24 announcing the change of compression quality, using MPEG-TS empty packets (Null Packets) to inform the subsequent processing units, including the usage parameters of the three compression qualities. When the audio-visual source end 21 is asked to produce audio-visual data 23 of high, medium, and low compression quality simultaneously, the three streams of audio-visual data 23 are sent over the same HTTP protocol and TCP connection, because MPEG-TS itself supports carrying multiple audio-visual streams, just as it carries multiple simultaneous programs in digital television broadcasting. In addition, when the web processing module 46 detects that the receiving end/player 5 requests the same audio-visual data 23 and event data 24 from the same IP address but different port numbers (for example, when the receiving end/player 5 consists of several computers in the same local area network), the web processing module 46 arranges a multicast service.
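How a source end might wrap one of these state announcements (here, an "encryption enabled" notice) in a single null packet is sketched below. The payload layout inside the null packet is our assumption; the patent only specifies that the event data 24 travels as null packets, not how the bytes are laid out.

```python
# Sketch of the source-end side: wrap a small event payload in one MPEG-TS
# null packet (PID 0x1FFF). The JSON-style payload is purely illustrative.
TS_PACKET_SIZE = 188

def make_event_null_packet(payload: bytes) -> bytes:
    if len(payload) > TS_PACKET_SIZE - 4:
        raise ValueError("event payload too large for one TS packet")
    header = bytes([0x47,        # sync byte
                    0x1F, 0xFF,  # PID 0x1FFF (null packet)
                    0x10])       # payload only, continuity counter 0
    body = payload.ljust(TS_PACKET_SIZE - 4, b"\xff")  # pad to 184 bytes
    return header + body

pkt = make_event_null_packet(b'{"event":"encryption","state":"on"}')
```

Because the packet is injected into the same transport stream as the audio/video packets, it stays in step with them through every hop, which is what makes the synchronous encryption and quality-change signaling described above possible.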
Specifically, the web processing module 46 asks the receiving-end player at that IP address that was already browsing, belonging to some user B (player B 5b for short), to provide a multicast service to the later-arriving receiving-end player of some user A (player A 5a for short). Referring to Fig. 4, when player A 5a asks the web processing module 46 for audio-visual browsing, the web processing module 46 issues a multicast command to player B 5b and at the same time tells player A 5a to prepare to receive the broadcast data sent by player B 5b. Another feature of the present invention: the event source end 22 sends the event data 24 as MPEG-TS empty packets (Null Packets) through the audio-visual source end 21, but in some applications the event source end 22 can also receive event control data 25 sent by the user through the receiving end/player 5. Referring to Fig. 1, the event processing module 45 can send certain event control data 25 to the source end 2, using the same TCP connection over which the source end 2 sends the audio-visual data 23 and the event data 24. Examples of event control data 25 are a remote-control signal that rotates the camera, or a command that turns on a controllable desk lamp. The control data may use MPEG-TS packets or any other format. As set forth above, the present invention differs markedly from the prior art in purpose, means, and effect. It should be noted, however, that the many embodiments above are examples given for ease of explanation; the scope of the claimed rights is defined by the appended claims rather than limited to the embodiments above. For instance, although the communication protocols suggested in the present invention use existing protocols, the invention is not limited to existing protocols and also covers future ones.
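The Fig. 4 scenario, where player B 5b re-sends the stream to a multicast group that player A 5a joins, can be sketched with standard IP multicast sockets. The group address and port are illustrative values we chose; the patent does not fix them, and a real deployment would coordinate them through the web processing module 46.

```python
# Generic IP-multicast sketch for the Fig. 4 scenario: player B 5b sends the
# stream to a multicast group; player A 5a joins the group to receive it.
import socket
import struct

GROUP, PORT = "224.1.1.1", 5004  # assumed values, not from the patent

def make_sender():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # TTL of 1 keeps the traffic inside the local area network
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    return s  # use s.sendto(ts_packet, (GROUP, PORT)) per TS packet

def make_receiver():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"))
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return s  # use s.recv(2048) to collect the relayed TS packets
```

This matches the description's claim that only one unicast stream need leave the subsystem per LAN, with redistribution handled among the players themselves.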
One feature of the present invention lies in breaking away from the protocol usage of earlier designs and finding a distinctive combination, while disclosing in the specification the recommended existing protocols; that is, the system of the present invention can be realized quickly with existing technology, with no need to develop additional network equipment. It should also be noted that the file segmentation module of the present invention divides the audio-visual data into a plurality of small audio-visual and event files and an index file, where "an index file" means at least one index file; there may of course be multiple index files, especially when the audio-visual data is very large or comes from different source ends. The MPEG-TS packet size is not limited to 188 bytes; certain customized variations likewise fall within the concept of the present invention. BRIEF DESCRIPTION OF THE DRAWINGS: FIG. 1 is an architecture diagram of the audio-visual and event processing system of the present invention. FIG. 2 is a system architecture diagram of the source end and the receiving end. FIG. 3 is a schematic diagram of the present invention showing audio-visual data combined with events, displayed as subtitles. FIG. 4 is a schematic diagram of the present invention concerning multicast.
[Main component symbol description]
- Audio-visual and event processing system 1
- Source end 2
- Audio-visual source end 21
- Image generation module 21a
- Image compression module 21b
- Sound generation module 21c
- Sound compression module 21d
- Event source end 22
- Event generation module 22a
- Audio-visual data 23
- Event data 24
- Event control data 25
- Packet generation module 26
- Network 3
- Cloud audio-visual and event processing subsystem 4
- Receiving/processing module 41
- File segmentation module 42
- File management module 43
- Instant playback module 44
- Event processing module 45
- Web processing module 46
- Index file 47
- Small audio-visual and event files 48
- Receiving end/player 5
- Screen processing module 51
- Image decompression module 52
- Sound decompression module 53
- Packet receiving module 54
- Event control module 55
- Receiving-end player A 5a
- Receiving-end player B 5b
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW99118059A TW201145981A (en) | 2010-06-04 | 2010-06-04 | Cloud video and event processing sub-system and source end and player |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW99118059A TW201145981A (en) | 2010-06-04 | 2010-06-04 | Cloud video and event processing sub-system and source end and player |
Publications (1)
Publication Number | Publication Date |
---|---|
TW201145981A true TW201145981A (en) | 2011-12-16 |
Family
ID=46766060
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW99118059A TW201145981A (en) | 2010-06-04 | 2010-06-04 | Cloud video and event processing sub-system and source end and player |
Country Status (1)
Country | Link |
---|---|
TW (1) | TW201145981A (en) |
- 2010-06-04: TW application TW99118059A filed; published as TW201145981A (status unknown)
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI467985B (en) * | 2012-12-06 | 2015-01-01 | Gemtek Technology Co Ltd | Video playback system supporting group-based billing mechanism and related computer program products |
TWI508562B (en) * | 2013-03-19 | 2015-11-11 | Hon Hai Prec Ind Co Ltd | Cloud service device, method of providing multi-preview in video playing and system |
US9584572B2 (en) | 2013-03-19 | 2017-02-28 | Hon Hai Precision Industry Co., Ltd. | Cloud service device, multi-image preview method and cloud service system |
TWI706364B (en) * | 2017-11-16 | 2020-10-01 | 大陸商北京點石經緯科技有限公司 | A Smart Classroom Teaching System Based on Internet of Things |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9954717B2 (en) | Dynamic adaptive streaming over hypertext transfer protocol as hybrid multirate media description, delivery, and storage format | |
US10110932B2 (en) | Session administration | |
US9813740B2 (en) | Method and apparatus for streaming multimedia data with access point positioning information | |
US9462310B2 (en) | System for exchanging media content between a media content processor and a communication device | |
US10057535B2 (en) | Data segment service | |
US10863247B2 (en) | Receiving device and data processing method | |
US8627350B2 (en) | Systems and method for determining visual media information | |
US20160127798A1 (en) | Content supply device, content supply method, program, and content supply system | |
JP2004088466A (en) | Live video distribution system | |
US20200221161A1 (en) | Reception apparatus, transmission apparatus, and data processing method | |
WO2012097549A1 (en) | Method and system for sharing audio and/or video | |
KR20120114016A (en) | Method and apparatus for network adaptive streaming user data in a outer terminal | |
US10499094B2 (en) | Transmission apparatus, transmitting method, reception apparatus, and receiving method | |
US10057624B2 (en) | Synchronization of content rendering | |
WO2015035742A1 (en) | Method, terminal and system for audio and video sharing of digital television | |
US9854276B2 (en) | Information processing device, information processing method, and program | |
EP2661878B1 (en) | System and method for video distribution over internet protocol networks | |
TW201145981A (en) | Cloud video and event processing sub-system and source end and player | |
CN110392275B (en) | Sharing method and device for manuscript demonstration and video networking soft terminal | |
WO2014036873A1 (en) | Method for sharing transport stream | |
US8671422B2 (en) | Systems and methods for handling advertisements in conjunction with network-based bookmarking | |
US10425689B2 (en) | Reception apparatus, transmission apparatus, and data processing method | |
CN115604496A (en) | Display device, live broadcast channel switching method and storage medium | |
KR20130115950A (en) | Apparatus and method for supporting broadcast service |