WO2021136369A1 - Distributed cross-node video synchronization method and system - Google Patents

Distributed cross-node video synchronization method and system (一种分布式跨节点视频同步方法及系统)

Info

Publication number
WO2021136369A1
WO2021136369A1 (PCT/CN2020/141369, CN2020141369W)
Authority
WO
WIPO (PCT)
Prior art keywords
cross
frame data
screen window
display
node
Prior art date
Application number
PCT/CN2020/141369
Other languages
English (en)
French (fr)
Inventor
张慧祥
董友球
Original Assignee
威创集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 威创集团股份有限公司
Publication of WO2021136369A1 publication Critical patent/WO2021136369A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44004 Processing of video elementary streams involving video buffer management, e.g. video decoder buffer or video display buffer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440263 Processing of video elementary streams involving reformatting operations by altering the spatial resolution, e.g. for displaying on a connected PDA
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 Assembly of content; Generation of multimedia applications
    • H04N21/854 Content authoring
    • H04N21/8547 Content authoring involving timestamps for synchronizing content

Definitions

  • the present invention relates to the technical field of video synchronization, and more specifically, to a distributed cross-node video synchronization method and system.
  • the distributed system uses a streaming media server to distribute the video compression code stream to the distributed decoding nodes through the network.
  • Each decoding node decodes the code stream and outputs it to its display device, and the images output by the decoding/splicing nodes are then stitched into one large picture window as required.
  • The video code stream undergoes network transmission and pre-processing such as decoding, noise reduction, scaling, cropping, sharpening and overlay before being sent to the video memory of the output module, which controls the output.
  • However, the network transmission time of the same video frame data sent to different nodes, and the load on those nodes, may differ, so different nodes write the same video frame to video memory at different points in time.
  • the present invention aims to overcome at least one of the above-mentioned drawbacks of the prior art and provide a distributed cross-node video synchronization method and system, which can significantly improve the effect of cross-node video synchronization.
  • a distributed cross-node video synchronization method includes the following steps:
  • when the video signal needs to be displayed in a cross-screen window, the decoded frame data corresponding to the cross-screen window is cropped and scaled according to the image cropping region of each node corresponding to the window and the display window region of each such node, to obtain cropped and scaled frame data;
  • a single thread is used to perform display processing on the frame data in the display queue.
  • the present invention solves, through the distributed cross-node video synchronization method, the problem that the cross-screen-window display of the video signal is not synchronized.
  • when the video signal needs to be displayed in a cross-screen window, the decoded frame data corresponding to the window is first cropped and scaled according to the image cropping region and the display region of each node spanned by the window, yielding cropped and scaled frame data; next, a dedicated overlay thread is started to superimpose the multiple pieces of cropped and scaled frame data according to the overlay reference time, the frame data being superimposed in a new buffer according to the display region of each node corresponding to the cross-screen window.
  • the superimposed frame data is the frame data of the cross-screen window; its timestamp is then set to the overlay reference time and it is cached in the display queue to await display; finally, a single thread is started for send-display processing, which sends the cross-screen-window frame data in the display queue for display according to the send-display reference time.
  • by cropping and scaling according to the image cropping regions and display regions of the nodes spanned by the window, the frame data corresponding to the cross-screen window is obtained and then superimposed and displayed in a new buffer according to each node's display region. Because both the overlay and the send-display stages are driven by a reference time, all nodes corresponding to the cross-screen window can perform the same operation at the same time point.
  • this solves the problem that the display pictures of the individual nodes are not synchronized when the video signal is displayed in a cross-screen window, guarantees that every frame of the cross-screen window's video signal is displayed synchronously on the multiple nodes, and significantly improves the effect of cross-node video synchronization.
  • a distributed cross-node video synchronization system including:
  • the cropping and scaling module is used for, when the video signal needs to be displayed in a cross-screen window, cropping and scaling the decoded frame data corresponding to the cross-screen window according to the image cropping region of each node corresponding to the window and the display window region of each such node, to obtain cropped and scaled frame data;
  • the overlay module is configured to superimpose multiple pieces of cropped and scaled frame data according to the overlay reference time, to obtain the frame data of the cross-screen window;
  • a setting module configured to set the time stamp of the frame data of the cross-screen window after the overlay processing as the overlay reference time, and buffer it in the display queue for display;
  • the display sending module is used for sending and displaying the frame data in the display queue by using a single thread according to the sending and displaying reference time.
  • the present invention solves, through the distributed cross-node video synchronization system, the problem that the cross-screen-window display of the video signal is not synchronized.
  • the cropping and scaling module first crops and scales the decoded frame data corresponding to the cross-screen window according to the image cropping region and the display region of each node spanned by the window, yielding cropped and scaled frame data.
  • the overlay module starts a dedicated overlay thread that superimposes the multiple pieces of cropped and scaled frame data according to the overlay reference time.
  • the frame data corresponding to the cross-screen window is obtained after processing by the cropping and scaling module, and the overlay module superimposes it in a new buffer according to the display region of each node corresponding to the window.
  • the setting module sets the timestamp of the cross-screen window's frame data to the overlay reference time while it awaits display, and the send-display module then displays the cross-screen window's frame data.
  • because both the overlay and the send-display stages are driven by a reference time, all nodes corresponding to the cross-screen window can perform the same operation at the same time point, which solves the problem that each node's display picture is out of sync when the video signal is displayed in a cross-screen window, ensures that every frame of the cross-screen window's video signal is displayed synchronously on the multiple nodes, and clearly improves the effect of cross-node video synchronization.
  • the beneficial effects of the present invention are: in the distributed cross-node video synchronization method and system of the present invention, the frame data corresponding to the cross-screen window is obtained by cropping and scaling according to the image cropping regions and display regions of the nodes on which the video signal is displayed, it is superimposed in a new buffer according to each node's display region, its timestamp is set to the overlay reference time while it awaits display, and the cross-screen window's frame data is finally displayed.
  • this solves the problem that the display picture of each node is not synchronized when the video signal is displayed in a cross-screen window, and ensures that every frame of the cross-screen window's video signal is displayed synchronously on the multiple nodes, significantly improving the effect of cross-node video synchronization.
  • Figure 1 is a flowchart of a distributed cross-node video synchronization method according to an embodiment of the present invention.
  • Figure 2 is a structural diagram of a distributed cross-node video synchronization system according to an embodiment of the present invention.
  • Figure 1 is a flow chart of a distributed cross-node video synchronization method of the present invention. The method includes the following steps:
  • when the video signal needs to be displayed in a cross-screen window, the decoded frame data corresponding to the cross-screen window is cropped and scaled according to the image cropping region and the display window region of each node corresponding to the window, to obtain cropped and scaled frame data;
  • a cross-screen window usually corresponds to at least two nodes, so when the same video signal needs to be displayed through the cross-screen window, that signal corresponds to the code streams of at least two nodes.
  • the code streams of those nodes must be synchronized before the display can be synchronized; distributed cross-node video synchronization is therefore required.
  • in step S1, when the video signal needs to be displayed in a cross-screen window, corresponding cropping and scaling attributes are set for the decoded frame data of the cross-screen window according to the image cropping region and the display region of each node corresponding to the window; applying these preset attributes to the decoded image of the cross-screen window yields the cropped and scaled frame data of all nodes.
  • the decoded frame data can be cropped according to the image cropping region and scaled according to the resolution of the display region, yielding the cropped and scaled frame data corresponding to the cross-screen window.
  • the superimposition reference time in the embodiment of the present invention is less than or equal to the sum of the system reference time when the frame data is superimposed and the preset first fixed offset time.
  • a DMA data copy method is adopted to perform superposition processing on multiple cropped and scaled frame data.
  • multiple cropped and scaled frame data can be superimposed by enabling a special overlay thread (OverlayTask).
  • the specific implementation is as follows: a value less than or equal to the sum of the current system time at the moment of superimposition and the preset first fixed offset time is defined as the overlay reference time.
  • within a fixed time interval, the multiple pieces of cropped and scaled frame data are superimposed according to the overlay reference time using DMA data copies.
  • the data is superimposed in the new corresponding buffer according to the display area corresponding to the multi-screen window to ensure that the position of the video signal remains unchanged in the corresponding display area.
  • superimposing only the multiple pieces of frame data that satisfy the overlay-reference-time condition avoids the situation in which the clock error of the nodes within a fixed time interval causes inconsistent frame data to be picked up for superimposition, which would degrade the synchronization effect.
  • the first fixed offset time is determined according to the time offset error of the node corresponding to the video signal across the screen window in the actual situation.
  • the timestamp of the frame data of the cross-screen window obtained after the above-mentioned superimposing process is set as the superimposing reference time. Because the above-mentioned steps perform superimposing processing on the frame data within the superimposing reference time, resulting in the superimposed frame data The timestamps corresponding to each node are inconsistent, so the timestamps of each node in the superimposed frame data are marked with the same superimposing reference time and cached in the display queue for display to ensure that the time points of the frame data corresponding to the nodes in the spanned window are consistent. It also ensures the synchronization of subsequent display processing.
  • a single thread is used to perform display processing on the frame data in the display queue.
  • the display reference time in the embodiment of the present invention is less than or equal to the sum of the system reference time when the frame data is transmitted and displayed and the preset second fixed offset time.
  • a separate thread is started to perform display processing on the frame data in the display queue.
  • the specific implementation is as follows: a value less than or equal to the sum of the system reference time at the moment of send-display processing and the preset second fixed offset time is defined as the send-display reference time (DispRefTime).
  • within a fixed time interval, the frame data in the display queue is taken out and sent for display according to the send-display reference time. More specifically, only the frame data in the display queue that satisfies the send-display-reference-time condition is sent for display, so the nodes corresponding to the cross-screen window's frame data perform the same send-display operation at a fixed time point, which guarantees node synchronization when the video signal is displayed in a cross-screen window.
  • the method in the embodiment of the present invention further includes the steps:
  • each video signal has its own decoding thread, and multiple decoding threads decode the multiple video signals one-to-one and simultaneously.
  • the video signals do not interfere with one another during decoding, and the decoded frame data of each signal is managed by a corresponding queue so that the subsequent cropping and scaling, overlay and send-display processing of the cross-screen window can be performed.
  • the code stream of a video signal in the embodiment of the present invention is obtained as follows: the same video signal is transmitted by multicast, and each node corresponding to the cross-screen window obtains the corresponding code stream from the multicast group.
  • all nodes that receive and send video signals use UDP multicast.
  • specifically, the nodes corresponding to the display regions spanned by the same video signal are added to the same multicast group, and each node corresponding to the cross-screen window then obtains the corresponding code stream from the multicast group, ensuring that the data of the same video signal is consistent across all nodes.
  • the ntp time synchronization mode is adopted between the nodes to synchronize time.
  • NTP is the Network Time Protocol. All nodes in the embodiment of the present invention synchronize their clocks using NTP.
  • the specific implementation is: an NTP server is enabled on every node, one node in the system serves as the reference node, and the NTP clients of the other nodes exchange UDP packets with the reference node's NTP server and adjust their clocks, ensuring that the time of all nodes runs on a unified time axis.
  • the present invention provides a distributed cross-node video synchronization method.
  • the frame data corresponding to the cross-screen window is obtained by cropping and scaling according to the image cropping region and the display region of each node on which the cross-screen window is displayed.
  • that frame data is superimposed in a new buffer according to the display region of each node corresponding to the cross-screen window, and its timestamp is set to the overlay reference time while it awaits display; finally the frame data of the cross-screen window is displayed.
  • the present invention thus has every node perform the same operation at the corresponding time point, solves the problem that the display picture of each node is not synchronized when the video signal is displayed in a cross-screen window, and ensures that every frame of the cross-screen window's video signal displayed on the multiple nodes is displayed synchronously, clearly improving the effect of cross-node video synchronization.
  • Figure 2 is a structural diagram of a distributed cross-node video synchronization system according to the present invention, and the system includes:
  • the cropping and scaling module is used for, when the video signal needs to be displayed in a cross-screen window, cropping and scaling the decoded frame data corresponding to the cross-screen window according to the image cropping region of each node corresponding to the window and the display window region of each such node, to obtain cropped and scaled frame data;
  • a cross-screen window usually corresponds to at least two nodes, so when the same video signal needs to be displayed through the cross-screen window, that signal corresponds to the code streams of at least two nodes.
  • the code streams of those nodes must be synchronized before the display can be synchronized; distributed cross-node video synchronization is therefore required.
  • the cropping and scaling module sets corresponding cropping and scaling attributes for the decoded frame data of the cross-screen window according to the image cropping region and the display region of each node corresponding to the window; applying these preset attributes to the decoded image of the cross-screen window yields the cropped and scaled frame data of all nodes.
  • the decoded frame data can be cropped according to the image cropping region and scaled according to the resolution of the display region, yielding the cropped and scaled frame data corresponding to the cross-screen window.
  • the overlay module is configured to perform overlay processing on multiple cropped and scaled frame data according to the overlay reference time to obtain the frame data of the multi-screen window;
  • the overlay module can perform overlay processing on multiple cropped and scaled frame data by enabling a special overlay thread (OverlayTask).
  • the specific implementation is as follows: a value less than or equal to the sum of the current system time at the moment of superimposition and the preset first fixed offset time is defined as the overlay reference time. Within a fixed time interval, the overlay module superimposes the multiple pieces of cropped and scaled frame data according to the overlay reference time using DMA data copies, and the frame data is superimposed in a new corresponding buffer according to the display region of the cross-screen window, so that the position of the video signal in its display region remains unchanged.
  • the overlay module superimposes only the multiple pieces of frame data that satisfy the overlay-reference-time condition, which avoids the situation in which the clock error of the nodes within a fixed time interval causes inconsistent frame data to be picked up for superimposition and degrades the synchronization effect.
  • the first fixed offset time is determined according to the time offset error of the node corresponding to the video signal across the screen window in the actual situation.
  • a setting module configured to set the time stamp of the frame data of the cross-screen window after the overlay processing as the overlay reference time, and buffer it in the display queue for display;
  • the timestamp of the cross-screen window's frame data obtained from the overlay step is set to the overlay reference time through the setting module. Because the superimposition is performed on the frame data within the overlay reference time, the timestamps of the superimposed frame data would otherwise differ between nodes; therefore the superimposed frame data of all nodes is stamped with the same overlay reference time and cached in the display queue to await display, which keeps the time points of the frame data consistent across the nodes spanned by the window and also guarantees the synchronization of the subsequent send-display processing.
  • the display sending module is used for sending and displaying the frame data in the display queue by using a single thread according to the sending and displaying reference time.
  • the send-display module sends the frame data in the display queue for display by starting a separate thread.
  • the specific implementation is as follows: a value less than or equal to the sum of the system reference time at the moment of send-display processing and the preset second fixed offset time is defined as the send-display reference time (DispRefTime).
  • the frame data in the display queue is taken out and sent for display according to this reference time. More specifically, only the frame data in the display queue that satisfies the send-display-reference-time condition is sent for display, so the nodes corresponding to the cross-screen window's frame data perform the same send-display operation at a fixed time point, which guarantees node synchronization when the video signal is displayed in a cross-screen window.
  • the embodiment of the present invention further includes: a decoding module, configured to use multiple decoding threads to respectively perform one-to-one decoding processing on the code streams of multiple channels of video signals to obtain decoded frame data.
  • each video signal has its own decoding thread; the decoding module uses multiple decoding threads to decode the multiple video signals one-to-one and simultaneously.
  • Each channel of video signals does not interfere with each other during the decoding process.
  • the decoded frame data of each video signal is managed by a corresponding queue, so that the frame data of the subsequent cross-screen window can be cropped and zoomed, superimposed and displayed.
  • the system in the embodiment of the present invention further includes:
  • the code stream acquisition module is used to obtain the corresponding code stream from the multicast group according to the node corresponding to the cross-screen window, the multicast group being formed by transmitting the same video signal by multicast.
  • all nodes that receive and send video signals use UDP multicast.
  • specifically, the nodes corresponding to the display regions spanned by the same video signal are added to the same multicast group, and each node corresponding to the cross-screen window then obtains the corresponding code stream from the multicast group, ensuring that the data of the same video signal is consistent across all nodes.
  • the ntp time synchronization mode is adopted between the nodes to synchronize time.
  • NTP is the Network Time Protocol. All nodes in the embodiment of the present invention synchronize their clocks using NTP.
  • the specific implementation is: an NTP server is enabled on every node, one node in the system serves as the reference node, and the NTP clients of the other nodes exchange UDP packets with the reference node's NTP server and adjust their clocks, ensuring that the time of all nodes runs on a unified time axis.
  • the present invention provides a distributed cross-node video synchronization system.
  • the cropping and scaling module crops and scales according to the image cropping region and the display region of each node on which the video signal's cross-screen window is displayed, obtaining the frame data corresponding to the cross-screen window.
  • the overlay processing is performed in a new buffer according to the display region of each node corresponding to the cross-screen window.
  • the setting module sets the timestamp of the cross-screen window's frame data to the overlay reference time while it awaits display, and finally the frame data of the cross-screen window is displayed through the send-display module.
  • the present invention thus has every node perform the same operation at the corresponding time point, solves the problem that the display picture of each node is not synchronized when the video signal is displayed in a cross-screen window, and ensures that every frame of the cross-screen window's video signal displayed on the multiple nodes is displayed synchronously, significantly improving the effect of cross-node video synchronization.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

A distributed cross-node video synchronization method and system, comprising the following steps: when a video signal needs to be displayed in a cross-screen window, cropping and scaling the decoded frame data corresponding to the cross-screen window according to the image cropping region of each node corresponding to the cross-screen window and the display window region of each such node, to obtain cropped and scaled frame data; superimposing multiple pieces of cropped and scaled frame data according to an overlay reference time, to obtain the frame data of the cross-screen window; setting the timestamp of the superimposed frame data of the cross-screen window to the overlay reference time and caching it in a display queue to await display; and, according to a send-display reference time, using a single thread to send the frame data in the display queue for display. The distributed cross-node video synchronization method and system can significantly improve the effect of cross-node video synchronization.

Description

Distributed cross-node video synchronization method and system
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on December 30, 2019, with application number 201911396653.5 and the title "Distributed cross-node video synchronization method and system", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the technical field of video synchronization and, more specifically, to a distributed cross-node video synchronization method and system.
Background Art
A distributed system uses a streaming media server to distribute compressed video streams over the network to distributed decoding nodes. Each decoding node decodes the stream and outputs it to its display device, and the images output by the decoding/splicing nodes are then stitched into one large picture window as required. The video stream has to go through network transmission and pre-processing such as decoding, noise reduction, scaling, cropping, sharpening and overlay before being sent to the video memory of the output module, which controls the output. However, in existing distributed systems the network transmission time of the same video frame sent to different nodes, and the load on those nodes, may differ, so different nodes write the same video frame to video memory at different times. As a result, the display devices output the same video frame at different times, producing video desynchronization.
The distributed-processor philosophy is decentralization: nodes are mutually independent. When one video signal spans several nodes, simply having each node display its own part of the signal leads to unsynchronized pictures and severe tearing, which seriously degrades the viewing experience. To solve the problem of cross-node desynchronization and guarantee the user experience, the video must be synchronized after it crosses nodes, ensuring that every frame of the video signal is displayed synchronously on all nodes.
Summary of the Invention
The present invention aims to overcome at least one of the above drawbacks of the prior art and to provide a distributed cross-node video synchronization method and system that can significantly improve the effect of cross-node video synchronization.
The technical solution adopted by the present invention is as follows:
A distributed cross-node video synchronization method comprises the following steps:
when a video signal needs to be displayed in a cross-screen window, cropping and scaling the decoded frame data corresponding to the cross-screen window according to the image cropping region of each node corresponding to the cross-screen window and the display window region of each such node, to obtain cropped and scaled frame data;
superimposing multiple pieces of cropped and scaled frame data according to an overlay reference time, to obtain the frame data of the cross-screen window;
setting the timestamp of the superimposed frame data of the cross-screen window to the overlay reference time and caching it in a display queue to await display;
according to a send-display reference time, using a single thread to send the frame data in the display queue for display.
The present invention solves, through this distributed cross-node video synchronization method, the problem that the cross-screen-window display of a video signal is not synchronized. When a video signal needs to be displayed in a cross-screen window, the decoded frame data corresponding to the window is first cropped and scaled according to the image cropping region and the display region of each node spanned by the window, yielding cropped and scaled frame data. Next, a dedicated overlay thread is started to superimpose the multiple pieces of cropped and scaled frame data according to the overlay reference time; the frame data is superimposed in a new buffer according to the display region of each corresponding node, and the superimposed frame data is the frame data of the cross-screen window. The timestamp of that frame data is then set to the overlay reference time and the data is cached in the display queue to await display. Finally, a single thread is started for send-display processing, which sends the cross-screen-window frame data in the display queue for display according to the send-display reference time. Through these steps, the frame data corresponding to the cross-screen window is obtained by cropping and scaling according to the image cropping regions and display regions of the nodes spanned by the window, and is superimposed in a new buffer according to each node's display region before being displayed. Because both the overlay and the send-display stages are driven by a reference time, all nodes corresponding to the cross-screen window can perform the same operation at the same instant. This solves the problem that the pictures of the individual nodes are not synchronized when a video signal is displayed in a cross-screen window, guarantees that every frame of the cross-screen window's video signal is displayed synchronously across the nodes, and significantly improves cross-node video synchronization.
A distributed cross-node video synchronization system comprises:
a cropping and scaling module, configured to, when a video signal needs to be displayed in a cross-screen window, crop and scale the decoded frame data corresponding to the cross-screen window according to the image cropping region of each node corresponding to the window and the display window region of each such node, to obtain cropped and scaled frame data;
an overlay module, configured to superimpose multiple pieces of cropped and scaled frame data according to an overlay reference time, to obtain the frame data of the cross-screen window;
a setting module, configured to set the timestamp of the superimposed frame data of the cross-screen window to the overlay reference time and cache it in a display queue to await display;
a send-display module, configured to use a single thread to send the frame data in the display queue for display according to a send-display reference time.
The present invention solves, through this distributed cross-node video synchronization system, the problem that the cross-screen-window display of a video signal is not synchronized. When a video signal needs to be displayed in a cross-screen window, the cropping and scaling module first crops and scales the decoded frame data corresponding to the window according to the image cropping region and the display region of each node spanned by the window, yielding cropped and scaled frame data. The overlay module starts a dedicated overlay thread to superimpose the multiple pieces of cropped and scaled frame data according to the overlay reference time; the frame data is superimposed in a new buffer according to each node's display region, and the superimposed frame data is the frame data of the cross-screen window. The setting module sets the timestamp of that frame data to the overlay reference time and caches it in the display queue to await display. The send-display module starts a single thread and, according to the send-display reference time, sends the cross-screen-window frame data in the display queue for display. Because both the overlay and the send-display stages are driven by a reference time, all nodes corresponding to the cross-screen window can perform the same operation at the same instant. This solves the problem that the pictures of the individual nodes are not synchronized when a video signal is displayed in a cross-screen window, guarantees that every frame is displayed synchronously across the nodes, and significantly improves cross-node video synchronization.
Compared with the prior art, the beneficial effects of the present invention are as follows. In the distributed cross-node video synchronization method and system of the present invention, the frame data corresponding to the cross-screen window is obtained by cropping and scaling according to the image cropping regions and display regions of the nodes on which the video signal is to be displayed, and is superimposed in a new buffer according to each node's display region; the timestamp of the cross-screen window's frame data is set to the overlay reference time while it awaits display, and the frame data is finally displayed. Because both the overlay and the send-display stages are driven by a reference time, all nodes corresponding to the cross-screen window can perform the same operation at the same instant, which solves the problem of unsynchronized node pictures during cross-screen-window display, guarantees that every frame is displayed synchronously on all nodes, and significantly improves cross-node video synchronization.
Brief Description of the Drawings
Figure 1 is a flowchart of a distributed cross-node video synchronization method according to an embodiment of the present invention.
Figure 2 is a structural diagram of a distributed cross-node video synchronization system according to an embodiment of the present invention.
Detailed Description of the Embodiments
The drawings of the present invention are for illustration only and should not be understood as limiting the present invention. To better illustrate the following embodiments, some parts in the drawings may be omitted, enlarged or reduced and do not represent the dimensions of an actual product; those skilled in the art will understand that some well-known structures and their descriptions may be omitted from the drawings.
Embodiment 1
As shown in Figure 1, which is a flowchart of the distributed cross-node video synchronization method of the present invention, the method includes the following steps:
S1: when a video signal needs to be displayed in a cross-screen window, crop and scale the decoded frame data corresponding to the cross-screen window according to the image cropping region of each node corresponding to the window and the display window region of each such node, to obtain cropped and scaled frame data;
In this embodiment, a cross-screen window usually corresponds to at least two nodes, so when the same video signal is to be displayed through the cross-screen window, that signal corresponds to the code streams of at least two nodes; these code streams must be synchronized before the display can be synchronized. Distributed cross-node video synchronization is therefore required.
In step S1, when the video signal needs cross-screen-window display, corresponding cropping and scaling attributes are set for the decoded frame data of the cross-screen window according to the image cropping region and the display region of each node corresponding to the window; applying these preset attributes to the decoded image of the cross-screen window yields the cropped and scaled frame data of every node.
Specifically, the decoded frame data can be cropped according to the image cropping region and scaled according to the resolution of the display region, yielding the cropped and scaled frame data corresponding to the cross-screen window.
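The patent gives no code for this step; the following C++ sketch shows one way a node could derive its source crop region and local display placement from the window geometry, under the assumption that window and screen positions are expressed in shared wall coordinates. All names here (Rect, planNode, the example resolutions) are illustrative and not taken from the source.

```cpp
#include <algorithm>
#include <cstdio>

struct Rect { int x, y, w, h; };

// Intersection of two rectangles (an empty result has w == 0 or h == 0).
static Rect intersect(const Rect& a, const Rect& b) {
    int x0 = std::max(a.x, b.x), y0 = std::max(a.y, b.y);
    int x1 = std::min(a.x + a.w, b.x + b.w), y1 = std::min(a.y + a.h, b.y + b.h);
    return { x0, y0, std::max(0, x1 - x0), std::max(0, y1 - y0) };
}

// For one node: the crop region inside the decoded frame and the target region
// on that node's display, for a window that spans several screens.
struct CropScale { Rect srcCrop; Rect dstOnNode; };

CropScale planNode(const Rect& window,    // window position/size in wall coordinates
                   const Rect& screen,    // this node's screen in wall coordinates
                   int frameW, int frameH) // decoded frame resolution
{
    Rect vis = intersect(window, screen);            // part of the window this node shows
    Rect crop {
        (vis.x - window.x) * frameW / window.w,      // map the window-relative offset
        (vis.y - window.y) * frameH / window.h,      // back into source-frame pixels
        vis.w * frameW / window.w,
        vis.h * frameH / window.h
    };
    Rect dst { vis.x - screen.x, vis.y - screen.y, vis.w, vis.h }; // node-local placement
    return { crop, dst };                            // scale target is dst.w x dst.h
}

int main() {
    // Example: a 3840x1080 window laid across two 1920x1080 screens, 1080p source.
    Rect window {0, 0, 3840, 1080};
    Rect screenA {0, 0, 1920, 1080}, screenB {1920, 0, 1920, 1080};
    for (const Rect& s : {screenA, screenB}) {
        CropScale p = planNode(window, s, 1920, 1080);
        std::printf("crop %dx%d@(%d,%d) -> display %dx%d@(%d,%d)\n",
                    p.srcCrop.w, p.srcCrop.h, p.srcCrop.x, p.srcCrop.y,
                    p.dstOnNode.w, p.dstOnNode.h, p.dstOnNode.x, p.dstOnNode.y);
    }
}
```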
S2: superimpose multiple pieces of cropped and scaled frame data according to the overlay reference time, to obtain the frame data of the cross-screen window;
Preferably, in this embodiment the overlay reference time is less than or equal to the sum of the system reference time at which the frame data is superimposed and a preset first fixed offset time.
Preferably, in this embodiment the multiple pieces of cropped and scaled frame data are superimposed using DMA data copies.
In this embodiment, the cropped and scaled frame data can be superimposed by starting a dedicated overlay thread (OverlayTask). The specific implementation is as follows: a value less than or equal to the sum of the current system time at the moment of superimposition and the preset first fixed offset time is defined as the overlay reference time; within a fixed time interval, the multiple pieces of cropped and scaled frame data are superimposed according to the overlay reference time using DMA data copies, and the frame data is superimposed into a new corresponding buffer according to the display region of the cross-screen window, so that the position of the video signal within its display region remains unchanged. More specifically, superimposing only the frame data that satisfies the overlay-reference-time condition avoids the situation in which the clock error between nodes within a fixed interval causes them to pick up inconsistent frame data for superimposition, which would degrade synchronization. In this embodiment, the first fixed offset time is determined from the actual clock offset error of the nodes corresponding to the cross-screen window.
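As an illustration only, the sketch below shows how such an OverlayTask might look on one node. The quantization of the reference time to a fixed interval, the constants, the queue layout, and the plain memcpy standing in for the DMA copy are assumptions made for the sketch; the patent itself only states that the reference time is bounded by the current system time plus the first fixed offset.

```cpp
#include <chrono>
#include <cstdint>
#include <cstring>
#include <mutex>
#include <queue>
#include <vector>

using Clock  = std::chrono::steady_clock;
using Millis = std::chrono::milliseconds;

struct Frame {
    int x = 0, y = 0, w = 0, h = 0;   // placement inside the node's display area
    std::vector<uint8_t> pixels;      // cropped+scaled pixel data, 4 bytes per pixel here
    Millis timestamp{0};              // later set to the overlay reference time
};

constexpr Millis kInterval{40};       // fixed overlay period, assumed (one 25 fps tick)
constexpr Millis kFirstOffset{10};    // "first fixed offset time", assumed value

// Nodes whose clocks are NTP-synchronized derive the same reference time for
// frames handled within the same interval.
static Millis overlayRefTime() {
    auto now = std::chrono::duration_cast<Millis>(Clock::now().time_since_epoch());
    return ((now + kFirstOffset) / kInterval) * kInterval;   // quantize to the interval
}

std::mutex gMtx;
std::queue<Frame> gDisplayQueue;      // consumed later by the single send-display thread

// Compose every cropped/scaled input into one buffer covering the node's display
// area, stamp it with the shared reference time, and queue it for display.
// Assumes each input rectangle fits inside dispW x dispH.
void overlayTask(const std::vector<Frame>& inputs, int dispW, int dispH) {
    Frame out;
    out.w = dispW;
    out.h = dispH;
    out.pixels.assign(size_t(dispW) * dispH * 4, 0);
    const Millis ref = overlayRefTime();
    for (const Frame& in : inputs) {
        for (int row = 0; row < in.h; ++row) {
            // Row-by-row copy into the new buffer; a real node would issue a DMA copy here.
            std::memcpy(&out.pixels[((size_t(in.y) + row) * dispW + in.x) * 4],
                        &in.pixels[size_t(row) * in.w * 4],
                        size_t(in.w) * 4);
        }
    }
    out.timestamp = ref;              // every node tags the composed frame identically
    std::lock_guard<std::mutex> lk(gMtx);
    gDisplayQueue.push(std::move(out));
}

int main() {
    // Two already cropped/scaled inputs placed side by side on a 1920x1080 node.
    Frame left;
    left.x = 0; left.y = 0; left.w = 960; left.h = 1080;
    left.pixels.assign(size_t(left.w) * left.h * 4, 0x40);
    Frame right = left;
    right.x = 960;
    overlayTask({left, right}, 1920, 1080);
}
```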
S3: set the timestamp of the superimposed frame data of the cross-screen window to the overlay reference time and cache it in the display queue to await display;
In this embodiment, the timestamp of the cross-screen-window frame data obtained from the overlay step is set to the overlay reference time. Because the overlay step processes the frame data within the overlay reference time, the timestamps of the superimposed frame data would otherwise differ between nodes; therefore all nodes stamp the superimposed frame data with the same overlay reference time and cache it in the display queue to await display. This keeps the time points of the frame data consistent across the nodes spanned by the window and guarantees the synchronization of the subsequent send-display processing.
S4: according to the send-display reference time, use a single thread to send the frame data in the display queue for display.
Preferably, in this embodiment the send-display reference time is less than or equal to the sum of the system reference time at which the frame data is sent for display and a preset second fixed offset time.
In this embodiment, a separate thread is started to send the frame data in the display queue for display. The specific implementation is as follows: a value less than or equal to the sum of the system reference time at the moment of send-display processing and the preset second fixed offset time is defined as the send-display reference time (DispRefTime); within a fixed time interval, the frame data in the display queue is taken out and sent for display according to this reference time. More specifically, only the frame data in the display queue that satisfies the send-display-reference-time condition is sent for display, so the nodes corresponding to the cross-screen window's frame data perform the same send-display operation at a fixed time point, which guarantees node synchronization when the video signal is displayed in a cross-screen window.
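A comparable sketch of the single send-display thread follows. The name DispRefTime comes from the source; the interval and offset values and the printf standing in for the hand-off to the output module are assumptions.

```cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>

using Clock  = std::chrono::steady_clock;
using Millis = std::chrono::milliseconds;

struct QueuedFrame { Millis timestamp; /* plus a handle to the composed buffer */ };

constexpr Millis kInterval{40};       // fixed send-display period, assumed
constexpr Millis kSecondOffset{10};   // "second fixed offset time", assumed value

std::mutex gMtx;
std::deque<QueuedFrame> gDisplayQueue;   // filled by the overlay thread

static Millis nowMs() {
    return std::chrono::duration_cast<Millis>(Clock::now().time_since_epoch());
}

// Single thread: once per interval, compute the send-display reference time
// (DispRefTime) and push every queued frame whose timestamp falls within it.
void displayTask(std::atomic<bool>& running) {
    while (running) {
        const Millis dispRef = ((nowMs() + kSecondOffset) / kInterval) * kInterval;
        {
            std::lock_guard<std::mutex> lk(gMtx);
            while (!gDisplayQueue.empty() && gDisplayQueue.front().timestamp <= dispRef) {
                QueuedFrame f = gDisplayQueue.front();
                gDisplayQueue.pop_front();
                // Stub for handing the composed buffer to the output module / video memory.
                std::printf("send-display frame stamped at %lld ms\n",
                            static_cast<long long>(f.timestamp.count()));
            }
        }
        std::this_thread::sleep_for(kInterval);
    }
}

int main() {
    gDisplayQueue.push_back(QueuedFrame{Millis{0}});   // pretend the overlay thread queued one frame
    std::atomic<bool> running{true};
    std::thread t(displayTask, std::ref(running));
    std::this_thread::sleep_for(kInterval * 2);
    running = false;
    t.join();
}
```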
Preferably, the method in this embodiment further includes the step:
S0: use multiple decoding threads to decode the code streams of multiple video signals one-to-one, to obtain decoded frame data.
In this embodiment, each video signal has its own decoding thread; multiple decoding threads decode the multiple video signals one-to-one and simultaneously. The signals do not interfere with one another during decoding, and the decoded frame data of each signal is managed in its own queue for the subsequent cropping and scaling, overlay and send-display processing of the cross-screen window.
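The following sketch illustrates the one-decoding-thread-per-signal arrangement with a per-stream queue; decodeOnePacket() is a placeholder for a real software or hardware decoder call, and every identifier here is illustrative rather than taken from the source.

```cpp
#include <atomic>
#include <functional>
#include <memory>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct DecodedFrame { int streamId = 0; /* pixel data omitted */ };

struct StreamContext {
    int id = 0;
    std::mutex mtx;
    std::queue<DecodedFrame> frames;   // per-signal queue feeding crop/scale and overlay
};

// Placeholder: pull one packet of this signal's code stream (e.g. from the
// multicast receiver) and decode it.
static bool decodeOnePacket(StreamContext& ctx, DecodedFrame& out) {
    out.streamId = ctx.id;
    return true;
}

// One thread per video signal; signals never block each other while decoding.
void decodeLoop(StreamContext& ctx, std::atomic<bool>& running) {
    while (running) {
        DecodedFrame f;
        if (decodeOnePacket(ctx, f)) {
            std::lock_guard<std::mutex> lk(ctx.mtx);
            ctx.frames.push(f);
        }
    }
}

int main() {
    std::atomic<bool> running{true};
    std::vector<std::unique_ptr<StreamContext>> streams;
    std::vector<std::thread> workers;
    for (int id = 0; id < 4; ++id) {   // four independent video signals, as an example
        streams.push_back(std::make_unique<StreamContext>());
        streams.back()->id = id;
        workers.emplace_back(decodeLoop, std::ref(*streams.back()), std::ref(running));
    }
    running = false;                   // stop immediately; a real node would run until shutdown
    for (auto& t : workers) t.join();
}
```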
Preferably, in this embodiment the code stream of a video signal is obtained as follows: the same video signal is transmitted by multicast, and each node corresponding to the cross-screen window obtains the corresponding code stream from the multicast group.
In this embodiment, all nodes that receive and send video signals use UDP multicast. Specifically, the nodes corresponding to the display regions spanned by the same video signal are added to the same multicast group, and each node of the cross-screen window then obtains the corresponding code stream from that group, ensuring that the data of the same video signal is consistent across all nodes.
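For illustration, the sketch below shows a node joining a UDP multicast group and reading one packet of the code stream with POSIX sockets (Linux); the group address and port are example values, not taken from the source.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    const char* kGroup = "239.1.1.10";   // multicast group for one video signal (example)
    const uint16_t kPort = 5004;

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int reuse = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &reuse, sizeof(reuse));

    sockaddr_in local{};
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(kPort);
    bind(fd, reinterpret_cast<sockaddr*>(&local), sizeof(local));

    // Join the group: every node displaying part of this window receives the
    // identical code stream, so the source data stays consistent across nodes.
    ip_mreq mreq{};
    mreq.imr_multiaddr.s_addr = inet_addr(kGroup);
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    char packet[2048];
    ssize_t n = recv(fd, packet, sizeof(packet), 0);   // one compressed-stream packet
    std::printf("received %zd bytes of code stream\n", n);
    close(fd);
}
```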
Preferably, in this embodiment the nodes synchronize their clocks using NTP.
NTP is the Network Time Protocol. All nodes in this embodiment synchronize their clocks with NTP as follows: an NTP server is enabled on every node, one node in the system is taken as the reference node, and the NTP clients of the other nodes exchange UDP packets with the reference node's NTP server and adjust their clocks accordingly, ensuring that all nodes run on a unified time axis.
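The patent only states that NTP is used. As background, the sketch below shows the standard NTP offset and round-trip-delay computation from the four timestamps of one client-server exchange, which is what keeps every node on the reference node's time axis; the example timestamps are arbitrary.

```cpp
#include <cstdio>

// Client stamps T1 on send, server stamps T2 on receive and T3 on reply,
// client stamps T4 on arrival (all in seconds).
struct NtpSample { double t1, t2, t3, t4; };

double clockOffset(const NtpSample& s) {    // theta = ((T2 - T1) + (T3 - T4)) / 2
    return ((s.t2 - s.t1) + (s.t3 - s.t4)) / 2.0;
}

double roundTripDelay(const NtpSample& s) { // delta = (T4 - T1) - (T3 - T2)
    return (s.t4 - s.t1) - (s.t3 - s.t2);
}

int main() {
    NtpSample s{100.000, 100.012, 100.013, 100.020};  // example timestamps
    std::printf("offset %.4f s, delay %.4f s\n", clockOffset(s), roundTripDelay(s));
}
```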
Based on the above embodiment, the distributed cross-node video synchronization method of the present invention obtains the frame data corresponding to the cross-screen window by cropping and scaling according to the image cropping regions and display regions of the nodes on which the video signal is displayed, superimposes it in a new buffer according to each node's display region, sets the timestamp of the cross-screen window's frame data to the overlay reference time while it awaits display, and finally displays it. The invention thus has every node perform the same operation at the corresponding time point, solves the problem of unsynchronized node pictures during cross-screen-window display, guarantees that every frame of the cross-screen window's video signal is displayed synchronously on all nodes, and significantly improves cross-node video synchronization.
Embodiment 2
As shown in Figure 2, which is a structural diagram of the distributed cross-node video synchronization system of the present invention, the system includes:
a cropping and scaling module, configured to, when a video signal needs to be displayed in a cross-screen window, crop and scale the decoded frame data corresponding to the window according to the image cropping region of each corresponding node and the display window region of each such node, to obtain cropped and scaled frame data;
In this embodiment, a cross-screen window usually corresponds to at least two nodes, so when the same video signal is to be displayed through the cross-screen window, that signal corresponds to the code streams of at least two nodes; these code streams must be synchronized before the display can be synchronized. Distributed cross-node video synchronization is therefore required.
When the video signal needs cross-screen-window display, the cropping and scaling module sets corresponding cropping and scaling attributes for the decoded frame data of the window according to the image cropping region and display region of each corresponding node; applying these preset attributes to the decoded image of the window yields the cropped and scaled frame data of every node.
Specifically, the decoded frame data can be cropped according to the image cropping region and scaled according to the resolution of the display region, yielding the cropped and scaled frame data corresponding to the cross-screen window.
an overlay module, configured to superimpose multiple pieces of cropped and scaled frame data according to the overlay reference time, to obtain the frame data of the cross-screen window;
In this embodiment, the overlay module can superimpose the cropped and scaled frame data by starting a dedicated overlay thread (OverlayTask). The specific implementation is as follows: a value less than or equal to the sum of the current system time at the moment of superimposition and the preset first fixed offset time is defined as the overlay reference time; within a fixed time interval, the overlay module superimposes the multiple pieces of cropped and scaled frame data according to the overlay reference time using DMA data copies, and the frame data is superimposed into a new corresponding buffer according to the display region of the cross-screen window, so that the position of the video signal within its display region remains unchanged. More specifically, having the overlay module superimpose only the frame data that satisfies the overlay-reference-time condition avoids the situation in which the clock error between nodes within a fixed interval causes them to pick up inconsistent frame data for superimposition, which would degrade synchronization. In this embodiment, the first fixed offset time is determined from the actual clock offset error of the nodes corresponding to the cross-screen window.
a setting module, configured to set the timestamp of the superimposed frame data of the cross-screen window to the overlay reference time and cache it in the display queue to await display;
In this embodiment, the setting module sets the timestamp of the cross-screen-window frame data obtained from the overlay step to the overlay reference time. Because the overlay step processes the frame data within the overlay reference time, the timestamps of the superimposed frame data would otherwise differ between nodes; therefore all nodes stamp the superimposed frame data with the same overlay reference time and cache it in the display queue to await display. This keeps the time points of the frame data consistent across the nodes spanned by the window and guarantees the synchronization of the subsequent send-display processing.
a send-display module, configured to use a single thread to send the frame data in the display queue for display according to the send-display reference time.
In this embodiment, the send-display module starts a separate thread to send the frame data in the display queue for display. The specific implementation is as follows: a value less than or equal to the sum of the system reference time at the moment of send-display processing and the preset second fixed offset time is defined as the send-display reference time (DispRefTime); within a fixed time interval, the frame data in the display queue is taken out and sent for display according to this reference time. More specifically, only the frame data in the display queue that satisfies the send-display-reference-time condition is sent for display, so the nodes corresponding to the cross-screen window's frame data perform the same send-display operation at a fixed time point, which guarantees node synchronization when the video signal is displayed in a cross-screen window.
Preferably, this embodiment further includes a decoding module configured to use multiple decoding threads to decode the code streams of multiple video signals one-to-one, to obtain decoded frame data.
In this embodiment, each video signal has its own decoding thread; the decoding module uses multiple decoding threads to decode the multiple video signals one-to-one and simultaneously. The signals do not interfere with one another during decoding, and the decoded frame data of each signal is managed in its own queue for the subsequent cropping and scaling, overlay and send-display processing of the cross-screen window.
Preferably, the system in this embodiment further includes:
a code stream acquisition module, configured to obtain the corresponding code stream from a multicast group according to the node corresponding to the cross-screen window, the multicast group being formed by transmitting the same video signal by multicast.
In this embodiment, all nodes that receive and send video signals use UDP multicast. Specifically, the nodes corresponding to the display regions spanned by the same video signal are added to the same multicast group, and each node of the cross-screen window then obtains the corresponding code stream from that group, ensuring that the data of the same video signal is consistent across all nodes.
Preferably, in this embodiment the nodes synchronize their clocks using NTP.
NTP is the Network Time Protocol. All nodes in this embodiment synchronize their clocks with NTP as follows: an NTP server is enabled on every node, one node in the system is taken as the reference node, and the NTP clients of the other nodes exchange UDP packets with the reference node's NTP server and adjust their clocks accordingly, ensuring that all nodes run on a unified time axis.
Based on the above embodiment, in the distributed cross-node video synchronization system of the present invention the cropping and scaling module obtains the frame data corresponding to the cross-screen window by cropping and scaling according to the image cropping regions and display regions of the nodes on which the video signal is displayed; the overlay module superimposes it in a new buffer according to each node's display region; the setting module sets the timestamp of the cross-screen window's frame data to the overlay reference time while it awaits display; and the send-display module finally displays it. The invention thus has every node perform the same operation at the corresponding time point, solves the problem of unsynchronized node pictures during cross-screen-window display, guarantees that every frame of the cross-screen window's video signal is displayed synchronously on all nodes, and significantly improves cross-node video synchronization.
Obviously, the above embodiments of the present invention are merely examples given to clearly illustrate the technical solution of the present invention and are not a limitation on its specific implementations. Any modification, equivalent replacement or improvement made within the spirit and principles of the claims of the present invention shall fall within the scope of protection of the claims of the present invention.

Claims (10)

  1. A distributed cross-node video synchronization method, characterized by comprising the following steps:
    when a video signal needs to be displayed in a cross-screen window, cropping and scaling the decoded frame data corresponding to the cross-screen window according to the image cropping region of each node corresponding to the cross-screen window and the display window region of each such node, to obtain cropped and scaled frame data;
    superimposing multiple pieces of cropped and scaled frame data according to an overlay reference time, to obtain the frame data of the cross-screen window;
    setting the timestamp of the superimposed frame data of the cross-screen window to the overlay reference time, and caching it in a display queue to await display;
    according to a send-display reference time, using a single thread to send the frame data in the display queue for display.
  2. The distributed cross-node video synchronization method according to claim 1, characterized in that the method further comprises: using multiple decoding threads to decode the code streams of multiple video signals one-to-one, to obtain decoded frame data.
  3. The distributed cross-node video synchronization method according to claim 1, characterized in that the overlay reference time is less than or equal to the sum of the system reference time at which the frame data is superimposed and a preset first fixed offset time.
  4. The distributed cross-node video synchronization method according to claim 1, characterized in that the send-display reference time is less than or equal to the sum of the system reference time at which the frame data is sent for display and a preset second fixed offset time.
  5. The distributed cross-node video synchronization method according to claim 1, characterized in that the multiple pieces of cropped and scaled frame data are superimposed using DMA data copies.
  6. The distributed cross-node video synchronization method according to claim 1, characterized in that the code stream of a video signal is obtained as follows:
    the same video signal is transmitted by multicast, and each node corresponding to the cross-screen window obtains the corresponding code stream from the multicast group.
  7. The distributed cross-node video synchronization method according to claim 1, characterized in that the nodes synchronize their clocks using NTP.
  8. A distributed cross-node video synchronization system, characterized by comprising:
    a cropping and scaling module, configured to, when a video signal needs to be displayed in a cross-screen window, crop and scale the decoded frame data corresponding to the cross-screen window according to the image cropping region of each node corresponding to the cross-screen window and the display window region of each such node, to obtain cropped and scaled frame data;
    an overlay module, configured to superimpose multiple pieces of cropped and scaled frame data according to an overlay reference time, to obtain the frame data of the cross-screen window;
    a setting module, configured to set the timestamp of the superimposed frame data of the cross-screen window to the overlay reference time and cache it in a display queue to await display;
    a send-display module, configured to use a single thread to send the frame data in the display queue for display according to a send-display reference time.
  9. The distributed cross-node video synchronization system according to claim 8, characterized by further comprising:
    a decoding module, configured to use multiple decoding threads to decode the code streams of multiple video signals one-to-one, to obtain decoded frame data.
  10. The distributed cross-node video synchronization system according to claim 8, characterized in that the system further comprises:
    a code stream acquisition module, configured to obtain the corresponding code stream from a multicast group according to the node corresponding to the cross-screen window, the multicast group being formed by transmitting the same video signal by multicast.
PCT/CN2020/141369 2019-12-30 2020-12-30 一种分布式跨节点视频同步方法及系统 WO2021136369A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911396653.5A CN111107411B (zh) 2019-12-30 2019-12-30 一种分布式跨节点视频同步方法及系统
CN201911396653.5 2019-12-30

Publications (1)

Publication Number Publication Date
WO2021136369A1 true WO2021136369A1 (zh) 2021-07-08

Family

ID=70425103

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/141369 WO2021136369A1 (zh) 2019-12-30 2020-12-30 一种分布式跨节点视频同步方法及系统

Country Status (2)

Country Link
CN (1) CN111107411B (zh)
WO (1) WO2021136369A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116886940A (zh) * 2023-09-07 2023-10-13 园测信息科技股份有限公司 多路视频推理并发预处理加速方法、系统、介质和设备

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111107411B (zh) * 2019-12-30 2022-03-18 威创集团股份有限公司 一种分布式跨节点视频同步方法及系统
CN112181571A (zh) * 2020-09-28 2021-01-05 北京字节跳动网络技术有限公司 浮窗显示方法、装置、终端及存储介质
CN112561929B (zh) * 2020-12-09 2022-10-25 威创集团股份有限公司 拼接屏裁剪缩放方法、设备及其一种电子设备、存储介质
CN112887731B (zh) * 2021-01-22 2023-05-26 北京淳中科技股份有限公司 压缩码流取流方法、装置、电子设备及存储介质
CN113052749B (zh) * 2021-03-02 2023-04-07 长沙景嘉微电子股份有限公司 视频显示方法及图形处理器

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090282444A1 (en) * 2001-12-04 2009-11-12 Vixs Systems, Inc. System and method for managing the presentation of video
CN101807389A (zh) * 2010-03-19 2010-08-18 上海博康智能网络科技有限公司 大屏拼接方法及系统
CN103795979A (zh) * 2014-01-23 2014-05-14 浙江宇视科技有限公司 一种分布式图像拼接同步的方法和装置
CN104375789A (zh) * 2013-08-14 2015-02-25 杭州海康威视数字技术股份有限公司 拼接屏的同步显示方法及系统
CN106791488A (zh) * 2016-12-28 2017-05-31 浙江宇视科技有限公司 一种同步拼接显示方法及装置
CN108259783A (zh) * 2016-12-29 2018-07-06 杭州海康威视数字技术股份有限公司 一种数字矩阵同步输出控制方法、装置及电子设备
CN108737689A (zh) * 2018-04-27 2018-11-02 浙江大华技术股份有限公司 一种视频的拼接显示方法及显示控制设备
CN108881955A (zh) * 2017-05-08 2018-11-23 Tcl新技术(惠州)有限公司 一种实现分布式节点设备视频同步输出的方法及系统
CN111107411A (zh) * 2019-12-30 2020-05-05 威创集团股份有限公司 一种分布式跨节点视频同步方法及系统

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106454154A (zh) * 2016-11-24 2017-02-22 Tcl数码科技(深圳)有限责任公司 一种电视墙拼接方法及系统
CN108881981A (zh) * 2017-05-08 2018-11-23 Tcl新技术(惠州)有限公司 一种跨屏显示方法、存储设备及电子设备

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090282444A1 (en) * 2001-12-04 2009-11-12 Vixs Systems, Inc. System and method for managing the presentation of video
CN101807389A (zh) * 2010-03-19 2010-08-18 上海博康智能网络科技有限公司 大屏拼接方法及系统
CN104375789A (zh) * 2013-08-14 2015-02-25 杭州海康威视数字技术股份有限公司 拼接屏的同步显示方法及系统
CN103795979A (zh) * 2014-01-23 2014-05-14 浙江宇视科技有限公司 一种分布式图像拼接同步的方法和装置
CN106791488A (zh) * 2016-12-28 2017-05-31 浙江宇视科技有限公司 一种同步拼接显示方法及装置
CN108259783A (zh) * 2016-12-29 2018-07-06 杭州海康威视数字技术股份有限公司 一种数字矩阵同步输出控制方法、装置及电子设备
CN108881955A (zh) * 2017-05-08 2018-11-23 Tcl新技术(惠州)有限公司 一种实现分布式节点设备视频同步输出的方法及系统
CN108737689A (zh) * 2018-04-27 2018-11-02 浙江大华技术股份有限公司 一种视频的拼接显示方法及显示控制设备
CN111107411A (zh) * 2019-12-30 2020-05-05 威创集团股份有限公司 一种分布式跨节点视频同步方法及系统

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116886940A (zh) * 2023-09-07 2023-10-13 园测信息科技股份有限公司 多路视频推理并发预处理加速方法、系统、介质和设备
CN116886940B (zh) * 2023-09-07 2023-12-01 园测信息科技股份有限公司 多路视频推理并发预处理加速方法、系统、介质和设备

Also Published As

Publication number Publication date
CN111107411B (zh) 2022-03-18
CN111107411A (zh) 2020-05-05

Similar Documents

Publication Publication Date Title
WO2021136369A1 (zh) 一种分布式跨节点视频同步方法及系统
US9485466B2 (en) Video processing in a multi-participant video conference
US8035679B2 (en) Method for creating a videoconferencing displayed image
CN101291417B (zh) 一种视频会议系统的轮询方法和系统
CN104918137A (zh) 一种拼接屏系统播放视频的方法
JP6172610B2 (ja) テレビ会議用システム
WO2007059684A1 (en) Device for managing video image and method thereof
JP2004536529A (ja) 複数のビデオチャネルから連続的にフレームを受信し、交互に連続的に、各々の該ビデオチャネルに関する情報を含む個々のフレームをテレビ会議における複数の参加者の各々に送信するための方法及び装置
KR20090086532A (ko) 채널 변경 시간을 감소시키고 채널 변경 동안 오디오/비디오 콘텐츠를 동기화하는 방법
KR101841313B1 (ko) 멀티미디어 흐름 처리 방법 및 대응하는 장치
WO2019233314A1 (zh) 一种电视墙图像回显方法、服务器件及电视墙系统
Halák et al. Real-time long-distance transfer of uncompressed 4K video for remote collaboration
JP7171929B2 (ja) オーディオストリーム及びビデオストリーム同期切替方法及び装置
WO2023279793A1 (zh) 视频的播放方法及装置
CN110072136A (zh) 一种4k超高清播出系统
WO2021207979A1 (zh) 处理视频的装置和系统
CN116389811A (zh) 一种分布式视频图像拼接的同步控制方法及系统
CN108769600B (zh) 一种基于视频流调帧率的桌面共享系统及其桌面共享方法
Tang et al. Audio and video mixing method to enhance WebRTC
CN113691847A (zh) 一种多屏帧同步的方法和装置
CN114025107B (zh) 图像重影的拍摄方法、装置、存储介质和融合处理器
US9794534B2 (en) Image processing methods, and image processing devices and system for a scalable multi-projection system
CN115065861A (zh) 一种分布式解码器视频同步拼接显示方法及系统
TWI520577B (zh) 立體影像輸出裝置與相關的立體影像輸出方法
Koyama et al. Implementing 8k vision mixer for cloud-based production system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20911276

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20911276

Country of ref document: EP

Kind code of ref document: A1