WO2013159368A1 - Data overlay display synthesis method and system, and display device - Google Patents

Data overlay display synthesis method and system, and display device

Info

Publication number
WO2013159368A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
display
packet
relationship
module
Prior art date
Application number
PCT/CN2012/074978
Other languages
English (en)
French (fr)
Inventor
汪宗
Original Assignee
青岛海信信芯科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 青岛海信信芯科技有限公司
Publication of WO2013159368A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/434 Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
    • H04N21/4347 Demultiplexing of several video streams

Definitions

  • The present invention relates to the field of display device technology, and in particular to a data overlay display synthesis method and system, and a display device.
  • With the development of multimedia, network, and communication technologies, streaming media applications are becoming ubiquitous, and video or image display based on streaming media, such as video on demand, distance education, social networking, and e-commerce, has become increasingly popular.
  • In video or image display, modes such as picture-in-picture (PIP) and picture-out-of-picture (POP) found in early televisions are all based on multi-substream application scenarios; as applications for the various data substreams grow richer, the application scenarios also become more diverse.
  • In the course of making the present invention, however, the inventors found that existing multi-substream video or image display merely displays each of the substreams independently, with no interactive processing between them.
  • For example, in a prior-art method for picture-in-picture playback between a main video and a sub-video, the sub-video can be placed at a corner of the main video picture for overlapping display, and the sub-video and main video pictures can be played at different progress points and rates.
  • During playback, however, the sub-video picture is still only the sub-video picture and the main video picture is still only the main video picture.
  • The two substream pictures are independent of each other, cannot interact, and their content cannot be superimposed or combined, which greatly limits the expansion of multi-substream applications and the diversification of application scenarios.
  • To solve the problem that multi-substream picture display in the prior art cannot interact, the present invention provides a data overlay display synthesis method and system, and a display device.
  • In one aspect, the present invention provides a data overlay display synthesis method, the method comprising the steps of: S1, receiving an input of first data and buffering it after decoding; S2, receiving and decoding an input of second data, parsing the objects in the second data, dividing the second data into a plurality of packet data, and buffering each packet data of the second data; S3, determining the various correspondences between each packet data of the second data and the first data; S4, according to the correspondences, selecting the corresponding packet data of the second data and overlaying and synthesizing it with the first data; and S5, displaying the synthesized data.
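For illustration, a minimal C++ sketch of how steps S1 to S5 could fit together is given below. The Frame, Packet, and Correspondence types, the function names, and the stubbed-out decoding and parsing are assumptions made for the example and are not defined by the present application.

```cpp
// Illustrative sketch only: a minimal outline of steps S1-S5.
#include <cstdint>
#include <map>
#include <string>
#include <vector>

struct Frame {                       // one decoded picture of the first data
    int width = 0, height = 0;
    std::vector<uint32_t> argb;      // assumed ARGB8888 pixels, row-major
};

struct Packet {                      // one packet (group) parsed from the second data
    std::string label;               // e.g. "hat", "sunny-background"
    Frame image;
};

struct Correspondence {              // relationships between a packet and the first data
    double scale = 1.0;              // proportional relationship
    int x = 0, y = 0;                // spatial region relationship (target position)
    int layer = 0;                   // layer superposition relationship
};

// S1: receive and decode the first data, then buffer it (decoding stubbed out here).
Frame receiveAndBufferFirstData() { return Frame{}; }

// S2: receive, decode, and parse the second data into buffered packets.
std::vector<Packet> receiveAndPacketizeSecondData() { return {}; }

// S3: determine the correspondences between each packet and the first data
// (in practice chosen by the user or by preset rules).
std::map<std::string, Correspondence> specifyRelationships(const std::vector<Packet>&) {
    return {};
}

// S4: overlay the selected packets onto the first data according to the correspondences.
Frame overlayAndSynthesize(Frame base, const std::vector<Packet>& packets,
                           const std::map<std::string, Correspondence>& rules) {
    for (const Packet& p : packets) {
        auto it = rules.find(p.label);
        if (it == rules.end()) continue;   // no correspondence: this packet is not used
        // Scaling, positioning, and blending of p.image according to it->second
        // would happen here (see the later sketches for concrete pixel operations).
    }
    return base;
}

// S5: hand the synthesized frame to the display module.
void display(const Frame&) {}

int main() {
    Frame first = receiveAndBufferFirstData();                     // S1
    std::vector<Packet> packets = receiveAndPacketizeSecondData(); // S2
    auto rules = specifyRelationships(packets);                    // S3
    Frame composed = overlayAndSynthesize(first, packets, rules);  // S4
    display(composed);                                             // S5
}
```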
  • In another aspect, the present invention also provides a data overlay display synthesis system, the system comprising a first data processing module, a second data processing module, a relationship specifying module, an overlay synthesis module, and a display module, wherein:
  • the first data processing module is configured to receive an input of the first data and buffer it after decoding;
  • the second data processing module is configured to receive and decode an input of the second data, parse the objects in the second data, divide the second data into a plurality of packet data, and buffer each packet data of the second data;
  • the relationship specifying module is configured to determine the various correspondences between each packet data of the second data and the first data;
  • the overlay synthesis module is configured to select, according to the correspondences, the corresponding packet data of the second data and overlay and synthesize it with the first data; and
  • the display module is configured to display the synthesized data.
  • Finally, the present invention also provides a display device in which the data overlay display synthesis system described above is integrated.
  • The above technical solutions have the following advantage: in the technical solution of the present invention, in addition to simple multi-video-stream display such as the traditional picture-in-picture and picture-out-of-picture, composite display of video stream pictures can be performed; that is, objects or elements in other video streams can be combined to obtain entirely new image data for display, which satisfies users' special needs for picture processing and expands the ways multi-substream applications can be used.
  • FIG. 1 is a schematic flowchart of a data overlay display synthesis method in an embodiment of the present invention;
  • FIG. 2 is a schematic structural diagram of a data overlay display synthesis system in an embodiment of the present invention.
  • In the embodiments of the present invention, processing may be performed on the basis of multiple video streams; for ease of understanding, processing based on two video streams is introduced first.
  • In the two-video-stream embodiment, the main video stream is the data stream of the main channel, such as a television signal, a multimedia signal, a network streaming media signal, or another video signal that can be played and displayed locally.
  • The sub-video stream signal is a video stream signal obtained through a sub-channel, such as a second-channel television signal, an OSD signal, an EPG signal, or a network streaming media signal obtained over the Internet.
  • The embodiments of the present invention are mainly used to synthesize an overlay display of the two video stream signals.
  • The embodiments of the present invention mainly provide a method for interactively displaying multiple video stream data.
  • In a preferred embodiment, after the first data (Streaming1) and the second data (Streaming2) are parsed and segmented, the segmented data is overlaid on the base data in a specified display area.
  • The content of the overlaid area can be reselected through an adjuster setting; the newly selected data replaces the previous data and a new set of display data is regenerated, so data replacement and overlay display can be performed repeatedly.
  • Specifically, the method of the embodiments of the present invention includes the following steps:
  • receiving an input of first data and buffering it after decoding;
  • receiving and decoding an input of second data, parsing the objects in the second data, dividing the second data into a plurality of packet data, and buffering each packet data;
  • determining the various correspondences (including a proportional relationship, a spatial region relationship, a format matching relationship, a layer superposition relationship, and so on) between each packet data of the divided second data and the first data;
  • according to the correspondences, selecting the corresponding packet data of the divided second data and overlaying and synthesizing it with the first data; and
  • displaying the synthesized data.
  • Specifically, in the preferred embodiment of the present invention, the description takes as an example a main video stream that is a television signal and a sub-video stream that is a network streaming media signal.
  • Digital television technology is already very mature.
  • The demodulation, image processing, and display principles of television signals in this field are well-known techniques, and those skilled in the art are generally able to handle the processing of the video stream of a digital television signal, so it will not be described in detail here.
  • The focus of the embodiments of the present invention is the overlay synthesis of the two video stream pictures.
  • In this preferred embodiment, the data of the second video stream (i.e., the sub-video stream) is first parsed, classified, and buffered.
  • The sub-video stream in the embodiment of the present invention is exemplified by network streaming media in Microsoft Corporation's ASF (Advanced Streaming Format).
  • ASF is a data format that transmits multimedia information such as audio, video, images, and control command scripts in the form of network data packets to deliver streaming multimedia content.
  • The data parsing in the preferred embodiment of the present invention is mainly performed on the logical objects of the ASF file: that is, the Header Object, the Data Object, and the Index Object are parsed in depth.
  • For example, by parsing the ASF Header Object, a file attribute object, a stream attribute object, a content description object, a partial download object, a stream organization object, a scalable object, a priority object, a mutual exclusion object, a media inter-dependency object, a level object, an index parameter object, and the like can be obtained; by parsing the Index Object, information such as the media data, its storage form, data length, and data ordering can be obtained; and by parsing the index information, information related to the media time can be obtained.
  • The required information is obtained in the above manner, the acquired information is then classified, and the classified packet data is stored in a buffer sequence.
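For illustration, the following C++ sketch walks the top-level objects of an ASF file using the publicly documented object layout (a 16-byte GUID followed by a little-endian 64-bit object size that includes the 24-byte object header). Matching the printed GUIDs against the Header Object, Data Object, and Index Object GUIDs of the ASF specification, and the deeper parsing described above, are left out; the structure and function names are assumptions for the example.

```cpp
// Illustrative sketch only: enumerating the top-level objects of an ASF file.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <fstream>

struct AsfObjectHeader {
    uint8_t  guid[16];
    uint64_t size;      // total object size in bytes, including this 24-byte header
};

static bool readObjectHeader(std::ifstream& in, AsfObjectHeader& hdr) {
    uint8_t raw[24];
    if (!in.read(reinterpret_cast<char*>(raw), sizeof raw)) return false;
    std::copy(raw, raw + 16, hdr.guid);
    hdr.size = 0;
    for (int i = 0; i < 8; ++i)                       // little-endian QWORD
        hdr.size |= static_cast<uint64_t>(raw[16 + i]) << (8 * i);
    return hdr.size >= 24;                            // sanity check
}

int main(int argc, char** argv) {
    if (argc < 2) { std::fprintf(stderr, "usage: %s file.asf\n", argv[0]); return 1; }
    std::ifstream in(argv[1], std::ios::binary);
    AsfObjectHeader hdr{};
    while (readObjectHeader(in, hdr)) {
        std::printf("object guid:");
        for (uint8_t b : hdr.guid) std::printf(" %02x", b);
        std::printf("  size: %llu bytes\n", static_cast<unsigned long long>(hdr.size));
        // Skip the payload; a real parser would descend into the Header, Data,
        // and Index Objects here and classify the results into packet data.
        in.seekg(static_cast<std::streamoff>(hdr.size - 24), std::ios::cur);
    }
}
```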
  • Then, according to the user's specification or a preset setting, the correspondence between each packet data and the main video stream data is determined.
  • Usually a packet contains multiple elements, layers, attributes, and so on; to synthesize data, the various correspondences between the data to be used in the packet and the main video stream data need to be determined.
  • The correspondences generally include a proportional relationship, a spatial region relationship, a format matching relationship, and a layer superposition relationship; these relationships indicate the specific position, size, format, and coloring of the sub-video-stream elements within the main video stream picture, and are the basis of the overlay synthesis.
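For illustration, the following C++ sketch shows one possible data structure for these four kinds of correspondence and how they might be evaluated against the main picture; the field names, the 1920x1080 main-picture size, and the example rule values are assumptions, not values taken from the present application.

```cpp
// Illustrative sketch only: representing and evaluating the four correspondences.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Correspondence {
    const char* packet;   // which packet of the second data the rule applies to
    int srcW, srcH;       // size of the packet picture
    double scale;         // proportional relationship
    int x, y;             // spatial region relationship: top-left corner in the main picture
    bool sameFormat;      // format matching relationship: false means conversion is needed
    int layer;            // layer superposition relationship: higher layers are drawn later
};

int main() {
    const int mainW = 1920, mainH = 1080;                 // assumed main-picture size
    std::vector<Correspondence> rules = {
        {"background", 1920, 1080, 1.0,   0,  0, true,  0},
        {"hat",         256,  128, 0.5, 820, 40, false, 2},
    };
    // Layer superposition: sort so lower layers are composited first.
    std::sort(rules.begin(), rules.end(),
              [](const Correspondence& a, const Correspondence& b) { return a.layer < b.layer; });
    for (const Correspondence& r : rules) {
        int dstW = static_cast<int>(r.srcW * r.scale);    // proportional relationship
        int dstH = static_cast<int>(r.srcH * r.scale);
        bool inside = r.x >= 0 && r.y >= 0 && r.x + dstW <= mainW && r.y + dstH <= mainH;
        std::printf("'%s': %dx%d at (%d,%d), layer %d, convert=%s, inside main picture=%s\n",
                    r.packet, dstW, dstH, r.x, r.y, r.layer,
                    r.sameFormat ? "no" : "yes", inside ? "yes" : "no");
    }
}
```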
  • Then, according to the above correspondences, the corresponding packet data of the sub-video stream is selected from the buffer sequence, subjected to the corresponding scaling, format conversion, positioning, and other processing, and then overlaid and synthesized with the main video stream picture, and the display device outputs the picture of the newly synthesized data.
  • Regarding the format matching relationship, when the formats of the divided packet data of the second data and of the first data are determined to be mutually compatible, no format conversion of the packet data is needed; otherwise, the format of the packet data to be overlaid with the first data is converted into a format compatible with that of the first data.
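For illustration, the following C++ sketch shows one possible format-matching check and conversion. The present application does not name concrete pixel formats; converting RGB565 packet data into an ARGB8888 first-data format is an assumed example, and only that direction of conversion is sketched.

```cpp
// Illustrative sketch only: format matching check with an assumed RGB565 -> ARGB8888 conversion.
#include <cstdint>
#include <vector>

enum class PixelFormat { RGB565, ARGB8888 };

struct Image {
    PixelFormat format;
    int width, height;
    std::vector<uint8_t> bytes;   // raw pixel bytes in the given format
};

static uint32_t rgb565ToArgb8888(uint16_t p) {
    uint32_t r = (p >> 11) & 0x1F, g = (p >> 5) & 0x3F, b = p & 0x1F;
    // Expand 5/6-bit channels to 8 bits and add an opaque alpha channel.
    r = (r << 3) | (r >> 2);
    g = (g << 2) | (g >> 4);
    b = (b << 3) | (b >> 2);
    return 0xFF000000u | (r << 16) | (g << 8) | b;
}

// Convert a packet image into the target format only when the formats differ.
Image matchFormat(const Image& packet, PixelFormat target) {
    if (packet.format == target) return packet;            // already compatible, no conversion
    // Only the RGB565 -> ARGB8888 direction is sketched here.
    Image out{target, packet.width, packet.height, {}};
    out.bytes.reserve(static_cast<size_t>(packet.width) * packet.height * 4);
    for (size_t i = 0; i + 1 < packet.bytes.size(); i += 2) {
        uint16_t p = static_cast<uint16_t>(packet.bytes[i]) |
                     static_cast<uint16_t>(packet.bytes[i + 1]) << 8;   // little-endian RGB565
        uint32_t argb = rgb565ToArgb8888(p);
        out.bytes.push_back(static_cast<uint8_t>(argb));          // B
        out.bytes.push_back(static_cast<uint8_t>(argb >> 8));     // G
        out.bytes.push_back(static_cast<uint8_t>(argb >> 16));    // R
        out.bytes.push_back(static_cast<uint8_t>(argb >> 24));    // A
    }
    return out;
}

int main() {
    Image packet{PixelFormat::RGB565, 2, 1, {0x00, 0xF8, 0xE0, 0x07}}; // one red, one green pixel
    Image converted = matchFormat(packet, PixelFormat::ARGB8888);
    return converted.format == PixelFormat::ARGB8888 ? 0 : 1;
}
```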
  • It can be seen that, with the embodiments of the present invention, in addition to simple multi-video-stream display such as the traditional picture-in-picture and picture-out-of-picture, composite display of video stream pictures can be performed; that is, objects or elements in other video streams can be combined to obtain entirely new image data for display, which satisfies users' special needs for picture processing and expands the ways multi-substream applications can be used.
  • In a more preferred embodiment, the present invention can perform overlay composite display for multiple substreams, and the number of substreams is not limited to two: within the processing capability of the device, any number of substreams can be parsed, segmented, and buffered in the manner described above for the second data before being overlaid and synthesized.
  • In addition, the first data can likewise be parsed and segmented, and the objects to be overlaid can be any elements in any substream, so that the ways of combining are more flexible and varied and the display effects are richer.
  • An application scenario of the embodiments of the present invention is given below. For example, the first data (the main video stream) is character image data; after the first data is received, it is decoded (a series of processes such as decompression, data sampling, quantization, decoding, dequantization, inverse transformation, and image reconstruction), parsed, segmented, and buffered, after which the character image of the first data can be divided into multiple parts (for example, three parts: the character's head, upper body, and lower body), and the parts are displayed separately in the display section.
  • The second data is a set of picture data (such as pictures of hats, tops, and pants on a website).
  • The formats of the first data and the second data are converted into a uniform format for the subsequent overlay processing (of course, if the two data are already in mutually compatible formats, this step can be omitted).
  • The objects in the processed first and second data are then classified and displayed.
  • After display, an object in the second data (for example, a hat, top, or pants) can be selected and adjusted to match the size and color of the character image in the first data, and the specific position in the first data that the object is to cover can be specified (for example, selecting a hat to cover the head region of the character image), so as to achieve entertainment purposes such as dressing up the character, as sketched below.
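For illustration, the following C++ sketch covers an assumed head region of the character image with a selected hat picture, scaling the hat to the region first. The image sizes, the region coordinates, and the ARGB8888 pixel layout are assumptions for the example.

```cpp
// Illustrative sketch only: covering the head region of the character image with a hat picture.
#include <cstdint>
#include <vector>

struct Argb {
    int w, h;
    std::vector<uint32_t> px;        // ARGB8888, row-major, size w*h
};

// Nearest-neighbour scale of the packet picture to the size of the target region.
Argb scaleTo(const Argb& src, int dstW, int dstH) {
    Argb out{dstW, dstH, std::vector<uint32_t>(static_cast<size_t>(dstW) * dstH)};
    for (int y = 0; y < dstH; ++y)
        for (int x = 0; x < dstW; ++x)
            out.px[static_cast<size_t>(y) * dstW + x] =
                src.px[static_cast<size_t>(y * src.h / dstH) * src.w + (x * src.w / dstW)];
    return out;
}

// Cover the given region of the base picture, skipping fully transparent hat pixels.
void coverRegion(Argb& base, const Argb& hat, int left, int top, int regionW, int regionH) {
    Argb fitted = scaleTo(hat, regionW, regionH);
    for (int y = 0; y < regionH; ++y)
        for (int x = 0; x < regionW; ++x) {
            int bx = left + x, by = top + y;
            if (bx < 0 || by < 0 || bx >= base.w || by >= base.h) continue;
            uint32_t p = fitted.px[static_cast<size_t>(y) * regionW + x];
            if ((p >> 24) != 0)                               // keep base where the hat is transparent
                base.px[static_cast<size_t>(by) * base.w + bx] = p;
        }
}

int main() {
    Argb character{640, 480, std::vector<uint32_t>(640u * 480u, 0xFF202020)};
    Argb hat{64, 32, std::vector<uint32_t>(64u * 32u, 0xFFAA3311)};
    coverRegion(character, hat, /*left=*/280, /*top=*/40, /*regionW=*/120, /*regionH=*/60); // head region
    return 0;
}
```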
  • Another application scenario of the embodiments of the present invention is given below. For example, the first data is video stream data containing character actions, and the second data is video stream data containing different weather conditions.
  • After the received first data is processed, the character action video of the first data is divided into multiple parts (such as the character riding a bicycle, the character running, and the character wearing a mask); the second data is then processed, and the different weather conditions of the second data are divided into multiple parts (such as sunny weather, rainstorm weather, and sandstorm weather).
  • The various correspondences between the first and second data are determined, and the parts of the second data are processed according to those correspondences.
  • The corresponding parts of the processed first and second data are then overlaid, synthesized, and displayed (for example, corresponding to the bicycle-riding part of the first data video, the second data shows sunny weather as the background; corresponding to the running part of the first data video, the second data shows rainstorm weather as the background; and corresponding to the mask-wearing part of the first data video, the second data shows sandstorm weather as the background), as in the mapping sketched below.
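For illustration, the following C++ sketch pairs each character-action segment of the first data with a weather-background segment of the second data using a simple lookup table; the segment names and the table itself are example assumptions, not values from the present application.

```cpp
// Illustrative sketch only: a correspondence table from action segments to weather backgrounds.
#include <cstdio>
#include <map>
#include <string>

int main() {
    // Correspondence chosen by the user or preset: action segment -> weather background.
    const std::map<std::string, std::string> actionToWeather = {
        {"riding-bicycle", "sunny"},
        {"running",        "rainstorm"},
        {"wearing-mask",   "sandstorm"},
    };
    // For each segment of the first data, pick the background packet of the second data to overlay.
    for (const auto& [action, weather] : actionToWeather)
        std::printf("segment '%s' -> composite with '%s' background from the second data\n",
                    action.c_str(), weather.c_str());
}
```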
  • Corresponding to the method, the present invention also includes a data overlay display synthesis system, which is generally represented in the form of functional modules corresponding to the steps of the method; as shown in FIG. 2, the system includes a first data processing module, a second data processing module, a relationship specifying module, an overlay synthesis module, and a display module, wherein:
  • the first data processing module is configured to receive an input of the first data and buffer it after decoding;
  • the second data processing module is configured to receive and decode an input of the second data, parse the objects in the second data, divide the second data into a plurality of packet data, and buffer each packet data;
  • the relationship specifying module is configured to determine the various correspondences (including a proportional relationship, a spatial region relationship, a format matching relationship, a layer superposition relationship, and so on) between each packet data of the divided second data and the first data;
  • the overlay synthesis module is configured to select, according to the correspondences, the corresponding packet data of the divided second data and overlay and synthesize it with the first data; and
  • the display module is configured to display the synthesized data.
  • The system may further include an MCU program control unit, which is configured to schedule and control all system processes and tasks.
  • Finally, the above data overlay display synthesis method and system are mainly applied in a display device, which may be any product or component having a display function, such as a digital television, a personal computer, a notebook computer, a digital photo frame, or any of various mobile terminals (such as a PDA, a mobile phone, a tablet computer, or an electronic paper book).
  • In summary, the above embodiments of the present invention provide a technical solution for interactively displaying multiple video stream data.
  • With the technical solution of the present invention, in addition to simple multi-video-stream display such as the traditional picture-in-picture and picture-out-of-picture, composite display of video stream pictures can be performed; that is, objects or elements in other video streams can be combined to obtain entirely new image data for display, which satisfies users' special needs for picture processing and expands the ways multi-substream applications can be used.
  • The present invention provides a data overlay display synthesis method and system, and a display device.
  • In addition to realizing simple multi-video-stream display, the data overlay display synthesis method and system can combine objects or elements in multiple video streams to obtain entirely new image data for display, which satisfies users' special needs for picture processing, expands the ways multi-substream applications can be used, and has industrial applicability.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

A data overlay display synthesis method and system, and a display device. The method includes: receiving an input of first data and buffering it after decoding; receiving and decoding an input of second data, parsing the objects in the second data, dividing the second data into a plurality of packet data, and buffering each packet data of the second data; determining the various correspondences between each packet data of the second data and the first data; according to the correspondences, selecting the corresponding packet data of the second data and overlaying and synthesizing it with the first data; and displaying the synthesized data. In addition to realizing simple multi-video-stream display, the present invention can combine objects or elements in multiple video streams to obtain entirely new image data for display, which satisfies users' special needs for picture processing and expands the ways multi-substream applications can be used.

Description

Data overlay display synthesis method and system, and display device
Technical Field
The present invention relates to the field of display device technology, and in particular to a data overlay display synthesis method and system, and a display device.
Background Art
With the development of multimedia, network, and communication technologies, streaming media applications are becoming ubiquitous, and video or image display based on streaming media, such as video on demand, distance education, social networking, and e-commerce, has become increasingly popular. In video or image display, modes such as picture-in-picture (PIP) and picture-out-of-picture (POP) found in early televisions are all based on multi-substream application scenarios; as applications for the various data substreams grow richer, the application scenarios also become more diverse.
In the course of implementing the present invention, however, the inventors found that existing multi-substream video or image display merely displays each of the substreams independently, with no interactive processing between them. For example, in the prior-art method for picture-in-picture playback between a main video and a sub-video, the sub-video can be placed at a corner of the main video picture for overlapping display, and the sub-video and main video pictures can be played at different progress points and rates; during playback, however, the sub-video picture is still only the sub-video picture and the main video picture is still only the main video picture. The two substream pictures are independent of each other, cannot interact, and their content cannot be superimposed or combined, which greatly limits the expansion of multi-substream applications and the diversification of application scenarios.
Summary of the Invention
The technical problem to be solved by the present invention is as follows: in view of the above shortcomings, and in order to solve the problem that multi-substream picture display in the prior art cannot interact, the present invention provides a data overlay display synthesis method and system, and a display device.
To solve the above technical problem, in one aspect, the present invention provides a data overlay display synthesis method, the method comprising the steps of: S1, receiving an input of first data and buffering it after decoding; S2, receiving and decoding an input of second data, parsing the objects in the second data, dividing the second data into a plurality of packet data, and buffering each packet data of the second data; S3, determining the various correspondences between each packet data of the second data and the first data; S4, according to the correspondences, selecting the corresponding packet data of the second data and overlaying and synthesizing it with the first data; and S5, displaying the synthesized data.
In another aspect, the present invention also provides a data overlay display synthesis system, the system comprising a first data processing module, a second data processing module, a relationship specifying module, an overlay synthesis module, and a display module, wherein:
the first data processing module is configured to receive an input of the first data and buffer it after decoding;
the second data processing module is configured to receive and decode an input of the second data, parse the objects in the second data, divide the second data into a plurality of packet data, and buffer each packet data of the second data;
the relationship specifying module is configured to determine the various correspondences between each packet data of the second data and the first data;
the overlay synthesis module is configured to select, according to the correspondences, the corresponding packet data of the second data and overlay and synthesize it with the first data; and
the display module is configured to display the synthesized data.
Finally, the present invention also provides a display device in which the data overlay display synthesis system described above is integrated.
The above technical solutions have the following advantage: in the technical solution of the present invention, in addition to simple multi-video-stream display such as the traditional picture-in-picture and picture-out-of-picture, composite display of video stream pictures can be performed; that is, objects or elements in other video streams can be combined to obtain entirely new image data for display, which satisfies users' special needs for picture processing and expands the ways multi-substream applications can be used.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a data overlay display synthesis method in an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a data overlay display synthesis system in an embodiment of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
In the embodiments of the present invention, processing may be performed on the basis of multiple video streams; for ease of understanding, processing based on two video streams is introduced first. In the two-video-stream embodiment, the main video stream is the data stream of the main channel, such as a television signal, a multimedia signal, a network streaming media signal, or another video signal that can be played and displayed locally; the sub-video stream signal is a video stream signal obtained through a sub-channel, such as a second-channel television signal, an OSD signal, an EPG signal, or a network streaming media signal obtained over the Internet. The embodiments of the present invention are mainly used to synthesize an overlay display of the two video stream signals.
The embodiments of the present invention mainly provide a method for interactively displaying multiple video stream data. In a preferred embodiment, after the first data (Streaming1) and the second data (Streaming2) are parsed and segmented, the segmented data is overlaid on the base data in a specified display area. The content of the overlaid area can be reselected through an adjuster setting; the newly selected data replaces the previous data and a new set of display data is regenerated, so data replacement and overlay display can be performed repeatedly. Specifically, the method of the embodiments of the present invention includes the following steps:
receiving an input of first data and buffering it after decoding;
receiving and decoding an input of second data, parsing the objects in the second data, dividing the second data into a plurality of packet data, and buffering each packet data;
determining the various correspondences (including a proportional relationship, a spatial region relationship, a format matching relationship, a layer superposition relationship, and so on) between each packet data of the divided second data and the first data;
according to the correspondences, selecting the corresponding packet data of the divided second data and overlaying and synthesizing it with the first data; and
displaying the synthesized data.
Specifically, in the preferred embodiment of the present invention, the description takes as an example a main video stream that is a television signal and a sub-video stream that is a network streaming media signal. Digital television technology is already very mature; the demodulation, image processing, and display principles of television signals in this field are well-known techniques, and those skilled in the art are generally able to handle the processing of the video stream of a digital television signal, so it will not be described in detail here. The focus of the embodiments of the present invention is the overlay synthesis of the two video stream pictures; in this preferred embodiment, the data of the second video stream (i.e., the sub-video stream) is first parsed, classified, and buffered. The sub-video stream in the embodiments of the present invention is exemplified by network streaming media in Microsoft Corporation's ASF (Advanced Streaming Format); ASF is a data format that transmits multimedia information such as audio, video, images, and control command scripts in the form of network data packets to deliver streaming multimedia content.
The data parsing in the preferred embodiment of the present invention is mainly performed on the logical objects of the ASF file: that is, the Header Object, the Data Object, and the Index Object are parsed in depth. Further, for example, by parsing the ASF Header Object, a file attribute object, a stream attribute object, a content description object, a partial download object, a stream organization object, a scalable object, a priority object, a mutual exclusion object, a media inter-dependency object, a level object, an index parameter object, and the like can be obtained; by parsing the Index Object, information such as the media data, its storage form, data length, and data ordering can be obtained; and by parsing the index information, information related to the media time can be obtained. The required information is obtained in the above manner, the acquired information is then classified, and the classified packet data is stored in a buffer sequence.
Then, according to the user's specification or a preset setting, the correspondence between each packet data and the main video stream data is determined. Usually a packet contains multiple elements, layers, attributes, and so on; to synthesize data, the various correspondences between the data to be used in the packet and the main video stream data need to be determined. The correspondences generally include a proportional relationship, a spatial region relationship, a format matching relationship, and a layer superposition relationship; these relationships indicate the specific position, size, format, and coloring of the sub-video-stream elements within the main video stream picture, and are the basis of the overlay synthesis.
Then, according to the above correspondences, the corresponding packet data of the sub-video stream is selected from the buffer sequence, subjected to the corresponding scaling, format conversion, positioning, and other processing, and then overlaid and synthesized with the picture of the main video stream data, and the display device outputs the picture of the newly synthesized data; an illustrative blending sketch is given below. Regarding the format matching relationship, when the formats of the divided packet data of the second data and of the first data are determined to be mutually compatible, no format conversion of the packet data is needed; otherwise, the format of the packet data to be overlaid with the first data is converted into a format compatible with that of the first data.
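For illustration, the following C++ sketch overlays one packet picture onto the main picture with per-pixel alpha blending after positioning; the ARGB8888 pixel layout, the blend rule, and the example sizes and positions are assumptions, since the present application only names scaling, format conversion, and positioning in general terms.

```cpp
// Illustrative sketch only: positioning a packet picture and alpha-blending it onto the main picture.
#include <cstdint>
#include <vector>

struct Picture {
    int w, h;
    std::vector<uint32_t> px;   // ARGB8888, row-major
};

static uint32_t blend(uint32_t over, uint32_t under) {
    uint32_t a = over >> 24;
    auto mix = [a](uint32_t o, uint32_t u) { return (o * a + u * (255 - a)) / 255; };
    uint32_t r = mix((over >> 16) & 0xFF, (under >> 16) & 0xFF);
    uint32_t g = mix((over >> 8) & 0xFF, (under >> 8) & 0xFF);
    uint32_t b = mix(over & 0xFF, under & 0xFF);
    return 0xFF000000u | (r << 16) | (g << 8) | b;
}

// Blend `packet` onto `base` with its top-left corner at (left, top).
void overlay(Picture& base, const Picture& packet, int left, int top) {
    for (int y = 0; y < packet.h; ++y)
        for (int x = 0; x < packet.w; ++x) {
            int mx = left + x, my = top + y;
            if (mx < 0 || my < 0 || mx >= base.w || my >= base.h) continue;
            uint32_t& dst = base.px[static_cast<size_t>(my) * base.w + mx];
            dst = blend(packet.px[static_cast<size_t>(y) * packet.w + x], dst);
        }
}

int main() {
    Picture mainPic{1280, 720, std::vector<uint32_t>(1280u * 720u, 0xFF000000)};
    Picture packet{320, 180, std::vector<uint32_t>(320u * 180u, 0x80FFFFFF)}; // half-transparent white
    overlay(mainPic, packet, /*left=*/940, /*top=*/20);   // positioned near the top-right corner
    return 0;
}
```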
It can be seen that, with the embodiments of the present invention, in addition to simple multi-video-stream display such as the traditional picture-in-picture and picture-out-of-picture, composite display of video stream pictures can be performed; that is, objects or elements in other video streams can be combined to obtain entirely new image data for display, which satisfies users' special needs for picture processing and expands the ways multi-substream applications can be used.
In a more preferred embodiment, the present invention can perform overlay composite display for multiple substreams, and the number of substreams is not limited to two: within the processing capability of the device, any number of substreams can be parsed, segmented, and buffered in the manner described above for the second data before being overlaid and synthesized. In addition, the first data can likewise be parsed and segmented, and the objects to be overlaid can be any elements in any substream, so that the ways of combining are more flexible and varied and the display effects are richer.
An application scenario of the embodiments of the present invention is given below. For example, the first data (the main video stream) is character image data; after the first data is received, it is decoded (a series of processes such as decompression, data sampling, quantization, decoding, dequantization, inverse transformation, and image reconstruction), parsed, segmented, and buffered, after which the character image of the first data can be divided into multiple parts (for example, three parts: the character's head, upper body, and lower body), and the parts are displayed separately in the display section. The second data is a set of picture data (such as pictures of hats, tops, and pants on a website). It is worth noting here that the different classes of picture data may be stored separately in different storage units on the server side (for example, hats, tops, and pants stored in different locations by category) or stored together; if they are stored together, the data needs to be classified at its storage location (for example, by adding an attribute tag to data of the same class, or by distinguishing data by file name).
Subsequently, the formats of the first data and the second data are converted into a uniform format for the subsequent overlay processing (of course, if the two data are already in a mutually compatible, uniform format, this step can be omitted). The objects in the processed first and second data are then classified and displayed. After display, an object in the second data (for example, a hat, top, or pants) can be selected and adjusted to match the size and color of the character image in the first data, and the specific position in the first data that the object is to cover can be specified (for example, selecting a hat to cover the head region of the character image), so as to achieve entertainment purposes such as dressing up the character.
Another application scenario of the embodiments of the present invention is given below. For example, the first data is video stream data containing character actions and the second data is video stream data containing different weather conditions. After the received first data is processed, the character action video of the first data is divided into multiple parts (such as the character riding a bicycle, the character running, and the character wearing a mask); the second data is then processed, and the different weather conditions of the second data are divided into multiple parts (such as sunny weather, rainstorm weather, and sandstorm weather). The various correspondences between the first and second data are determined, and the parts of the second data are processed according to those correspondences. The corresponding parts of the processed first and second data are then overlaid, synthesized, and displayed (for example, corresponding to the bicycle-riding part of the first data video, the second data shows sunny weather as the background; corresponding to the running part of the first data video, the second data shows rainstorm weather as the background; and corresponding to the mask-wearing part of the first data video, the second data shows sandstorm weather as the background).
A person of ordinary skill in the art can understand that all or some of the steps of the methods in the above embodiments can be implemented by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, includes the steps of the methods in the above embodiments, and the storage medium may be a ROM/RAM, a magnetic disk, an optical disc, a memory card, or the like. Therefore, corresponding to the method of the present invention, the present invention also includes a data overlay display synthesis system, which is generally represented in the form of functional modules corresponding to the steps of the method; as shown in FIG. 2, the system includes a first data processing module, a second data processing module, a relationship specifying module, an overlay synthesis module, and a display module, wherein:
the first data processing module is configured to receive an input of the first data and buffer it after decoding;
the second data processing module is configured to receive and decode an input of the second data, parse the objects in the second data, divide the second data into a plurality of packet data, and buffer each packet data;
the relationship specifying module is configured to determine the various correspondences (including a proportional relationship, a spatial region relationship, a format matching relationship, a layer superposition relationship, and so on) between each packet data of the divided second data and the first data;
the overlay synthesis module is configured to select, according to the correspondences, the corresponding packet data of the divided second data and overlay and synthesize it with the first data; and
the display module is configured to display the synthesized data.
The system may further include an MCU program control unit, which is configured to schedule and control all system processes and tasks; a module-level sketch is given below.
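For illustration, the following C++ sketch shows one possible decomposition into the five modules named above plus an MCU-style control unit that schedules them for one pass of the pipeline; the class and method names are assumptions and are not an API defined by the present application.

```cpp
// Illustrative sketch only: module decomposition with an MCU-style scheduler.
#include <cstdint>
#include <map>
#include <string>
#include <vector>

using Buffer = std::vector<uint8_t>;

struct FirstDataProcessor {                 // receives, decodes, and buffers the first data
    Buffer process(const Buffer& input) { return input; }   // decoding stubbed out
};

struct SecondDataProcessor {                // decodes, parses, segments, and buffers the second data
    std::map<std::string, Buffer> process(const Buffer& input) {
        return {{"packet-0", input}};       // segmentation stubbed out
    }
};

struct RelationshipSpecifier {              // records the correspondences chosen by the user or preset
    std::map<std::string, std::string> specify() { return {{"packet-0", "layer:1;pos:0,0"}}; }
};

struct OverlaySynthesizer {                 // overlays the selected packets onto the first data
    Buffer compose(const Buffer& first, const std::map<std::string, Buffer>&,
                   const std::map<std::string, std::string>&) { return first; }
};

struct DisplayModule {
    void show(const Buffer&) {}             // hands the synthesized picture to the panel
};

// MCU program control unit: schedules the modules for one pass of the pipeline.
struct McuControlUnit {
    FirstDataProcessor first;
    SecondDataProcessor second;
    RelationshipSpecifier relations;
    OverlaySynthesizer overlay;
    DisplayModule display;

    void runOnce(const Buffer& firstIn, const Buffer& secondIn) {
        Buffer base = first.process(firstIn);
        auto packets = second.process(secondIn);
        auto rules = relations.specify();
        display.show(overlay.compose(base, packets, rules));
    }
};

int main() {
    McuControlUnit mcu;
    mcu.runOnce(Buffer(16, 0), Buffer(16, 0));
}
```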
Finally, the above data overlay display synthesis method and system of the present invention are mainly applied in a display device, which may be any product or component having a display function, such as a digital television, a personal computer, a notebook computer, a digital photo frame, or any of various mobile terminals (such as a PDA, a mobile phone, a tablet computer, or an electronic paper book).
In summary, the above embodiments of the present invention provide a technical solution for interactively displaying multiple video stream data. With the technical solution of the present invention, in addition to simple multi-video-stream display such as the traditional picture-in-picture and picture-out-of-picture, composite display of video stream pictures can be performed; that is, objects or elements in other video streams can be combined to obtain entirely new image data for display, which satisfies users' special needs for picture processing and expands the ways multi-substream applications can be used.
Industrial Applicability
The present invention provides a data overlay display synthesis method and system, and a display device. In addition to realizing simple multi-video-stream display, objects or elements in multiple video streams can be combined to obtain entirely new image data for display, which satisfies users' special needs for picture processing, expands the ways multi-substream applications can be used, and has industrial applicability.

Claims (10)

  1. A data overlay display synthesis method, characterized in that the method comprises the steps of:
    S1, receiving an input of first data and buffering it after decoding;
    S2, receiving and decoding an input of second data, parsing the objects in the second data, dividing the second data into a plurality of packet data, and buffering each packet data of the second data;
    S3, determining the various correspondences between each packet data of the second data and the first data;
    S4, according to the correspondences, selecting the corresponding packet data of the second data and overlaying and synthesizing it with the first data; and
    S5, displaying the synthesized data.
    2. The method according to claim 1, characterized in that in step S1, after the first data is decoded, the objects in the first data are parsed, the first data is divided into a plurality of packet data, and each packet data of the first data is buffered;
    in step S3, the various correspondences between each packet data of the second data and each packet data of the first data are determined; and
    in step S4, the corresponding packet data of the second data and the packet data of the first data are selected and overlaid and synthesized.
    3. The method according to claim 1 or 2, characterized in that the first data and the second data are each at least one video stream data.
    4. The method according to claim 1 or 2, characterized in that the correspondences include a proportional relationship, a spatial region relationship, a format matching relationship, and a layer superposition relationship.
    5. The method according to claim 1 or 2, characterized in that in step S4, after the corresponding packet data is selected, the packet data is first subjected to corresponding scaling, format conversion, and positioning processing, and the pictures of the processed data are then overlaid and synthesized.
    6. A data overlay display synthesis system, characterized in that the system comprises a first data processing module, a second data processing module, a relationship specifying module, an overlay synthesis module, and a display module, wherein:
    the first data processing module is configured to receive an input of the first data and buffer it after decoding;
    the second data processing module is configured to receive and decode an input of the second data, parse the objects in the second data, divide the second data into a plurality of packet data, and buffer each packet data of the second data;
    the relationship specifying module is configured to determine the various correspondences between each packet data of the second data and the first data;
    the overlay synthesis module is configured to select, according to the correspondences, the corresponding packet data of the second data and overlay and synthesize it with the first data; and
    the display module is configured to display the synthesized data.
    7. The system according to claim 6, characterized in that the system further comprises an MCU program control unit, and the MCU program control unit is configured to schedule and control all system processes and tasks.
    8. The system according to claim 6, characterized in that the first data processing module further parses the objects in the first data, divides the first data into a plurality of packet data, and buffers each packet data of the first data.
    9. The system according to any one of claims 6 to 8, characterized in that the first data and the second data are each at least one video stream data.
    10. A display device, characterized in that the data overlay display synthesis system according to any one of claims 6 to 9 is integrated in the display device.
PCT/CN2012/074978 2012-04-26 2012-05-02 Data overlay display synthesis method and system, and display device WO2013159368A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210126488.3 2012-04-26
CN2012101264883A CN102685600A (zh) 2012-04-26 Data overlay display synthesis method and system, and display device

Publications (1)

Publication Number Publication Date
WO2013159368A1 (zh) 2013-10-31

Family

ID=46816858

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2012/074978 WO2013159368A1 (zh) 2012-04-26 2012-05-02 Data overlay display synthesis method and system, and display device

Country Status (2)

Country Link
CN (1) CN102685600A (zh)
WO (1) WO2013159368A1 (zh)


Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104007963B (zh) * 2014-05-06 2017-11-07 北京猎豹网络科技有限公司 生成浏览器可读的皮肤文件的方法和装置
CN105376629B (zh) * 2014-08-25 2019-05-10 深圳市同方多媒体科技有限公司 一种智能电视电视节目的浏览方法及智能电视
CN105578204B (zh) * 2014-10-14 2020-10-30 海信视像科技股份有限公司 一种多视频数据显示的方法及装置
US11089214B2 (en) * 2015-12-17 2021-08-10 Koninklijke Kpn N.V. Generating output video from video streams
CN105894440B (zh) * 2016-03-30 2019-02-01 福州瑞芯微电子股份有限公司 一种图像多层数据处理方法和装置
CN106358049A (zh) * 2016-08-31 2017-01-25 刘永锋 一种视频回放方法
CN107135415A (zh) * 2017-04-11 2017-09-05 青岛海信电器股份有限公司 视频字幕处理方法及装置
CN109101148A (zh) * 2018-06-28 2018-12-28 珠海麋鹿网络科技有限公司 一种基于操作对象移动获取对象的窗口响应方法及其装置
CN113791858A (zh) * 2021-09-10 2021-12-14 中国第一汽车股份有限公司 一种显示方法、装置、设备及存储介质


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9826695D0 (en) * 1998-12-05 1999-01-27 Koninkl Philips Electronics Nv Television receiver
CN1677390A (zh) * 2005-02-02 2005-10-05 广州网上新生活软件技术服务有限公司 一种嵌入式系统多种字体及大小和样式的显示系统和方法
CN100539706C (zh) * 2005-08-26 2009-09-09 逐点半导体(上海)有限公司 中间数据的动态显示装置及显示方法
CN101901456A (zh) * 2010-08-31 2010-12-01 刘利华 一种基于网络的时尚产品的展示与交易方法及系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1453719A (zh) * 2002-04-28 2003-11-05 上海友讯网络资讯有限公司 可自由组合的虚拟形象虚拟场景的形成方法及其系统
CN101098241A (zh) * 2006-06-26 2008-01-02 腾讯科技(深圳)有限公司 虚拟形象实现方法及其系统
CN101183450A (zh) * 2006-11-14 2008-05-21 朱滨 虚拟服装真人试穿系统及其构建方法
CN101188576A (zh) * 2007-12-29 2008-05-28 腾讯科技(深圳)有限公司 一种动态用户形象的实现方法和装置

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111356002A (zh) * 2018-12-24 2020-06-30 海能达通信股份有限公司 一种视频播放方法及视频播放器

Also Published As

Publication number Publication date
CN102685600A (zh) 2012-09-19

Similar Documents

Publication Publication Date Title
WO2013159368A1 (zh) 数据叠加显示合成方法和系统及显示设备
CN103310820B (zh) 一种对多媒体播放器进行优化的方法
TWI479332B (zh) 視訊播放系統中的選擇性硬體加速
US8442121B2 (en) Method, apparatus and system for controlling a scene structure of multiple channels to be displayed on a mobile terminal in a mobile broadcast system
CN103905744B (zh) 一种渲染合成方法及系统
TWI707582B (zh) 傳送及接收媒體資料之方法及裝置
CN102026017B (zh) 一种视频解码高效测试方法
KR101416062B1 (ko) 무선 통신 시스템에서의 데이터 전송률 조절 방법 및 장치
CN107888567A (zh) 一种复合多媒体信号的传输方法及装置
US11039212B2 (en) Reception device
CN105491396A (zh) 一种多媒体信息处理方法及服务器
CN103929640B (zh) 用于管理视频流播的技术
CN104822070A (zh) 多路视频流播放方法及装置
CN106792155A (zh) 一种多视频流的视频直播的方法及装置
CN111147801A (zh) 一种视联网终端的视频数据处理方法和装置
US20180302636A1 (en) Method of mixing video bitstreams and apparatus performing the method
CN102137253A (zh) 图片处理的方法、终端及服务器
CN102883213B (zh) 字幕提取方法及装置
CN102047662B (zh) 编码器
US8296796B2 (en) Digital broadcasting receiver and a data processing method
CN109819343A (zh) 一种字幕处理方法、装置及电子设备
CN109905766A (zh) 一种动态视频海报生成方法、系统、装置及存储介质
CN108235144A (zh) 播放内容获取方法、装置及计算设备
CN115022713B (zh) 视频数据处理方法及装置、存储介质及电子设备
CN107197287A (zh) 一种基于arm处理器的视频录播方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12875203

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12875203

Country of ref document: EP

Kind code of ref document: A1