CN111787397A - Method for rendering multiple video channels on the same canvas based on D3D


Info

Publication number
CN111787397A
Authority
CN
China
Prior art keywords
original video
video
rendering
screen surface
yuv
Prior art date
2020-08-06
Legal status
Granted
Application number
CN202010782147.6A
Other languages
Chinese (zh)
Other versions
CN111787397B (en)
Inventor
任玉宝
王丹
王小虎
刘其峰
师少飞
王继能
Current Assignee
Shanghai Sailing Information Technology Co., Ltd.
Original Assignee
Shanghai Sailing Information Technology Co., Ltd.
Priority date
2020-08-06
Filing date
2020-08-06
Publication date
2020-10-16
Application filed by Shanghai Sailing Information Technology Co., Ltd.
Priority to CN202010782147.6A
Publication of CN111787397A (2020-10-16)
Application granted; publication of CN111787397B (2023-04-07)
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012: Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484: Interaction techniques based on GUIs for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842: Selection of displayed objects or displayed text elements
    • G06F3/04845: Interaction techniques based on GUIs for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H04N23/745: Detection of flicker frequency or suppression of flicker wherein the flicker is caused by illumination, e.g. due to fluorescent tube illumination or pulsed LED illumination
    • G06F2203/00: Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048: Indexing scheme relating to G06F3/048
    • G06F2203/04806: Zoom, i.e. interaction techniques or interactors for controlling the zooming operation
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a method for rendering multiple video channels on the same D3D canvas. Multi-picture rendering is completed by cropping and copying the same YUV video data to different off-screen surfaces, so that multiple pictures share a single D3D device object. The method is mainly applied in the digital zoom module. It removes the difficulty of managing and maintaining multiple window layers, reduces the picture flicker caused by window resizing, and makes the pictures join seamlessly in visual effect; at the same time it reduces GPU and memory usage, improves software performance, and breaks through the rendering and memory bottlenecks of multi-channel video, especially nine channels and more.

Description

Method for rendering multiple video channels on the same canvas based on D3D
Technical Field
The invention relates to the field of multi-channel video rendering, and in particular to a method for rendering multiple video channels on the same D3D canvas.
Background
With the continuous innovation and expansion of informatization, information data is increasingly combined with emerging fields such as big data, the Internet of Things and artificial intelligence, driving competition within and across industries as well as cross-industry integration and adjustment. As basic data of information construction, networked public-safety video surveillance plays a significant role in the whole undertaking, and since video data serves as the basic data of a surveillance networking platform, the authenticity and reliability of that data are particularly important.
When collecting and previewing video data, the video usually needs to be locally magnified so that details of a fixed region can be examined conveniently. During local magnification, the video stream rendering module has to overlap two windows to render the magnified YUV data and the original YUV data separately, so that the original video and the magnified video play synchronously. In existing video rendering technology, however, when multiple video channels are displayed synchronously the decoded video data is split into several copies that are rendered separately, so several copies of YUV data and several D3D device objects exist; multiple windows and rendering modules must then be managed and maintained, which affects synchronous playback. The magnified stream and the original stream are not effectively multiplexed, and when the number of channels is large the window redundancy is high, which degrades performance.
Accordingly, those skilled in the art have endeavored to develop a method that reduces or eliminates the need to manage and maintain multiple windows and rendering modules while effectively improving performance.
Disclosure of Invention
In view of the above drawbacks of the prior art, the technical problem to be solved by the present invention is how to render multiple video channels while managing and maintaining as few windows and rendering modules as possible, and with improved performance.
In order to achieve the above object, the present invention provides a method for rendering multiple video channels on the same D3D canvas, comprising the following steps (an illustrative Direct3D 9 sketch of step 1 follows the list):
Step 1, initialize the rendering module:
1.1) create one D3D object;
1.2) acquire the display device;
1.3) create one D3D device;
1.4) create an original video off-screen surface;
1.5) create a non-original video off-screen surface.
Step 2, prepare the data:
2.1) receive the YUV data of the original video;
2.2) check the size of the original video's YUV data against the size of the original video off-screen surface; if they differ, destroy the original video off-screen surface and go back to step 1.4;
2.3) lock the original video off-screen surface;
2.4) copy the original video's YUV data to the original video off-screen surface;
2.5) unlock the original video off-screen surface;
2.6) lock the non-original video off-screen surface;
2.7) convert the non-original video's YUV data and copy the converted data to the non-original video off-screen surface;
2.8) unlock the non-original video off-screen surface.
Step 3, render the pictures:
3.1) clear the D3D device;
3.2) begin the stage scene;
3.3) linearly stretch the original video off-screen surface to the video window size and copy it into the D3D device;
3.4) linearly stretch the non-original video off-screen surface to the size of the non-original video display area and copy it into the D3D device;
3.5) end the stage scene;
3.6) present the D3D device's stage scene, completing the rendering of the original and non-original video pictures.
Step 4, destroy the resources and exit.
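The patent publishes no source code, but step 1 maps directly onto the Direct3D 9 API. The sketch below shows one plausible initialization; the function name, window handle, surface sizes and the YV12 FourCC format are illustrative assumptions, and error handling is omitted.

```cpp
// Minimal sketch of step 1 (rendering-module initialization) with Direct3D 9.
// Assumptions: windowed mode, YV12 (YUV 4:2:0) off-screen plain surfaces.
#include <d3d9.h>

IDirect3D9*        g_d3d      = nullptr;  // 1.1) the single D3D object
IDirect3DDevice9*  g_device   = nullptr;  // 1.3) the single D3D device
IDirect3DSurface9* g_origSurf = nullptr;  // 1.4) original video off-screen surface
IDirect3DSurface9* g_zoomSurf = nullptr;  // 1.5) non-original (zoomed) surface

bool InitRenderer(HWND hwnd, UINT origW, UINT origH, UINT zoomW, UINT zoomH) {
    g_d3d = Direct3DCreate9(D3D_SDK_VERSION);                       // 1.1

    D3DDISPLAYMODE mode = {};                                       // 1.2
    g_d3d->GetAdapterDisplayMode(D3DADAPTER_DEFAULT, &mode);

    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed         = TRUE;
    pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
    pp.BackBufferFormat = mode.Format;
    g_d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,   // 1.3
                        D3DCREATE_SOFTWARE_VERTEXPROCESSING, &pp, &g_device);

    // 1.4 / 1.5: plain off-screen surfaces that will hold raw YV12 frames.
    const D3DFORMAT yv12 = (D3DFORMAT)MAKEFOURCC('Y', 'V', '1', '2');
    g_device->CreateOffscreenPlainSurface(origW, origH, yv12,
                                          D3DPOOL_DEFAULT, &g_origSurf, nullptr);
    g_device->CreateOffscreenPlainSurface(zoomW, zoomH, yv12,
                                          D3DPOOL_DEFAULT, &g_zoomSurf, nullptr);
    return g_device && g_origSurf && g_zoomSurf;
}
```

Both surfaces hang off the one device created in step 1.3; this is the single shared D3D device object that the abstract describes.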
Further, the method can be applied to a digital zoom function; in this application, the non-original video is the digitally zoomed video.
Further, in step 2.7, the conversion processes the user-selected portion of the YUV data, according to the region selected by the user.
Further, the specific application steps are as follows:
Application step 1: the user turns on the digital zoom function by operating the OCX control;
Application step 2: the OCX passes the preview window handle and the original video window handle to the playing component through the TVSDK;
Application step 3: the user selects the video picture region to be magnified with the box-selection tool;
Application step 4: the playing component receives and decodes the stream, then crops and converts one frame of data into the YUV of the picture portion selected by the user;
Application step 5: the YUV of the complete frame and the YUV of the user-selected portion are rendered and displayed separately, using the method for rendering multiple video channels on the same D3D canvas.
Further, in application step 3, the drawing of the box-selection tool is implemented in the rendering module, the digital zoom function is encapsulated in the playing component, and only the playing handle needs to be passed when the digital zoom tool is used.
Further, in application step 3, the attribute update process of the box-selection tool is:
Application step 3.1: take over the window procedure of the video preview window when the channel's rendering module is initialized;
Application step 3.2: capture the user's mouse-down events in the window procedure and analyze the behavior;
Application step 3.3: if the drag position is at one of the four corners of the box-selection tool, stretch the rectangular region; otherwise, move it;
Application step 3.4: restore the window procedure when digital zoom is stopped or the rendering module is released.
Further, in application step 3, the box-selection tool is drawn with GDI after the YUV picture is displayed.
Further, in application step 4, an H.264 stream is decoded.
Further, in application step 4, the decoded H.264 stream is stored in an AVFrame structure; when converting to YUV, the start and end positions in width and height are calculated from the user-selected region, and the AVFrame is selectively converted to generate YUV data for the selected region only.
Further, in step 3.3, the linear stretching stretches the canvas to the window size according to a linear interpolation strategy.
The invention solves the difficulty of managing and maintaining multiple window layers, reduces the picture flicker caused by window resizing, makes the pictures join seamlessly in visual effect, reduces GPU and memory usage, improves software performance, and breaks through the rendering and memory bottlenecks of multi-channel video, especially nine channels and more.
The conception, specific structure and technical effects of the present invention are further described below with reference to the accompanying drawings, so that its objects, features and effects can be fully understood.
Drawings
FIG. 1 is a flow chart of off-screen rendering in a preferred embodiment of the present invention;
FIG. 2 is an overall flow chart of the digital zoom module in a preferred embodiment of the present invention;
FIG. 3 is a schematic diagram of digital zoom in a preferred embodiment of the present invention;
FIG. 4 is a flow chart of the attribute update of the box-selection tool in a preferred embodiment of the present invention;
FIG. 5 is a flow chart of the display of the box-selection tool in a preferred embodiment of the present invention;
FIG. 6 is a schematic diagram of data cropping in a preferred embodiment of the present invention.
Detailed Description
The technical content of the preferred embodiments of the present invention is described below with reference to the accompanying drawings, where it will be more clearly and easily understood. The present invention may be embodied in many different forms, and its scope of protection is not limited to the embodiments mentioned herein.
In the drawings, structurally identical elements are indicated by the same reference numerals, and elements that are structurally or functionally similar are indicated by similar reference numerals. The size and thickness of each element in the drawings are shown arbitrarily; the present invention does not limit them, and thicknesses are exaggerated in places for clarity.
As shown in fig. 1, the off-screen rendering flow of the invention is as follows (a Direct3D 9 sketch of the per-frame path follows the list):
Step 1, initialize the rendering module:
1.1) create one D3D object;
1.2) acquire the display device;
1.3) create one D3D device;
1.4) create an original video off-screen surface;
1.5) create a non-original video off-screen surface.
Step 2, prepare the data:
2.1) receive the YUV data of the original video;
2.2) check the size of the original video's YUV data against the size of the original video off-screen surface; if they differ, destroy the original video off-screen surface and go back to step 1.4;
2.3) lock the original video off-screen surface;
2.4) copy the original video's YUV data to the original video off-screen surface;
2.5) unlock the original video off-screen surface;
2.6) lock the non-original video off-screen surface;
2.7) convert the non-original video's YUV data and copy the converted data to the non-original video off-screen surface;
2.8) unlock the non-original video off-screen surface.
Step 3, render the pictures:
3.1) clear the D3D device;
3.2) begin the stage scene;
3.3) linearly stretch the original video off-screen surface to the video window size and copy it into the D3D device;
3.4) linearly stretch the non-original video off-screen surface to the size of the non-original video display area and copy it into the D3D device;
3.5) end the stage scene;
3.6) present the D3D device's stage scene, completing the rendering of the original and non-original video pictures.
Step 4, destroy the resources and exit.
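Steps 2 and 3 likewise correspond to standard Direct3D 9 calls: LockRect/UnlockRect implement the lock, copy and unlock of steps 2.3-2.8, and StretchRect with D3DTEXF_LINEAR implements the linear stretching of steps 3.3-3.4. The sketch below is a plausible per-frame path, not the patent's own code; it continues the globals from the initialization sketch above and assumes a contiguous YV12 source buffer. Pitch handling is the usual caveat, since the driver's surface pitch is normally wider than the visible width.

```cpp
// Sketch of the per-frame path (steps 2 and 3). Uses g_device, g_origSurf
// and g_zoomSurf from the initialization sketch.
#include <d3d9.h>
#include <cstring>

void CopyYV12ToSurface(IDirect3DSurface9* surf, const BYTE* yv12,
                       UINT w, UINT h) {
    D3DLOCKED_RECT lr = {};
    surf->LockRect(&lr, nullptr, 0);                          // 2.3 / 2.6
    BYTE*       dst = static_cast<BYTE*>(lr.pBits);
    const BYTE* src = yv12;
    for (UINT row = 0; row < h; ++row)                        // Y plane
        memcpy(dst + row * lr.Pitch, src + row * w, w);
    dst += h * lr.Pitch;
    src += h * w;
    for (UINT row = 0; row < h; ++row)                        // V then U planes,
        memcpy(dst + row * (lr.Pitch / 2),                    // half pitch and
               src + row * (w / 2), w / 2);                   // half width
    surf->UnlockRect();                                       // 2.5 / 2.8
}

void RenderFrame(const RECT& videoRect, const RECT& zoomRect) {
    g_device->Clear(0, nullptr, D3DCLEAR_TARGET,              // 3.1
                    D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
    g_device->BeginScene();                                   // 3.2

    IDirect3DSurface9* back = nullptr;
    g_device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &back);
    // 3.3 / 3.4: one linear stretch per picture onto the SAME back buffer.
    g_device->StretchRect(g_origSurf, nullptr, back, &videoRect, D3DTEXF_LINEAR);
    g_device->StretchRect(g_zoomSurf, nullptr, back, &zoomRect, D3DTEXF_LINEAR);
    back->Release();

    g_device->EndScene();                                     // 3.5
    g_device->Present(nullptr, nullptr, nullptr, nullptr);    // 3.6
}
```

Because both pictures land on the same back buffer of one device and are shown by a single Present call, they appear atomically. This is what removes the window layering and resize flicker described in the background, and why only one D3D device object has to be managed per canvas.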
The method may be applied in a digital zoom module; in that application, the non-original video is the digitally zoomed video.
As shown in fig. 2, the overall flow of the digital zoom module is:
1) The user turns on the digital zoom function by operating the OCX control.
2) The OCX passes the preview window handle and the original video window handle to the playing component through the TVSDK.
3) The user selects the video picture region to be magnified with the box-selection tool.
4) The playing component receives and decodes the stream, then crops and converts one frame of data into the YUV of the picture portion selected by the user.
5) The rendering module displays the YUV of the complete frame and the YUV generated after cropping separately, so that the user watches the original video and the zoomed video synchronously.
Fig. 3 is a schematic diagram of digital zoom. After digital zoom is started from the video control toolbar, the original video appears as a small window in the upper-right corner of the current picture, and a box-selection tool appears by default on that small window's video picture. By moving and stretching the selection box, the user frames the portion to be magnified; the selected portion is magnified to fill the original video window, and the magnified picture tracks the user-selected region in real time.
The box-selection tool exists mainly so that the user can choose the region to magnify: dragging any of the four corners stretches or scales the selection, and dragging any other part moves it.
As shown in fig. 4, the attribute update flow of the box-selection tool is (a Win32 sketch of the window-procedure takeover follows this list):
1) Take over the window procedure of the video preview window when the channel's rendering module is initialized.
2) Capture the user's mouse-down events in the window procedure and analyze the behavior.
3) If the drag position is at one of the four corners of the box-selection tool, stretch the rectangular region; otherwise, move it.
4) Restore the window procedure when digital zoom is stopped or the rendering module is released.
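A minimal Win32 sketch of this takeover is given below. HitsCorner, BeginStretch and BeginMove are hypothetical helpers standing in for the hit-testing and drag logic, which the patent does not spell out.

```cpp
// Sketch: subclass the preview window's procedure to capture mouse events
// for the box-selection tool (application steps 3.1-3.4).
#include <windows.h>
#include <windowsx.h>

static WNDPROC g_prevWndProc = nullptr;   // original procedure, saved for 3.4
static RECT    g_selRect;                 // current selection rectangle

// Hypothetical helpers, not named in the patent:
bool HitsCorner(const RECT& r, POINT pt);
void BeginStretch(POINT pt);
void BeginMove(POINT pt);

LRESULT CALLBACK BoxToolProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
    if (msg == WM_LBUTTONDOWN) {                              // 3.2) capture press
        POINT pt = { GET_X_LPARAM(lp), GET_Y_LPARAM(lp) };
        if (HitsCorner(g_selRect, pt))                        // 3.3) at a corner:
            BeginStretch(pt);                                 //      stretch
        else if (PtInRect(&g_selRect, pt))
            BeginMove(pt);                                    //      else: move
    }
    return CallWindowProc(g_prevWndProc, hwnd, msg, wp, lp);  // pass the rest on
}

void TakeOverWndProc(HWND preview) {                          // 3.1) on init
    g_prevWndProc = reinterpret_cast<WNDPROC>(SetWindowLongPtr(
        preview, GWLP_WNDPROC, reinterpret_cast<LONG_PTR>(BoxToolProc)));
}

void RestoreWndProc(HWND preview) {                           // 3.4) stop/release
    SetWindowLongPtr(preview, GWLP_WNDPROC,
                     reinterpret_cast<LONG_PTR>(g_prevWndProc));
}
```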
The drawing of the box-selection tool is implemented in the rendering module, while the digital zoom function is fully encapsulated in the playing component; the user only needs to pass the playing handle, which reduces the coupling between modules. On the one hand, this avoids routing the user's frequent drags of the rectangular region from the OCX through the TVSDK to the playing component; on the other hand, the drawing of the video picture and of the box-selection tool can be synchronized, avoiding the frequent flicker that occurs when the video picture paints over the tool.
Fig. 5 shows the display flow of the box-selection tool: GDI is used to draw the tool after the YUV picture is displayed.
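A sketch of that GDI pass might look as follows; the pen width and color are assumptions. The hollow stock brush keeps the rectangle's interior transparent, so the video underneath stays visible.

```cpp
// Sketch: draw the selection rectangle with GDI after the YUV picture has
// been presented, so the rectangle is painted on top of the video.
#include <windows.h>

void DrawSelectionBox(HWND hwnd, const RECT& sel) {
    HDC  dc  = GetDC(hwnd);
    HPEN pen = CreatePen(PS_SOLID, 2, RGB(255, 0, 0));        // assumed style
    HGDIOBJ oldPen   = SelectObject(dc, pen);
    HGDIOBJ oldBrush = SelectObject(dc, GetStockObject(NULL_BRUSH));
    Rectangle(dc, sel.left, sel.top, sel.right, sel.bottom);
    SelectObject(dc, oldBrush);
    SelectObject(dc, oldPen);
    DeleteObject(pen);
    ReleaseDC(hwnd, dc);
}
```

Calling this right after Present keeps the tool's drawing synchronized with the video picture, which is the anti-flicker point made above.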
Fig. 6 is a schematic diagram of data cropping. The stream in use is an H.264 stream; after decoding, each frame is stored in an AVFrame structure. When converting to YUV, the start and end positions in width and height are calculated from the user-selected region, and the AVFrame is selectively converted to generate YUV data for the selected region only.
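The patent gives no code for this cropping either; the sketch below shows one way to do it with FFmpeg's libswscale, assuming the decoder outputs AV_PIX_FMT_YUV420P and the selected rectangle has even coordinates (so the 4:2:0 chroma planes stay aligned). The crop itself is only pointer arithmetic on the AVFrame's planes; sws_scale then emits the selected region as a standalone YUV frame.

```cpp
// Sketch: convert only the user-selected region of a decoded AVFrame.
// x0, y0, w, h are the selected rectangle (all assumed even); dstData and
// dstLinesize describe a caller-allocated w x h YUV420P buffer, e.g. one
// obtained from av_image_alloc.
extern "C" {
#include <libswscale/swscale.h>
#include <libavutil/frame.h>
}

int ConvertSelectedRegion(const AVFrame* frm, int x0, int y0, int w, int h,
                          uint8_t* dstData[4], int dstLinesize[4]) {
    // Offset each plane pointer to the top-left corner of the selection;
    // the strides stay those of the full decoded frame.
    const uint8_t* srcData[4] = {
        frm->data[0] + y0 * frm->linesize[0] + x0,             // Y
        frm->data[1] + (y0 / 2) * frm->linesize[1] + x0 / 2,   // U
        frm->data[2] + (y0 / 2) * frm->linesize[2] + x0 / 2,   // V
        nullptr
    };
    struct SwsContext* ctx = sws_getContext(
        w, h, AV_PIX_FMT_YUV420P,        // treat the region as its own frame
        w, h, AV_PIX_FMT_YUV420P,        // same size out; D3D does the zoom
        SWS_BILINEAR, nullptr, nullptr, nullptr);
    if (!ctx) return -1;
    sws_scale(ctx, srcData, frm->linesize, 0, h, dstData, dstLinesize);
    sws_freeContext(ctx);
    return 0;
}
```

Note that the magnification itself does not happen here: the cropped region is copied to the non-original off-screen surface at its native size, and the linear stretch of step 3.4 performs the enlargement on the GPU.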
The preferred embodiments of the present invention have been described in detail above. It should be understood that numerous modifications and variations can be devised by those of ordinary skill in the art according to the concept of the present invention without creative effort. Therefore, any technical solution that a person skilled in the art can obtain from the prior art through logical analysis, reasoning or limited experimentation in accordance with the concept of the present invention shall fall within the scope of protection defined by the claims.

Claims (10)

1. A method for rendering multiple video channels on the same D3D canvas, comprising the following steps:
step 1, initialize the rendering module:
1.1) create one D3D object;
1.2) acquire the display device;
1.3) create one D3D device;
1.4) create an original video off-screen surface;
1.5) create a non-original video off-screen surface;
step 2, prepare the data:
2.1) receive the YUV data of the original video;
2.2) check the size of the original video's YUV data against the size of the original video off-screen surface; if they differ, destroy the original video off-screen surface and go back to step 1.4;
2.3) lock the original video off-screen surface;
2.4) copy the original video's YUV data to the original video off-screen surface;
2.5) unlock the original video off-screen surface;
2.6) lock the non-original video off-screen surface;
2.7) convert the non-original video's YUV data and copy the converted data to the non-original video off-screen surface;
2.8) unlock the non-original video off-screen surface;
step 3, render the pictures:
3.1) clear the D3D device;
3.2) begin the stage scene;
3.3) linearly stretch the original video off-screen surface to the video window size and copy it into the D3D device;
3.4) linearly stretch the non-original video off-screen surface to the size of the non-original video display area and copy it into the D3D device;
3.5) end the stage scene;
3.6) present the D3D device's stage scene, completing the rendering of the original and non-original video pictures;
step 4, destroy the resources and exit.
2. The method for rendering multiple video channels on the same D3D canvas according to claim 1, wherein the method is applied to a digital zoom function in which the non-original video is the digitally zoomed video.
3. The method for rendering multiple video channels on the same D3D canvas according to claim 2, wherein in step 2.7 the conversion processes the user-selected portion of the YUV data, according to the region selected by the user.
4. The method for rendering multiple video channels on the same D3D canvas according to claim 2, wherein the specific application steps are:
application step 1: the user turns on the digital zoom function by operating the OCX control;
application step 2: the OCX passes the preview window handle and the original video window handle to the playing component through the TVSDK;
application step 3: the user selects the video picture region to be magnified with the box-selection tool;
application step 4: the playing component receives and decodes the stream, then crops and converts one frame of data into the YUV of the picture portion selected by the user;
application step 5: the YUV of the complete frame and the YUV of the user-selected portion are rendered and displayed separately, using the method for rendering multiple video channels on the same D3D canvas.
5. The method for rendering multiple video channels on the same D3D canvas according to claim 4, wherein in application step 3 the drawing of the box-selection tool is implemented in the rendering module, the digital zoom function is encapsulated in the playing component, and only the playing handle needs to be passed when the tool is used.
6. The method for rendering multiple video channels on the same D3D canvas according to claim 4, wherein in application step 3 the attribute update process of the box-selection tool is:
application step 3.1: take over the window procedure of the video preview window when the channel's rendering module is initialized;
application step 3.2: capture the user's mouse-down events in the window procedure and analyze the behavior;
application step 3.3: if the drag position is at one of the four corners of the box-selection tool, stretch the rectangular region; otherwise, move it;
application step 3.4: restore the window procedure when digital zoom is stopped or the rendering module is released.
7. The method for rendering multiple video channels on the same D3D canvas according to claim 4, wherein in application step 3 the box-selection tool is drawn with GDI after the YUV picture is displayed.
8. The method for rendering multiple video channels on the same D3D canvas according to claim 4, wherein in application step 4 an H.264 stream is decoded.
9. The method for rendering multiple video channels on the same D3D canvas according to claim 8, wherein in application step 4 the decoded H.264 stream is stored in an AVFrame structure; when converting to YUV, the start and end positions in width and height are calculated from the user-selected region, and the AVFrame is selectively converted to generate YUV data for the selected region.
10. The method for rendering multiple video channels on the same D3D canvas according to claim 1, wherein in step 3.3 the linear stretching stretches the canvas to the window size according to a linear interpolation strategy.
CN202010782147.6A (priority 2020-08-06, filed 2020-08-06) Method for rendering multiple video channels on the same D3D canvas. Status: Active. Granted publication: CN111787397B (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010782147.6A (granted as CN111787397B) | 2020-08-06 | 2020-08-06 | Method for rendering multiple video channels on the same D3D canvas

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010782147.6A (granted as CN111787397B) | 2020-08-06 | 2020-08-06 | Method for rendering multiple video channels on the same D3D canvas

Publications (2)

Publication Number | Publication Date
CN111787397A | 2020-10-16
CN111787397B | 2023-04-07

Family

ID=72765906

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010782147.6A (Active; granted as CN111787397B) | Method for rendering multiple video channels on the same D3D canvas | 2020-08-06 | 2020-08-06

Country Status (1)

Country Link
CN (1) CN111787397B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050151885A1 * 2003-12-08 2005-07-14 LG Electronics Inc. Method of scaling partial area of main picture
CN111355998A (en) * 2019-07-23 2020-06-30 杭州海康威视数字技术股份有限公司 Video processing method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
雷霄骅 (Lei Xiaohua): "The Simplest Video/Audio Playback Example 3: Direct3D Playing YUV, RGB (via Surface)", CSDN *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114222166A (en) * 2021-09-29 2022-03-22 重庆创通联达智能技术有限公司 Multi-path video code stream real-time processing and on-screen playing method and related system
CN114222166B (en) * 2021-09-29 2024-02-13 重庆创通联达智能技术有限公司 Multi-channel video code stream real-time processing and on-screen playing method and related system

Also Published As

Publication number Publication date
CN111787397B (en) 2023-04-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant