CN111935419A - Video overlapping method and device adopting same - Google Patents

Video overlapping method and device adopting same

Info

Publication number
CN111935419A
Authority
CN
China
Prior art keywords
video
image
surface layer
playing
layer image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010838937.1A
Other languages
Chinese (zh)
Inventor
周宸臣
杨帮廷
黄杨友
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Shenchen Technology Development Co ltd
Original Assignee
Wuhan Shenchen Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Shenchen Technology Development Co ltd filed Critical Wuhan Shenchen Technology Development Co ltd
Priority to CN202010838937.1A priority Critical patent/CN111935419A/en
Publication of CN111935419A publication Critical patent/CN111935419A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Circuits (AREA)

Abstract

The invention discloses a video superposition method and a device adopting the method. The method comprises the steps that a video mixer acquires an auxiliary video as a set of surface layer images; once the playing time period begins, the video mixer intercepts the main video output by the video output device frame by frame as bottom layer images; and the video mixer fuses each frame of bottom layer image with the surface layer image of the corresponding time period into a pre-playing image and sends it to a display for playing. The advantage of the invention is that, by providing a video mixer, when an auxiliary video needs to be superposed on a main video, the mixer intercepts the main video being played frame by frame during the corresponding time period, superposes the auxiliary video frame by frame, and sends each fused image to the display for playing immediately.

Description

Video overlapping method and device adopting same
Technical Field
The invention relates to the field of image fusion, in particular to a video superposition method and a device adopting the method.
Background
Image superposition refers to superimposing and combining at least two images while adjusting the transparency of each, so that they are fused into one image in which every source image remains visible.
Video superposition means that two videos are each divided into frame images, the frames of the two videos are put into one-to-one correspondence, each pair of corresponding frames is fused, and the fused images, played in the original order, form the superposed video.
Video superposition is widely applied. For example, the station logo shown by existing television stations is a picture superposed on the video, although the logo picture does not change. Adding subtitles to a recorded video is another application. In particular, after a variety show is recorded, short comic clips that do not affect the main video may be added to it.
However, existing video superposition technology superposes the auxiliary video onto the main video only after the main video is finished; the auxiliary video cannot be superposed while the main video is still being generated. For example, in a game hall, an advertisement could be inserted while a user's game is loading or when the game ends, so that the advertisement reaches the user without disturbing play. Likewise, in a game live stream, dynamic video could be added to the game picture to attract the audience.
Disclosure of Invention
An object of the present invention is to solve at least the above problems and to provide at least the advantages described later.
The invention aims to provide a video superposition method and a device adopting the method, so as to solve the problem that video superposition cannot be performed in real time.
To achieve these objects and other advantages in accordance with the present invention, there are provided:
a method of video overlay, comprising:
the method comprises the steps that a video mixer obtains an auxiliary video, wherein the auxiliary video comprises a plurality of surface layer images, each surface layer image corresponds to a playing time period, and the playing time periods of the surface layer images are continuous;
the video mixer intercepts the main video output by the video output device from the start of the playing time period corresponding to a surface layer image until that playing time period ends; the video mixer acquires the main video frame by frame, each frame being defined as a bottom layer image whose playing time point falls within the playing time period corresponding to the surface layer image;
after the video mixer intercepts the first frame of bottom layer image, an image processor in the video mixer fuses it with the corresponding surface layer image into one image, defined as a pre-playing image, and the video mixer sends the pre-playing image to a display for playing;
after the video mixer intercepts the second frame of bottom layer image, the image processor fuses it with the corresponding surface layer image into a pre-playing image and sends it to the display for playing;
and so on: within the playing time period, whenever the video mixer intercepts a frame of bottom layer image, it fuses that bottom layer image with the corresponding surface layer image into a pre-playing image and sends it to the display for playing.
In one possible design, the method by which the video mixer fuses a bottom layer image and a corresponding surface layer image into one image comprises:
acquiring a bottom layer image and a surface layer image, wherein the size of the bottom layer image is consistent with that of the surface layer image;
processing the surface layer image by setting the transparency of the designated color portion to 0 and applying the set transparency to the remaining, non-designated color portions;
and putting each pixel point on the bottom layer image into one-to-one correspondence with each pixel point on the surface layer image, then fusing them pixel by pixel with a set fusion algorithm, so that the two images are fused into one image.
In one possible design, a fusion algorithm used when fusing pixel points on the base image and the surface image is as follows:
corresponding pixel points are fused one by one: take any pixel point on the surface layer image, let its pixel value be M and its transparency be P, and let the pixel value of the corresponding pixel point on the bottom layer image be N; the fusion algorithm for the two pixel points is (M × P + N × (255 − P)) / 255.
In one possible design, the video mixer is connected to a remote server via a network, and the server sends the secondary video to the video mixer.
In one possible design, the video mixer periodically sends the current pre-playing image to the server, and the server monitors the superposition effect.
In one possible design, after converting the main video into bottom layer images, the video mixer further converts the format of the bottom layer images into YUV422 format.
In one possible design, the video mixer continuously acquires the main video sent by the video output device and sends the main video to the server.
In one possible design, before transmitting the surface layer images, the server determines the size of the images in the video it receives in real time, so that the surface layer images it transmits match the size of the video images.
A video mixing apparatus, comprising:
the field video input port is used for receiving the video output by the video output equipment;
the network interface is used for interacting data with the server;
the picture generator is used for converting the received main video into a bottom picture and converting the received network data into a surface picture;
the image processor is used for receiving the bottom layer picture and the surface layer picture sent by the picture generator and fusing the corresponding bottom layer picture and the corresponding surface layer picture into a picture;
and the on-site video output port is used for receiving the pictures processed by the image processor and sending the pictures to the display for playing.
The device realizes video superposition: the field video input port receives the main video, the network interface receives the auxiliary video, and the auxiliary video is superposed onto the main video according to the command sent along with it.
In one possible design, the apparatus further includes a video processor that encodes the video and transmits the encoded video to a remote server; encoding the video facilitates network transmission.
The invention at least includes the following beneficial effects: (1) a video mixer is provided; when an auxiliary video needs to be superposed on the main video, the mixer intercepts the main video being played frame by frame during the corresponding time period, superposes the auxiliary video frame by frame, and sends each fused image to the display for playing immediately;
(2) the superposed images can be sent to a server for checking; when the superposition effect is poor, it can be adjusted to guarantee the playing effect;
(3) the video mixer also sends the main video it receives in real time to the server; first, this lets the server trigger transmission of the auxiliary video, sending it once the set characters appear in the main video; second, the server can monitor the main video and, if a problem occurs, alert a third party, namely the main video provider or the video output equipment manager; third, before sending the auxiliary video, the server obtains the size of the main video and sends a matching auxiliary video.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention.
Drawings
FIG. 1 is a flow chart of video overlay;
FIG. 2 is a flow chart of image overlay;
fig. 3 is a block diagram of the present invention.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto. Specific structural and functional details disclosed herein are merely illustrative of example embodiments of the invention. This invention may, however, be embodied in many alternate forms and should not be construed as limited to the embodiments set forth herein.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments of the present invention.
It should be understood that the term "and/or", as it may appear herein, merely describes an association between objects, meaning that three relationships may exist; e.g., A and/or B may mean: A exists alone, B exists alone, or A and B exist at the same time. The term "/and", as it may appear herein, describes another association, meaning that two relationships may exist; e.g., A/and B may mean: A exists alone, or A and B both exist. In addition, the character "/", as it may appear herein, generally means that the associated objects before and after it are in an "or" relationship.
It will be understood that when an element is referred to herein as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Conversely, if an element is referred to herein as being "directly connected" or "directly coupled" to another element, no intervening elements are present. Other words used to describe the relationship between elements should be interpreted in a similar manner (e.g., "between" versus "directly between", "adjacent" versus "directly adjacent", etc.).
It is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative designs, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed substantially concurrently, or the figures may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
It should be understood that specific details are provided in the following description to facilitate a thorough understanding of example embodiments. However, it will be understood by those of ordinary skill in the art that the example embodiments may be practiced without these specific details. For example, systems may be shown in block diagrams in order not to obscure the examples in unnecessary detail. In other instances, well-known processes, structures and techniques may be shown without unnecessary detail in order to avoid obscuring example embodiments.
In a first aspect, a method of video overlay, comprising:
s101, a video mixer acquires an auxiliary video, wherein the auxiliary video comprises a plurality of surface layer images, each surface layer image corresponds to a playing time period, and the playing time periods of the surface layer images are in a continuous state;
s102, a video mixer intercepts a main video output by video output equipment from a playing time period corresponding to a certain surface layer image until the playing time period is finished, the video mixer acquires images of one frame in the main video and defines the images as bottom layer images, and the playing time point of the bottom layer images is in the playing time period corresponding to the surface layer images;
s103, after the video mixer intercepts the first frame of bottom layer image, an image processor in the video mixer fuses the first frame of bottom layer image and the corresponding surface layer image into an image which is defined as a pre-playing image, and the video mixer sends the pre-playing image to a display for playing;
s104, after the video mixer intercepts the second frame of bottom layer image, an image processor in the video mixer fuses the second frame of bottom layer image and the corresponding surface layer image into a pre-playing image and sends the pre-playing image to a display for playing;
and S105, repeating the steps, and fusing the bottom layer image and the corresponding surface layer image into a pre-playing image by the video mixer when the video mixer intercepts one frame of bottom layer image in the playing time period, and sending the pre-playing image to the display for playing.
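Steps S101-S105 can be sketched as a simple per-frame loop. This is an illustrative outline only, assuming hypothetical `capture_frame`, `blend` and `display` interfaces that the patent does not name:

```python
from typing import Callable

def overlay_period(capture_frame: Callable[[], str],
                   surface_image: str,
                   frames_in_period: int,
                   blend: Callable[[str, str], str],
                   display: Callable[[str], None]) -> None:
    """Sketch of steps S101-S105: for every frame captured during the
    playing time period, fuse it with the surface layer image and send
    the result straight to the display. All callables are assumed
    interfaces, not part of the patent text."""
    for _ in range(frames_in_period):
        bottom = capture_frame()                  # one frame of the main video
        pre_play = blend(surface_image, bottom)   # the pre-playing image
        display(pre_play)                         # play immediately, frame by frame
```

The key property of the loop is that each fused frame is emitted as soon as it is produced, which is what keeps the end-to-end delay low.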
The invention discloses a video superposition method in which a video is decomposed into images and video superposition is realized through image superposition. As is well known, a video is a sequence of frames: one frame is a static image, and one second of video generally comprises dozens of frames. A movie, for example, generally has 24 frames per second, i.e. 24 images are played in one second, and because of the continuity of the images the result looks very smooth to the human eye.
Ordinarily the video output device is directly connected to the display, and the display directly plays the video it outputs. When advertisements or marks need to be added to the video, a video mixer is provided, with its input connected to the video output device and its output connected to the display.
In general, the video mixer passes the video output by the video output device straight through to the display for playing. When the video mixer receives an auxiliary video, the auxiliary video may consist of a single picture, in which case only the start and end times of playing need to be set; it may also be a video, in which case the surface layer images are a sequence of frames and the start and end times of playing likewise need to be set.
After the video mixer obtains the auxiliary video, it intercepts the video output by the video output device frame by frame starting from the start time of playing; as soon as a frame of bottom layer image is intercepted and fused with the corresponding surface layer image, the video mixer plays it on the display. A certain time is needed from interception and processing to playing, so there is some playing delay; the processing delay of the method adopted by the invention is less than 17 ms, which is very low and has essentially no effect.
In one possible design, the method for fusing the base layer image and the corresponding surface layer image into one image by the video mixer includes:
s201, obtaining a bottom layer image and a surface layer image, wherein the size of the bottom layer image is consistent with that of the surface layer image;
s202, processing the surface layer image, setting the transparency of the appointed color part in the surface layer image to be 0, and adopting the set transparency for other non-appointed color parts;
and S203, enabling each pixel point on the bottom layer image to correspond to each pixel point on the surface layer image one by one, and fusing the bottom layer image and the surface layer image one by adopting a set fusion algorithm to enable the bottom layer image and the surface layer image to be fused into an image.
In step S202, processing the surface layer image means that when the surface layer image is received, a designated-color command is also received: the designated color portion of the surface layer image is keyed out, its transparency becoming 0. A transparency command is also received to adjust the non-designated color portions to the corresponding transparency. Transparency is expressed as an alpha value, a prior-art term; alpha values in an image typically range from 0 to 255. An alpha value of 0 means the pixel is completely transparent; an alpha value of 255 means the pixel is completely opaque.
Step S201 states that the bottom layer image and the surface layer image should be the same size, so that their pixel points correspond one to one; a pixel point is the basic unit of an image, and an image is composed of many pixel points.
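Step S202 amounts to building a per-pixel alpha mask from the designated color. A minimal NumPy sketch under assumed interfaces (the patent does not give the command format; `make_alpha` and its parameters are illustrative):

```python
import numpy as np

def make_alpha(surface: np.ndarray, keyed_color, set_alpha: int) -> np.ndarray:
    """Pixels matching the designated (keyed) color get alpha 0, i.e.
    fully transparent; every other pixel gets the configured alpha.

    surface: uint8 array of shape (H, W, 3); keyed_color: RGB triple
    from the designated-color command; set_alpha: the commanded
    transparency for non-designated pixels, 0-255."""
    mask = np.all(surface == np.asarray(keyed_color, dtype=np.uint8), axis=-1)
    alpha = np.full(surface.shape[:2], set_alpha, dtype=np.uint8)
    alpha[mask] = 0                      # keyed color becomes invisible
    return alpha[..., np.newaxis]        # (H, W, 1), ready to broadcast
```

The resulting mask is then consumed by the per-pixel fusion of step S203.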
In one possible design, a fusion algorithm used when fusing pixel points on the base image and the surface image is as follows:
Corresponding pixel points are fused one by one: take any pixel point on the surface layer image, let its pixel value be M and its transparency be P, and let the pixel value of the corresponding pixel point on the bottom layer image be N; the fusion algorithm for the two pixel points is (M × P + N × (255 − P)) / 255. The transparency P is represented by one byte, which can represent the range 0-255, i.e. the range of P is 0-255. The formula decomposes as M × P/255 + N × (255 − P)/255. When P is 255, the surface layer image is completely opaque; substituting gives M × 255/255 + N × (255 − 255)/255 = M, so the fused pixel is the surface layer pixel. When P is 0, the surface layer image is completely transparent; substituting gives M × 0/255 + N × (255 − 0)/255 = N, so the fused pixel is the bottom layer pixel. When P takes other values, fused images with different degrees of transparency are obtained.
First, the pixel values M of all pixel points in the surface layer image are known, the transparency P is determined in step S202, and the pixel values N of all pixel points in the bottom layer image are also known, so each fused pixel combines the two. Of course, when P is 0, M × P is also 0, so the bottom layer image is displayed completely; when P is not 0, the fused pixel contains both the surface layer pixel and the bottom layer pixel, and as the transparency of the surface layer pixel changes, the visibility of the bottom layer pixel changes accordingly. In this way the relative transparency of the surface and bottom layer images can be adjusted.
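The fusion rule above can be expressed compactly with NumPy. This is an illustrative sketch (the function name and array layout are assumptions; the patent performs the operation in a hardware image processor, and integer division stands in for the real-valued formula):

```python
import numpy as np

def blend(surface: np.ndarray, bottom: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Fuse a surface layer image over a bottom layer image using the
    patent's per-pixel rule (M*P + N*(255 - P)) / 255, where M is the
    surface pixel value, N the bottom pixel value, and P the alpha of
    the surface pixel, all in 0-255.

    surface, bottom: uint8 arrays of identical shape (H, W, 3).
    alpha: uint8 array of shape (H, W, 1); broadcasts over channels.
    """
    # Widen to uint32 so M*P + N*(255-P) cannot overflow before dividing.
    m = surface.astype(np.uint32)
    n = bottom.astype(np.uint32)
    p = alpha.astype(np.uint32)
    return ((m * p + n * (255 - p)) // 255).astype(np.uint8)
```

Note the widening cast: the sum M × P + N × (255 − P) can reach 130050, which overflows 16-bit arithmetic, so the intermediate values must be at least 32-bit.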
In one possible design, the video mixer is connected to a remote server via a network, and the server sends the secondary video to the video mixer.
The invention sends the auxiliary video through a remote server: if the server decides to advertise during a period of time, it sends the auxiliary video to the video mixer, the two communicating over a wireless or wired network. Of course, when sending the auxiliary video, the server must also specify the playing time period to be inserted.
In one possible design, the video mixer sends the current pre-play image to the server at intervals, and the server monitors the superimposition effect.
In the invention, the video mixer sends a pre-playing image to the server once per second, and the server confirms the video superposition effect. If the superposition effect is not good, the server can send a stop command to the video mixer, and the video mixer stops fusing after receiving it.
In one possible design, the video mixer converts the format of the underlying image into YUV422 format after converting the main video into the underlying image.
The image format obtained after video processing is generally RGB888 (one pixel in RGB888 format occupies 3 bytes of storage). The RGB888 format is converted to YUV422 (one pixel in YUV422 format occupies 2 bytes) by hardware; the conversion reduces the storage space of the image by 1/3, which speeds up processing and removes part of the delay. The conversion is handled by a dedicated NXP i.MX6 processor, and its processing delay is less than 2 ms.
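The 1/3 saving follows directly from the bytes-per-pixel figures. A quick check, using 1080p as an illustrative resolution (the patent does not specify one):

```python
def frame_bytes(width: int, height: int, bytes_per_pixel: int) -> int:
    """Storage required for one uncompressed frame."""
    return width * height * bytes_per_pixel

# RGB888 stores 3 bytes per pixel; YUV422 stores 2 bytes per pixel.
rgb = frame_bytes(1920, 1080, 3)   # 6,220,800 bytes
yuv = frame_bytes(1920, 1080, 2)   # 4,147,200 bytes
saving = 1 - yuv / rgb             # fraction of storage removed
```

Every frame therefore moves and stores one third less data, which is where the reduction in processing delay comes from.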
In one possible design, the video mixer continuously acquires the main video sent by the video output device and sends the main video to the server.
Thus, in ordinary operation, after the video mixer receives the video output by the video output device, one path is sent, processed or unprocessed, to the display, and another path is sent to the server, so that the server can monitor the video in real time.
One use of real-time monitoring: in actual use the video mixer is installed on a game machine. When a user plays a game and the "game over" or "game loading" characters appear, i.e. chiefly in the intervals of the game, the server sends an advertisement-bearing auxiliary video to the video mixer, and the user watches the advertisement while waiting.
Of course, real-time monitoring has other uses; for example, before sending the surface layer images, the server should measure the size of the images in the video it receives in real time, so that the surface layer images it sends match the size of the video images.
In a second aspect, a video mixing apparatus includes:
the field video input port is used for receiving the video output by the video output equipment;
the network interface is used for interacting data with the server;
the picture generator is used for converting the received main video into bottom layer pictures and converting the received network data into surface layer pictures; the network messages carry picture data, the carried pictures may be in formats such as PNG and JPG, and the corresponding picture can be read out through library functions supplied with an existing graphical-user-interface application development framework (Qt) development kit;
the image processor is used for receiving the bottom layer picture and the surface layer picture sent by the picture generator and fusing the corresponding bottom layer picture and the corresponding surface layer picture into a picture;
and the on-site video output port is used for receiving the pictures processed by the image processor and sending the pictures to the display for playing.
In the invention the video mixer is the video mixing apparatus proposed in the second aspect. The field video input port is connected to the video output device and receives the video it outputs; it can receive video in formats such as VGA, DVI and HDMI. Specifically, the field video input port comprises a VGA-DVI converter, a DVI switch and a DVI-HDMI converter: the VGA-DVI converter receives VGA-format video and converts it to DVI format; the DVI switch receives DVI-format video, with the output of the VGA-DVI converter connected to it; the output of the DVI switch is connected to the DVI-HDMI converter, whose output is connected to a processing chip comprising the picture generator and the image processor.
The network interface is connected with the processing chip, and through it the processing chip is connected to an external server for receiving the data the server transmits. The network interface is an optical-fiber socket or a wireless network card, so the video mixing apparatus connects to the server through a wired or wireless network.
The field video output port, through which the video mixing apparatus is connected to a display, can output video in formats such as VGA, DVI and HDMI; specifically, the field video output port comprises a DVI switch and a DVI-VGA converter.
In one possible design, the apparatus further includes a video processor that encodes the video and transmits the encoded video to a remote server. The video processor encodes in H.264 format and may adopt, but is not limited to, an NXP i.MX6 main chip.
While embodiments of the invention have been described above, the invention is not limited to the applications set forth in the description and the embodiments; it is fully applicable in various fields of endeavor to which it pertains, and further modifications may readily be made by those skilled in the art. The invention is therefore not limited to the details shown and described herein, provided they do not depart from the general concept defined by the appended claims and their equivalents.

Claims (10)

1. A method of video overlay, comprising:
the method comprises the steps that a video mixer obtains an auxiliary video, wherein the auxiliary video comprises a plurality of surface layer images, each surface layer image corresponds to a playing time period, and the playing time periods of the surface layer images are in a continuous state;
the video mixer intercepts the main video output by the video output device from the start of the playing time period corresponding to a surface layer image until that playing time period ends; the video mixer acquires the main video frame by frame, each frame being defined as a bottom layer image whose playing time point falls within the playing time period corresponding to the surface layer image;
after the video mixer intercepts a first frame of bottom layer image, an image processor in the video mixer fuses the first frame of bottom layer image and a corresponding surface layer image into an image defined as a pre-playing image, and the video mixer sends the pre-playing image to a display for playing;
after the video mixer intercepts the second frame of bottom layer image, an image processor in the video mixer fuses the second frame of bottom layer image and the corresponding surface layer image into a pre-playing image and sends the pre-playing image to a display for playing;
in the playing time period, each frame of bottom layer image is captured by the video mixer, fused with the corresponding surface layer image to form a pre-playing image, and sent to the display for playing.
2. The method of claim 1, wherein the method by which the video mixer fuses a bottom layer image and a corresponding surface layer image into one image comprises:
obtaining a bottom layer image and a surface layer image, wherein the size of the bottom layer image is consistent with that of the surface layer image;
processing the surface layer image, setting the transparency of the designated color part in the surface layer image to be 0, and adopting the set transparency for the non-designated other color parts;
and enabling each pixel point on the bottom layer image to correspond to each pixel point on the surface layer image one by one, and fusing the bottom layer image and the surface layer image one by adopting a set fusion algorithm so as to fuse the bottom layer image and the surface layer image into an image.
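The transparency step in claim 2 amounts to a chroma key: pixels of one designated color receive weight 0 (so the main video shows through), and every other pixel receives a set weight. A minimal NumPy sketch, where the key color and fill value are illustrative choices, not values from the patent:

```python
import numpy as np

def make_alpha(surface_rgb: np.ndarray,
               key_color=(0, 255, 0),   # hypothetical designated "transparent" color
               fill_alpha: int = 255) -> np.ndarray:
    """Per claim 2: designated-color pixels get transparency (blend weight) 0,
    all other pixels get the set transparency value."""
    mask = np.all(surface_rgb == np.array(key_color, dtype=np.uint8), axis=-1)
    alpha = np.full(surface_rgb.shape[:2], fill_alpha, dtype=np.uint8)
    alpha[mask] = 0                      # key-colored pixels vanish in the fusion
    return alpha
```

Under claim 3's formula, a weight of 0 means the bottom layer pixel passes through unchanged, which is what makes the designated color act as a transparent background.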
3. The method of claim 1, wherein the fusion algorithm applied when fusing pixels of the bottom layer image and the surface layer image is:
fusing corresponding pixels one by one: for any pixel of the surface layer image, let its pixel value be M and its transparency be P, and let the pixel value of the corresponding bottom layer pixel be N; the fused value of the two pixels is then (M × P + N × (255 − P)) / 255.
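This per-pixel rule is standard alpha compositing with an 8-bit weight. A minimal vectorized sketch, assuming 8-bit single-channel arrays (array names are illustrative):

```python
import numpy as np

def fuse(surface: np.ndarray, alpha: np.ndarray, base: np.ndarray) -> np.ndarray:
    """Blend surface (M) over base (N) with per-pixel weight P in [0, 255],
    following the claim's rule: fused = (M*P + N*(255 - P)) / 255."""
    m = surface.astype(np.uint16)        # widen so m*p (max 255*255) cannot overflow
    n = base.astype(np.uint16)
    p = alpha.astype(np.uint16)
    return ((m * p + n * (255 - p)) // 255).astype(np.uint8)
```

Where P is 255 the surface pixel replaces the base pixel; where P is 0 the base pixel passes through untouched, which is how the chroma-keyed background of claim 2 disappears.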
4. The method of claim 1, wherein the video mixer is connected to a remote server via a network, the server sending the auxiliary video to the video mixer.
5. The method of claim 4, wherein the video mixer periodically sends the current pre-playing image to the server, and the server monitors the overlay effect.
6. The method of claim 1, wherein, after converting the main video into bottom layer images, the video mixer further converts the bottom layer images to YUV422 format.
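Claim 6's conversion can be sketched as below. The BT.601 full-range matrix is an assumption on my part; the patent names only the target format (YUV 4:2:2, i.e. chroma subsampled by two horizontally), not the conversion coefficients:

```python
import numpy as np

def rgb_to_yuv422(rgb: np.ndarray):
    """Convert an 8-bit RGB frame (H, W, 3), W even, to planar YUV 4:2:2.
    Uses BT.601 full-range coefficients (an assumption, not from the patent)."""
    r, g, b = (rgb[..., i].astype(np.float32) for i in range(3))
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128.0
    # 4:2:2 subsampling: average chroma over each horizontal pixel pair
    u = u.reshape(u.shape[0], -1, 2).mean(axis=2)
    v = v.reshape(v.shape[0], -1, 2).mean(axis=2)
    to_u8 = lambda a: np.clip(a + 0.5, 0, 255).astype(np.uint8)  # round and clamp
    return to_u8(y), to_u8(u), to_u8(v)
```

The luma plane keeps full resolution while each chroma plane is half-width, halving chroma bandwidth relative to 4:4:4.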
7. The method of claim 4, wherein the video mixer continuously obtains the main video from the video output device and sends the main video to the server.
8. The method of claim 7, wherein, before transmitting a surface layer image, the server obtains the image size of the video received in real time, so that the transmitted surface layer image matches the size of the video images.
9. A video mixing apparatus, comprising:
a field video input port for receiving the video output by a video output device;
a network interface for exchanging data with a server;
a picture generator for converting the received main video into bottom layer pictures and converting the received network data into surface layer pictures;
an image processor for receiving the bottom layer pictures and surface layer pictures sent by the picture generator and fusing each corresponding pair into a single picture;
and a field video output port for receiving the pictures processed by the image processor and sending them to a display for playing.
10. The apparatus of claim 9, further comprising a video processor for encoding the video for transmission to a remote server.
CN202010838937.1A 2020-08-19 2020-08-19 Video overlapping method and device adopting same Pending CN111935419A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010838937.1A CN111935419A (en) 2020-08-19 2020-08-19 Video overlapping method and device adopting same


Publications (1)

Publication Number Publication Date
CN111935419A true CN111935419A (en) 2020-11-13

Family

ID=73306139

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010838937.1A Pending CN111935419A (en) 2020-08-19 2020-08-19 Video overlapping method and device adopting same

Country Status (1)

Country Link
CN (1) CN111935419A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105791889A (en) * 2016-05-04 2016-07-20 武汉斗鱼网络科技有限公司 Advertisement inter-cut method for video live broadcasting and advertisement inter-cut device for video live broadcasting
CN106911936A (en) * 2017-03-01 2017-06-30 北京牡丹电子集团有限责任公司数字电视技术中心 Dynamic video flowing picture covering method
US20170289643A1 (en) * 2016-03-31 2017-10-05 Valeria Kachkova Method of displaying advertising during a video pause
CN108989883A (en) * 2018-07-06 2018-12-11 武汉斗鱼网络科技有限公司 A kind of living broadcast advertisement method, apparatus, equipment and medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhao Xiaochuan: "MATLAB Image Processing: Capability Improvement and Application Cases", 31 January 2014 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201113