CN106454256A - Real-time splicing method and apparatus of multiple videos - Google Patents
- Publication number
- CN106454256A (application CN201610955546.1A)
- Authority
- CN
- China
- Prior art keywords
- video
- present image
- data
- video present
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
The present invention belongs to the technical field of information processing, and particularly relates to a real-time splicing method and apparatus for multiple videos. The splicing method comprises the following steps: receiving video information comprising at least first video data and second video data; storing the first video data and the second video data in a corresponding first buffer area and second buffer area respectively; reading the first video data from the first buffer area to obtain decoded data comprising current image information of a first video; reading the second video data from the second buffer area to obtain decoded data comprising current image information of a second video; and splicing the current image information of the first video with that of the second video to obtain current spliced-image information of the two current images. The method can splice multiple videos in real time, and the splicing process is simple, efficient, and flexible.
Description
Technical field
The present invention belongs to the technical field of information processing, and in particular relates to a real-time splicing method and apparatus for multiple videos.
Background technology
As networks become ever more widespread, much of people's social and entertainment activity takes place as information exchange over the network. Video chat and live video streaming, and especially techniques that display multiple video channels simultaneously, greatly facilitate communication between people. In addition, techniques that display multiple surveillance video channels at the same time are increasingly popular.
Whether for video chat or for multi-channel video surveillance, users not only demand that multiple videos be displayed simultaneously and that each picture remain clear; they also place very high requirements on the real-time performance of the simultaneous display. Current video-splicing techniques suffer from complicated workflows, low efficiency, and poor flexibility, which largely limit improvements in real-time performance.
Summary of the invention
To solve the above technical problems, the present invention provides a real-time splicing method and apparatus for multiple videos. The method has a simple workflow, high efficiency, and great flexibility, and can substantially improve the real-time performance of video splicing.
In one aspect, the present invention provides a real-time splicing method for multiple videos, comprising the following steps:
receiving video information comprising at least first video data and second video data, wherein the first video data comprises first-video current-image information and the second video data comprises second-video current-image information;
storing the first video data and the second video data in a corresponding first buffer area and second buffer area respectively;
reading the first video data from the first buffer area and decoding it to obtain decoded data comprising the first-video current-image information; reading the second video data from the second buffer area and decoding it to obtain decoded data comprising the second-video current-image information;
splicing the first-video current-image information with the second-video current-image information to obtain current spliced-image information of the first-video current image and the second-video current image.
Preferably, in the real-time splicing method for multiple videos, the first-video current-image information comprises pixel YUV data information of the first-video current image, and the second-video current-image information comprises pixel YUV data information of the second-video current image. The current spliced-image information comprises merged pixel YUV data information combining the pixel YUV data information of the two current images. The pixel YUV data information comprises luminance data and chrominance data of the pixels.
Further preferably, in the real-time splicing method, the merged pixel YUV data information is horizontally merged pixel YUV data information, and obtaining it comprises:
obtaining, from the pixel YUV data information of the first-video current image, first luminance data and first chrominance data for each row of pixels, and obtaining, from the pixel YUV data information of the second-video current image, second luminance data and second chrominance data for each row of pixels;
horizontally merging the first luminance data of each row of pixels of the first-video current image with the second luminance data of the corresponding row of pixels of the second-video current image, to obtain horizontally merged pixel luminance data;
horizontally merging the first chrominance data of each row of pixels of the first-video current image with the second chrominance data of the corresponding row of pixels of the second-video current image, to obtain horizontally merged pixel chrominance data.
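As a minimal sketch (not part of the claimed method; the frame representation and function name are illustrative assumptions), the row-wise horizontal merge above can be expressed by treating each plane as a list of pixel rows and concatenating corresponding rows:

```python
def merge_rows_horizontal(rows_a, rows_b):
    """Append each row of frame B to the corresponding row of frame A.

    rows_a / rows_b: per-row lists of samples from the same plane
    (luminance Y, or chrominance U/V) of two equally tall frames.
    """
    if len(rows_a) != len(rows_b):
        raise ValueError("frames must have the same number of rows")
    return [ra + rb for ra, rb in zip(rows_a, rows_b)]

# Two 2-row luminance planes become one plane twice as wide.
y_first = [[16, 32, 48], [64, 80, 96]]
y_second = [[100, 110, 120], [130, 140, 150]]
y_merged = merge_rows_horizontal(y_first, y_second)
```

The same helper would be applied once to the luminance plane and once to each chrominance plane, matching the two merging steps described above.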
Preferably, the merged pixel YUV data information is vertically merged pixel YUV data information, and obtaining it comprises:
obtaining, from the pixel YUV data information of the first-video current image, first luminance data and first chrominance data for all of its pixels, and obtaining, from the pixel YUV data information of the second-video current image, second luminance data and second chrominance data for all of its pixels;
vertically merging the first luminance data of all pixels of the first-video current image with the second luminance data of all pixels of the second-video current image, to obtain vertically merged pixel luminance data;
vertically merging the first chrominance data of all pixels of the first-video current image with the second chrominance data of all pixels of the second-video current image, to obtain vertically merged pixel chrominance data.
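Under the same illustrative frame representation as above (an assumption, not the disclosed implementation), the vertical merge amounts to stacking one frame's rows below the other's:

```python
def merge_planes_vertical(plane_a, plane_b):
    """Stack frame B's rows below frame A's rows.

    Both planes must have the same row width for the result to be a
    rectangular image.
    """
    if plane_a and plane_b and len(plane_a[0]) != len(plane_b[0]):
        raise ValueError("frames must have the same width")
    return plane_a + plane_b

# A 1-row plane stacked on another 1-row plane gives a 2-row plane.
stacked = merge_planes_vertical([[10, 20, 30]], [[40, 50, 60]])
```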
Preferably, the first-video current-image information comprises the geometric size of the first-video current image, and the second-video current-image information comprises the geometric size of the second-video current image. Before splicing the first-video current-image information with the second-video current-image information, the method further comprises:
judging whether the geometric size of the first-video current image is consistent with that of the second-video current image;
if the geometric sizes are inconsistent, cropping the larger of the two current images, or enlarging the smaller of the two, so that the geometric sizes of the first-video current image and the second-video current image become consistent.
In another aspect, the present invention also provides a real-time splicing apparatus for multiple videos, comprising:
a receiver module for receiving video information comprising at least first video data and second video data, wherein the first video data comprises first-video current-image information and the second video data comprises second-video current-image information;
a storage module for storing the first video data and the second video data in a corresponding first buffer area and second buffer area respectively;
a decoder module for reading the first video data from the first buffer area and decoding it to obtain decoded data comprising the first-video current-image information, and for reading the second video data from the second buffer area and decoding it to obtain decoded data comprising the second-video current-image information;
a splicing module for splicing the first-video current-image information with the second-video current-image information to obtain current spliced-image information of the first-video current image and the second-video current image.
Preferably, the first-video current-image information comprises the pixel YUV data information of the first-video current image, and the second-video current-image information comprises the pixel YUV data information of the second-video current image; the current spliced-image information comprises merged pixel YUV data information combining the pixel YUV data information of the two current images; the pixel YUV data information comprises the luminance data and chrominance data of the pixels.
In the real-time splicing apparatus for multiple videos, preferably, the merged pixel YUV data information is horizontally merged pixel YUV data information, and the splicing module comprises:
an obtaining unit for obtaining, from the pixel YUV data information of the first-video current image, first luminance data and first chrominance data for each row of pixels, and, from the pixel YUV data information of the second-video current image, second luminance data and second chrominance data for each row of pixels;
a horizontal merging unit for horizontally merging the first luminance data of each row of pixels of the first-video current image with the second luminance data of the corresponding row of pixels of the second-video current image, to obtain horizontally merged pixel luminance data;
the horizontal merging unit being further configured to horizontally merge the first chrominance data of each row of pixels of the first-video current image with the second chrominance data of the corresponding row of pixels of the second-video current image, to obtain horizontally merged pixel chrominance data.
In the real-time splicing apparatus for multiple videos, preferably, the merged pixel YUV data information is vertically merged pixel YUV data information, and the splicing module comprises:
an obtaining unit for obtaining, from the pixel YUV data information of the first-video current image, first luminance data and first chrominance data for all of its pixels, and, from the pixel YUV data information of the second-video current image, second luminance data and second chrominance data for all of its pixels;
a vertical merging unit for vertically merging the first luminance data of all pixels of the first-video current image with the second luminance data of all pixels of the second-video current image, to obtain vertically merged pixel luminance data;
the vertical merging unit being further configured to vertically merge the first chrominance data of all pixels of the first-video current image with the second chrominance data of all pixels of the second-video current image, to obtain vertically merged pixel chrominance data.
Preferably, the first-video current-image information comprises the geometric size of the first-video current image, and the second-video current-image information comprises the geometric size of the second-video current image; the real-time splicing apparatus further comprises:
a judging module for judging, before the first-video current-image information and the second-video current-image information are spliced, whether the geometric sizes of the two current images are consistent;
a processing module for cropping the larger of the two current images, or enlarging the smaller of the two, when their geometric sizes are inconsistent, so that the geometric sizes of the first-video current image and the second-video current image become consistent.
With the real-time splicing method for multiple videos of the embodiments of the present invention, the current image information collected at each video acquisition end can be sent to the server in real time. After the server receives the video information containing the first video data and the second video data, it first stores each of them in its corresponding buffer area so that they can be read promptly, which accelerates processing. After decoding, decoded data of the first-video current-image information and of the second-video current-image information is obtained, and this decoded data can be used for splicing the image information. After splicing, current spliced-image information of the first-video current image and the second-video current image is obtained. By splicing a series of consecutive current images of the first video and the second video, continuous spliced-image information is obtained, i.e. the first video data and the second video data are spliced in real time. The splicing workflow of the embodiments is simple, efficient, and flexible, which greatly improves the real-time performance of multi-video splicing.
Brief description of the drawings
Fig. 1 is a flow chart of the real-time splicing method for multiple videos in one embodiment of the present invention.
Fig. 2 is a partial flow chart, based on the video splicing method shown in Fig. 1, in one embodiment of the present invention.
Fig. 3 is a partial flow chart, based on the video splicing method shown in Fig. 1, in another preferred embodiment of the present invention.
Fig. 4 is a partial flow chart, based on the video splicing method shown in Fig. 1, in yet another preferred embodiment of the present invention.
Fig. 5 is a structural diagram of the real-time splicing apparatus for multiple videos in one embodiment of the present invention.
Fig. 6 is a structural diagram, based on the multi-video real-time splicing apparatus shown in Fig. 1, in one embodiment of the present invention.
Fig. 7 is a structural diagram, based on the multi-video real-time splicing apparatus shown in Fig. 1, in another preferred embodiment of the present invention.
Fig. 8 is a structural diagram, based on the multi-video real-time splicing apparatus shown in Fig. 1, in yet another preferred embodiment of the present invention.
Detailed description of the embodiments
For a clear understanding of the technical solutions of the present invention, the invention is described in detail below with reference to the accompanying drawings. The embodiments of the present invention are exemplary; any non-substantive improvement made by those skilled in the art on the basis of these embodiments shall fall within the protection scope of the present invention.
The real-time splicing method for multiple videos shown in Fig. 1 can be applied at a server end and comprises the following steps:
S101: Receive video information comprising at least first video data and second video data; the first video data comprises first-video current-image information and the second video data comprises second-video current-image information.
When a video acquisition end collects video data, it needs to upload the data to the server; after the server processes the current video data, it sends the result to a video display end, which displays the multiple video channels simultaneously. Specifically, when a first user and a second user carry out video communication, a first client collects the first video data and uploads it to the server, and a second client collects the second video data and uploads it to the server; after the server receives the video information containing the first and second video data, it processes the video data and sends the processed video information back to the first client and the second client respectively. Of course, more than two users can also communicate by video at the same time. Alternatively, during video surveillance, multiple monitoring clients collect multiple streams of video data and upload them to the server; after the server receives the multiple streams, it processes the video data and sends the processed video information to a video display end, which displays the multiple video channels simultaneously.
The first-video current-image information refers to the image information currently collected by the first video acquisition end; the second-video current-image information refers to the image information currently collected by the second video acquisition end. Each acquisition end sends its currently collected image information to the server; after server processing, the display end can show in real time the image information collected by the multiple acquisition ends.
Generally, to accelerate transmission and real-time display, each acquisition end sends only the current image information in its video data; of course, to meet different demands, the video data may also contain other image information besides the current image. In addition, the server can receive two, three, or more channels of video information simultaneously.
S102: Store the first video data and the second video data in a corresponding first buffer area and second buffer area respectively.
Buffer areas can be allocated for the first video data and the second video data in advance. When the server receives the first and second video data, it stores the first video data in the first buffer area and the second video data in the second buffer area, so that before splicing the server can quickly read the corresponding video data from the buffers.
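The per-stream buffering described here can be sketched as follows; the class and its method names are illustrative assumptions, not the disclosed implementation:

```python
from collections import deque

class StreamBuffers:
    """Pre-allocated FIFO buffer per video stream, so the splicing step
    can quickly read the matching video data for each stream."""

    def __init__(self, stream_ids, maxlen=30):
        # One bounded deque per stream; old packets are dropped when full.
        self._buffers = {sid: deque(maxlen=maxlen) for sid in stream_ids}

    def store(self, stream_id, packet):
        self._buffers[stream_id].append(packet)

    def read(self, stream_id):
        return self._buffers[stream_id].popleft()

bufs = StreamBuffers(["first", "second"])
bufs.store("first", b"frame-A")
bufs.store("second", b"frame-B")
```

Bounding each buffer keeps a slow consumer from accumulating stale frames, which matters for the real-time goal.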
S103: Read the first video data from the first buffer area and decode it to obtain decoded data comprising the first-video current-image information; read the second video data from the second buffer area and decode it to obtain decoded data comprising the second-video current-image information.
To accelerate the transfer rate between an acquisition end and the server, the acquisition end usually compresses the video data before transmitting it. For example, after collecting the video data, the acquisition end may convert it into the I420 format (i.e. the YUV standard format 4:2:0) or the YV12 format, and then compression-encode the I420 or YV12 YUV data, specifically into an H.264 stream; it then performs RTP encapsulation on the H.264 stream and sends it to the server via the RTP protocol. H.264 is a highly compressed digital video codec standard. RTP (Real-time Transport Protocol) is a network transport protocol. The "Y" in YUV data represents luminance (or luma), i.e. the grayscale value, while "U" and "V" represent chrominance (or chroma), which describes the image color and saturation and specifies the color of a pixel.
The decoded data of the first-video current-image information and of the second-video current-image information obtained through the server's decoding is in the format the data had before the acquisition end compressed it. If the acquisition end compression-encoded YUV data in the I420 or YV12 format, the decoded data obtained by the server is YUV data in the I420 or YV12 format.
S104: Splice the first-video current-image information with the second-video current-image information to obtain current spliced-image information of the first-video current image and the second-video current image.
Splicing the first-video current-image information with the second-video current-image information may specifically mean splicing them horizontally, vertically, diagonally, or in some other arrangement.
The first video data generally also contains source information for the first-video current image, and the second video data generally also contains source information for the second-video current image. Typically, before the two pieces of current-image information are spliced, the method further comprises: arranging, according to the source information of the two current images and a preset arrangement rule, the order of the first-video current-image information and the second-video current-image information within the current spliced-image information. The preset arrangement rule is generally set according to the concrete situation. For example, for horizontal splicing the rule may be: display the first-video current image on the left and the second-video current image on the right. For vertical splicing the rule may be: display the first-video current image on top and the second-video current image below. For diagonal splicing the rule may be: display the first-video current image in the upper-left corner and the second-video current image in the lower-right corner. Of course, "left", "right", "top", "bottom", "upper-left corner", and "lower-right corner" here are all relative terms between the first-video current image and the second-video current image.
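A preset arrangement rule of the kind described can be sketched as a simple lookup (the table contents and source labels are illustrative assumptions):

```python
ARRANGEMENT_RULES = {
    "horizontal": {"first": "left", "second": "right"},
    "vertical": {"first": "top", "second": "bottom"},
    "diagonal": {"first": "upper-left", "second": "lower-right"},
}

def positions(layout, sources):
    """Map each image's source label to its display position under the
    preset arrangement rule for the chosen layout.

    sources: {"first": <source of first video>, "second": <source of second>}
    """
    rule = ARRANGEMENT_RULES[layout]
    return {src: rule[role] for role, src in sources.items()}

placed = positions("horizontal", {"first": "camera-1", "second": "camera-2"})
```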
In addition, if the first-video current-image information and the second-video current-image information are spliced horizontally, the resulting current spliced-image information is horizontal spliced-image information; after the server transmits it to the video display end, the display end can show the first-video current image and the second-video current image simultaneously, arranged horizontally. If the two pieces of current-image information are spliced vertically, the resulting current spliced-image information is vertical spliced-image information; after the server transmits it to the video display end, the display end can show the two current images simultaneously, arranged vertically.
Generally, for convenience of decoding, the same encoding method should be used when the first-video and second-video current-image information are encoded, so that the same decoding method can be used for both; this speeds up decoding and thus further improves splicing efficiency. In addition, the first-video current image and the second-video current image should have the same resolution, to facilitate using the same encoding method.
The server in the embodiments of the present invention can specifically work with two threads, a main thread and a sub-thread. The main thread is responsible for receiving the video information containing at least the first video data and the second video data, and for storing the first and second video data in the corresponding first buffer area and second buffer area respectively. The sub-thread is responsible for the video splicing process, which may include: reading the first video data and the second video data from the main thread's first and second buffer areas respectively, decapsulating the first and second video data to obtain the decoded data of the first-video current-image information and of the second-video current-image information, and splicing the two sets of decoded data to obtain the current spliced-image information of the first-video current image and the second-video current image.
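The two-thread split described above can be sketched with a thread-safe queue standing in for the buffer areas; the decoding step is reduced to an identity and the splice to string concatenation, purely for illustration:

```python
import queue
import threading

packets = queue.Queue()
spliced = []

def main_thread(received):
    """Main-thread role: receive video data and buffer it for the worker."""
    for item in received:
        packets.put(item)
    packets.put(None)  # sentinel: no more data

def splice_thread():
    """Sub-thread role: read buffered pairs, 'decode' them (identity
    here), and splice the two current images side by side."""
    while True:
        item = packets.get()
        if item is None:
            break
        first_img, second_img = item
        spliced.append(first_img + "|" + second_img)

worker = threading.Thread(target=splice_thread)
worker.start()
main_thread([("f1", "s1"), ("f2", "s2")])
worker.join()
```

Decoupling receipt from splicing this way lets the main thread keep accepting packets while the worker is still busy with the previous frame pair.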
With the real-time splicing method for multiple videos of the embodiments of the present invention, the current image information collected at each video acquisition end can be sent to the server in real time. After the server receives the video information containing the first video data and the second video data, it first stores each of them in its corresponding buffer area so that they can be read promptly, which accelerates processing. After server decoding, decoded data of the first-video current-image information and of the second-video current-image information is obtained and can be used to splice the image information. After splicing, current spliced-image information of the first-video current image and the second-video current image is obtained. By splicing a series of consecutive current images of the first video and the second video, continuous spliced-image information is obtained, i.e. the first video data and the second video data are spliced in real time. The splicing workflow of the embodiments is simple, efficient, and flexible, which greatly improves the real-time performance of multi-video splicing.
On the basis of the real-time splicing method for multiple videos shown in Fig. 1, preferably, the first-video current-image information comprises the pixel YUV data information of the first-video current image, and the second-video current-image information comprises the pixel YUV data information of the second-video current image; the current spliced-image information comprises the merged pixel YUV data information combining the pixel YUV data information of the two current images; the pixel YUV data information comprises the luminance data and chrominance data of the pixels.
That the first-video current-image information received by the server comprises the pixel YUV data information of the first-video current image, and likewise for the second video, can mean: the acquisition end converts the pixel data information of the collected first-video (or second-video) current image into the I420 format (i.e. the YUV standard format 4:2:0) or the YV12 format, which constitutes the pixel YUV data information of that current image; it then compression-encodes this pixel YUV data information and transmits the compressed result to the server. Transmitting compressed pixel YUV data information, compared with transmitting data in RGB format, reduces the data size and can improve the transfer rate, further improving the real-time performance of the splicing.
Further, on the basis of the real-time joining method of many videos shown in Fig. 1, as shown in Fig. 2 wherein, described merging
Pixel yuv data information is that level merges pixel yuv data information, obtains the step bag that level merges pixel yuv data information
Include:
S201:Every a line picture of the first video present image is obtained from the pixel yuv data information of the first video present image
First bright degrees of data of element and the first chroma data, obtain the from the pixel yuv data information of the second video present image
The bright degrees of data of the second of every one-row pixels of two video present images and the second chroma data.
Here, the first luminance data and first chrominance data of each row of pixels of the first video current image refer to the luminance data and chrominance data of every pixel in that row; likewise, the second luminance data and second chrominance data of each row of pixels of the second video current image refer to the luminance data and chrominance data of every pixel in that row. For example, if the resolution of the first video current image is 704*288, the image has 288 rows of 704 pixels each, and the first luminance data and first chrominance data of each row of pixels are the luminance data and chrominance data of each of the 704 pixels in that row.
S202: Horizontally merge the first luminance data of each row of pixels of the first video current image with the second luminance data of the corresponding row of pixels of the second video current image, obtaining the horizontally merged pixel luminance data.
A corresponding row means that the rows of pixels of the first video current image are matched with the rows of pixels of the second video current image according to a preset row rule. The preset row rule may be as follows. If the two images have the same number of pixel rows, the first row of the first video current image corresponds to the first row of the second video current image, the second row to the second row, the third row to the third row, and so on, until the last row of the first video current image corresponds to the last row of the second video current image. Alternatively, if the first video current image has more pixel rows than the second video current image, the first row of the first video current image corresponds to the first row of the second video current image, the second row to the second row, and so on, until the Nth row of the first video current image corresponds to the last row of the second video current image; the remaining rows of the first video current image (rows N+1, N+2, ..., through the last row) need no horizontal merging and can be used directly as horizontally merged pixel luminance data, so that the first video current image is displayed in full; here N is the number of pixel rows of the second video current image. As a further alternative, if the first video current image has more pixel rows than the second video current image, the Mth row of the first video current image may correspond to the first row of the second video current image, the (M+1)th row to the second row, and so on, until the (M+N)th row of the first video current image corresponds to the last row of the second video current image; in this case the rows of the first video current image above row M (rows 1 through M-1) and below row M+N (rows M+N+1, M+N+2, ..., through the last row) need no horizontal merging and can be used directly as horizontally merged pixel luminance data, so that the first video current image is displayed in full; here N is the number of pixel rows of the second video current image, and M is the row of the first video current image chosen, as needed, to correspond to the first row of the second video current image. Of course, the row rule above applies equally when the second video current image has more pixel rows than the first, and other row rules can be set as needed.
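The row rules above amount to a simple mapping between row indices. A minimal sketch, using a hypothetical helper and 0-based rows (neither is from the patent):

```python
def row_correspondence(rows_a, rows_b, offset=0):
    """Map rows of the second image (B) onto rows of the first image (A)
    under the preset row rule. offset is the row M (0-based) of A aligned
    with row 0 of B; offset=0 gives the top-aligned rule. Rows of A absent
    from the returned dict are copied through without horizontal merging."""
    assert rows_a >= offset + rows_b, "A must span the aligned region"
    return {offset + j: j for j in range(rows_b)}

# Top-aligned rule: rows 0..287 of A pair with rows 0..287 of B.
pairs = row_correspondence(576, 288, offset=0)
# Centered-style rule with M=144: rows 144..431 of A pair with B;
# rows 0..143 and 432..575 of A are used as-is.
shifted = row_correspondence(576, 288, offset=144)
```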
The horizontally merged pixel luminance data is pixel luminance data extended in the horizontal direction. For example: the resolution of the first video current image is 704*288, and the luminance data of its first row of pixels is a1, a2, a3 ... a704; the resolution of the second video current image is 704*288, and the luminance data of its first row of pixels is b1, b2, b3 ... b704. After horizontal merging, the first row of horizontally merged pixel luminance data is: a1, a2, a3 ... a704, b1, b2, b3 ... b704. The second row, third row, and subsequent rows of horizontally merged pixel luminance data are obtained in the same way.
S203: Horizontally merge the first chrominance data of each row of pixels of the first video current image with the second chrominance data of the corresponding row of pixels of the second video current image, obtaining the horizontally merged pixel chrominance data.
The horizontally merged pixel chrominance data is pixel chrominance data extended in the horizontal direction. For example: the resolution of the first video current image is 704*288, and the chrominance data of its first row of pixels is c1, c2, c3 ... c704; the resolution of the second video current image is 704*288, and the chrominance data of its first row of pixels is d1, d2, d3 ... d704. After horizontal merging, the first row of horizontally merged pixel chrominance data is: c1, c2, c3 ... c704, d1, d2, d3 ... d704. The second row, third row, and subsequent rows of horizontally merged pixel chrominance data are obtained in the same way.
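Steps S202 and S203 can be sketched as row-wise concatenation (an illustrative sketch with hypothetical names, applicable to either the luminance plane or a chrominance plane):

```python
def merge_rows_horizontally(plane_a, plane_b):
    """Concatenate corresponding rows of two planes (luminance or
    chrominance), producing the horizontally merged plane of S202/S203.
    Both planes are lists of rows and must have equal row counts."""
    assert len(plane_a) == len(plane_b), "equal row counts required"
    return [row_a + row_b for row_a, row_b in zip(plane_a, plane_b)]

# Two tiny 2x3 planes of luminance samples:
y_a = [[1, 2, 3], [4, 5, 6]]
y_b = [[7, 8, 9], [10, 11, 12]]
merged = merge_rows_horizontally(y_a, y_b)
# merged == [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]]
```

The merged image keeps the common height and has the summed width, matching the dimensions described below.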
With the horizontal splicing adopted in the multi-video real-time splicing method of the embodiment of the present invention, the height of the spliced video image equals the height of the taller of the first video current image and the second video current image before splicing, and its width equals the sum of the widths of the first video current image and the second video current image before splicing. The splicing flow of the embodiment of the present invention is simple, efficient, and flexible, and greatly improves the real-time performance of multi-video splicing; moreover, the video spliced by the method of the embodiment of the present invention is suitable when the height of the display screen is insufficient to show the first video and the second video at the same time while the screen width can show both at the same time.
On the basis of the multi-video real-time splicing method shown in Fig. 1, as shown in Fig. 3, when the merged pixel YUV data is vertically merged pixel YUV data, the step of obtaining the vertically merged pixel YUV data includes:
S301: Obtain, from the pixel YUV data of the first video current image, the first luminance data and the first chrominance data of all pixels of the first video current image; obtain, from the pixel YUV data of the second video current image, the second luminance data and the second chrominance data of all pixels of the second video current image.
Here, the first luminance data and first chrominance data of all pixels of the first video current image comprise the luminance data and chrominance data of every pixel in that image; the second luminance data and second chrominance data of all pixels of the second video current image comprise the luminance data and chrominance data of every pixel in that image. For example, if the resolution of the first video current image is 704*288, the first luminance data and first chrominance data of all its pixels comprise the luminance data and chrominance data of all 704*288 pixels.
S302: Vertically merge the first luminance data of all pixels of the first video current image with the second luminance data of all pixels of the second video current image, obtaining the vertically merged pixel luminance data.
The vertically merged pixel luminance data is pixel luminance data extended in the vertical direction. For example: the resolution of the first video current image is 704*288, and the luminance data of all its pixels is e1, e2, e3 ... e704*288; the resolution of the second video current image is 704*288, and the luminance data of all its pixels is f1, f2, f3 ... f704*288. After vertical merging, the vertically merged pixel luminance data is: e1, e2, e3 ... e704*288, f1, f2, f3 ... f704*288.
S303: Vertically merge the first chrominance data of all pixels of the first video current image with the second chrominance data of all pixels of the second video current image, obtaining the vertically merged pixel chrominance data.
The vertically merged pixel chrominance data is pixel chrominance data extended in the vertical direction. For example: the resolution of the first video current image is 704*288, and the chrominance data of all its pixels is g1, g2, g3 ... g704*288; the resolution of the second video current image is 704*288, and the chrominance data of all its pixels is h1, h2, h3 ... h704*288. After vertical merging, the vertically merged pixel chrominance data is: g1, g2, g3 ... g704*288, h1, h2, h3 ... h704*288.
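Steps S302 and S303 are even simpler than the horizontal case: with row-major storage, stacking one image below another is plain concatenation of the two planes. A minimal sketch (hypothetical names, not the patent's code):

```python
def merge_planes_vertically(plane_a, plane_b):
    """Stack plane_b below plane_a (S302/S303). With row-major storage,
    vertical merging is concatenation of the two planes' rows; both
    planes must share the same width."""
    assert not plane_a or not plane_b or len(plane_a[0]) == len(plane_b[0])
    return plane_a + plane_b

y_a = [[1, 2, 3], [4, 5, 6]]   # 3x2 plane
y_b = [[7, 8, 9]]              # 3x1 plane
stacked = merge_planes_vertically(y_a, y_b)
# stacked == [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
```

The result keeps the common width and has the summed height, matching the dimensions described below.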
With the vertical splicing adopted in the multi-video real-time splicing method of the embodiment of the present invention, the width of the spliced video image equals the width of the wider of the first video current image and the second video current image before splicing, and its height equals the sum of the heights of the first video current image and the second video current image before splicing. The splicing flow of the embodiment of the present invention is simple, efficient, and flexible, and greatly improves the real-time performance of multi-video splicing; moreover, the video spliced by the method of the embodiment of the present invention is suitable when the width of the display screen is insufficient to show the first video and the second video at the same time while the screen height can show both at the same time.
Preferably, on the basis of the multi-video real-time splicing method shown in Fig. 1, as shown in Fig. 4, the first video current-image information includes the shape and size of the first video current image, and the second video current-image information includes the shape and size of the second video current image; before splicing the first video current-image information and the second video current-image information, the method further includes:
S1041: Judge whether the shape and size of the first video current image are consistent with the shape and size of the second video current image.
S1042: If the shape and size of the first video current image are inconsistent with those of the second video current image, crop the larger of the first video current image and the second video current image, or enlarge the smaller of the first video current image and the second video current image, so that the shapes and sizes of the first video current image and the second video current image are consistent.
Generally, the shape and size include the height and width of the image. If the images are of equal height, they can be spliced horizontally; if they are of equal width, they can be spliced vertically. This avoids a jagged edge in the spliced image. If neither the heights nor the widths are equal, the shapes and sizes of the images can be adjusted as needed. Of course, the images may also have other shapes, such as circular, triangular, or trapezoidal; according to the different splicing requirements, the shapes and sizes of the images can be adjusted accordingly.
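The height-matching step can be sketched as follows (an illustrative sketch with hypothetical names; only the cropping option of S1042 is shown, and scaling up the smaller image is the other option the text names):

```python
def match_heights(img_a, img_b):
    """Make two images (lists of pixel rows) equal in height before a
    horizontal splice, by cropping the taller one to the shorter
    height (one of the two adjustments described in S1042)."""
    h = min(len(img_a), len(img_b))
    return img_a[:h], img_b[:h]

a = [[0] * 4 for _ in range(6)]   # 4-wide, 6-tall image
b = [[1] * 4 for _ in range(4)]   # 4-wide, 4-tall image
a2, b2 = match_heights(a, b)
# Both now have 4 rows, so their rows can be merged one-to-one.
```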
The multi-video real-time splicing apparatus shown in Fig. 5 includes:
a receiving module, for receiving video information including at least first video data and second video data, the first video data including first video current-image information and the second video data including second video current-image information;
a storage module, for storing the first video data and the second video data into a corresponding first buffer area and second buffer area, respectively;
a decoding module, for reading the first video data in the first buffer area and decoding the first video data to obtain decoded data comprising the first video current-image information, and for reading the second video data in the second buffer area and decoding the second video data to obtain decoded data comprising the second video current-image information;
a splicing module, for splicing the first video current-image information and the second video current-image information to obtain current spliced-image information of the first video current image and the second video current image.
The first video data generally also includes source information of the first video current image, and the second video data generally also includes source information of the second video current image. Typically, the multi-video real-time splicing apparatus further includes an arrangement module, for arranging, before the first video current-image information and the second video current-image information are spliced, the order of the first video current-image information and the second video current-image information in the current spliced-image information according to the source information of the first video current image, the source information of the second video current image, and a preset arrangement rule.
The multi-video real-time splicing apparatus of the embodiment of the present invention can implement the multi-video real-time splicing method described above. The current-image information collected by the video acquisition terminals can be sent to the server in real time. After the server receives the video information containing the first video data and the second video data, to speed up processing it first stores the first video data and the second video data in their corresponding buffer areas, so that they can be read promptly. Through decoding by the server, the decoded data of the first video current-image information and the decoded data of the second video current-image information are obtained, and the decoded data can then be used to splice the image information. Through splicing, the current spliced-image information of the first video current image and the second video current image is obtained; by splicing multiple successive current images of the first video and the second video, continuous spliced-image information is obtained, realizing real-time splicing of the first video data and the second video data. With the splicing apparatus of the embodiment of the present invention, the splicing flow is simple, efficient, and flexible, and the real-time performance of multi-video splicing is greatly improved.
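The receive/buffer/decode/splice flow above can be sketched as a toy pipeline. Everything here is illustrative: `decode` is a stand-in for a real video decoder, and the queue-backed buffer areas are one plausible realization, not the patent's implementation.

```python
import queue

buf1, buf2 = queue.Queue(), queue.Queue()   # first / second buffer areas

def decode(packet):
    """Stand-in decoder: pretend the packet is already a YUV frame."""
    return packet

def splice(frame1, frame2):
    """Stand-in splice: stack frame2 below frame1 (a vertical merge)."""
    return frame1 + frame2

def receive(video_info):
    """Store each stream's data in its own buffer area on arrival."""
    buf1.put(video_info["first"])
    buf2.put(video_info["second"])

receive({"first": [[1, 2]], "second": [[3, 4]]})
current = splice(decode(buf1.get()), decode(buf2.get()))
# current == [[1, 2], [3, 4]]
```

Running this loop once per arriving frame pair yields the continuous spliced-image stream described above.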
Preferably, the first video current-image information includes the pixel YUV data of the first video current image, and the second video current-image information includes the pixel YUV data of the second video current image; the current spliced-image information includes the merged pixel YUV data of the pixel YUV data of the first video current image and the pixel YUV data of the second video current image; the pixel YUV data includes the luminance data and the chrominance data of the pixels.
On the basis of the multi-video real-time splicing apparatus shown in Fig. 5, as shown in Fig. 6, when the merged pixel YUV data is horizontally merged pixel YUV data, the splicing module includes:
an obtaining unit, for obtaining, from the pixel YUV data of the first video current image, the first luminance data and first chrominance data of each row of pixels of the first video current image, and obtaining, from the pixel YUV data of the second video current image, the second luminance data and second chrominance data of each row of pixels of the second video current image;
a horizontal merging unit, for horizontally merging the first luminance data of each row of pixels of the first video current image with the second luminance data of the corresponding row of pixels of the second video current image, obtaining horizontally merged pixel luminance data;
the horizontal merging unit is further used for horizontally merging the first chrominance data of each row of pixels of the first video current image with the second chrominance data of the corresponding row of pixels of the second video current image, obtaining horizontally merged pixel chrominance data.
The multi-video real-time splicing apparatus of the embodiment of the present invention can be used for horizontal splicing. The height of the horizontally spliced video image equals the height of the taller of the first video current image and the second video current image before splicing, and its width equals the sum of the widths of the first video current image and the second video current image before splicing. Splicing with this apparatus is simple, efficient, and flexible, and greatly improves the real-time performance of multi-video splicing; moreover, the video spliced by the splicing apparatus of the embodiment of the present invention is suitable when the height of the display screen is insufficient to show the first video and the second video at the same time while the screen width can show both at the same time.
On the basis of the multi-video real-time splicing apparatus shown in Fig. 5, as shown in Fig. 7, when the merged pixel YUV data is vertically merged pixel YUV data, the splicing module includes:
an obtaining unit, for obtaining, from the pixel YUV data of the first video current image, the first luminance data and first chrominance data of all pixels of the first video current image, and obtaining, from the pixel YUV data of the second video current image, the second luminance data and second chrominance data of all pixels of the second video current image;
a vertical merging unit, for vertically merging the first luminance data of all pixels of the first video current image with the second luminance data of all pixels of the second video current image, obtaining vertically merged pixel luminance data;
the vertical merging unit is further used for vertically merging the first chrominance data of all pixels of the first video current image with the second chrominance data of all pixels of the second video current image, obtaining vertically merged pixel chrominance data.
The multi-video real-time splicing apparatus of the embodiment of the present invention can also perform vertical splicing. The width of the vertically spliced video image equals the width of the wider of the first video current image and the second video current image before splicing, and its height equals the sum of the heights of the first video current image and the second video current image before splicing. Splicing with the apparatus of the embodiment of the present invention is simple, efficient, and flexible, and greatly improves the real-time performance of multi-video splicing; moreover, the video spliced by the splicing apparatus of the embodiment of the present invention is suitable when the width of the display screen is insufficient to show the first video and the second video at the same time while the screen height can show both at the same time.
On the basis of the multi-video real-time splicing apparatus shown in Fig. 5, as shown in Fig. 8, the first video current-image information includes the shape and size of the first video current image, the second video current-image information includes the shape and size of the second video current image, and the multi-video real-time splicing apparatus further includes:
a judging module, for judging, before the first video current-image information and the second video current-image information are spliced, whether the shape and size of the first video current image are consistent with the shape and size of the second video current image;
a processing module, for cropping the larger of the first video current image and the second video current image, or enlarging the smaller of the first video current image and the second video current image, if their shapes and sizes are inconsistent, so that the shapes and sizes of the first video current image and the second video current image are consistent.
The above are only embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can readily occur to those skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the scope of the claims.
Claims (10)
1. A multi-video real-time splicing method, comprising the following steps:
receiving video information including at least first video data and second video data, the first video data including first video current-image information and the second video data including second video current-image information;
storing the first video data and the second video data into a corresponding first buffer area and second buffer area, respectively;
reading the first video data in the first buffer area and decoding the first video data to obtain decoded data comprising the first video current-image information; reading the second video data in the second buffer area and decoding the second video data to obtain decoded data comprising the second video current-image information;
splicing the first video current-image information and the second video current-image information to obtain current spliced-image information of the first video current image and the second video current image.
2. The multi-video real-time splicing method according to claim 1, characterized in that the first video current-image information includes pixel YUV data of the first video current image, and the second video current-image information includes pixel YUV data of the second video current image; the current spliced-image information includes merged pixel YUV data of the pixel YUV data of the first video current image and the pixel YUV data of the second video current image; the pixel YUV data includes luminance data and chrominance data of the pixels.
3. The multi-video real-time splicing method according to claim 2, characterized in that the merged pixel YUV data is horizontally merged pixel YUV data, and the step of obtaining the horizontally merged pixel YUV data includes:
obtaining, from the pixel YUV data of the first video current image, first luminance data and first chrominance data of each row of pixels of the first video current image, and obtaining, from the pixel YUV data of the second video current image, second luminance data and second chrominance data of each row of pixels of the second video current image;
horizontally merging the first luminance data of each row of pixels of the first video current image with the second luminance data of the corresponding row of pixels of the second video current image, obtaining horizontally merged pixel luminance data;
horizontally merging the first chrominance data of each row of pixels of the first video current image with the second chrominance data of the corresponding row of pixels of the second video current image, obtaining horizontally merged pixel chrominance data.
4. The multi-video real-time splicing method according to claim 2, characterized in that the merged pixel YUV data is vertically merged pixel YUV data, and the step of obtaining the vertically merged pixel YUV data includes:
obtaining, from the pixel YUV data of the first video current image, first luminance data and first chrominance data of all pixels of the first video current image, and obtaining, from the pixel YUV data of the second video current image, second luminance data and second chrominance data of all pixels of the second video current image;
vertically merging the first luminance data of all pixels of the first video current image with the second luminance data of all pixels of the second video current image, obtaining vertically merged pixel luminance data;
vertically merging the first chrominance data of all pixels of the first video current image with the second chrominance data of all pixels of the second video current image, obtaining vertically merged pixel chrominance data.
5. The multi-video real-time splicing method according to claim 1, characterized in that the first video current-image information includes the shape and size of the first video current image, and the second video current-image information includes the shape and size of the second video current image; before splicing the first video current-image information and the second video current-image information, the method further includes:
judging whether the shape and size of the first video current image are consistent with the shape and size of the second video current image;
if the shape and size of the first video current image are inconsistent with those of the second video current image, cropping the larger of the first video current image and the second video current image, or enlarging the smaller of the first video current image and the second video current image, so that the shapes and sizes of the first video current image and the second video current image are consistent.
6. A multi-video real-time splicing apparatus, comprising:
a receiving module, for receiving video information including at least first video data and second video data, the first video data including first video current-image information and the second video data including second video current-image information;
a storage module, for storing the first video data and the second video data into a corresponding first buffer area and second buffer area, respectively;
a decoding module, for reading the first video data in the first buffer area and decoding the first video data to obtain decoded data comprising the first video current-image information, and for reading the second video data in the second buffer area and decoding the second video data to obtain decoded data comprising the second video current-image information;
a splicing module, for splicing the first video current-image information and the second video current-image information to obtain current spliced-image information of the first video current image and the second video current image.
7. The multi-video real-time splicing apparatus according to claim 6, characterized in that the first video current-image information includes pixel YUV data of the first video current image, and the second video current-image information includes pixel YUV data of the second video current image; the current spliced-image information includes merged pixel YUV data of the pixel YUV data of the first video current image and the pixel YUV data of the second video current image; the pixel YUV data includes luminance data and chrominance data of the pixels.
8. The multi-video real-time splicing apparatus according to claim 7, characterized in that the merged pixel YUV data is horizontally merged pixel YUV data, and the splicing module includes:
an obtaining unit, for obtaining, from the pixel YUV data of the first video current image, first luminance data and first chrominance data of each row of pixels of the first video current image, and obtaining, from the pixel YUV data of the second video current image, second luminance data and second chrominance data of each row of pixels of the second video current image;
a horizontal merging unit, for horizontally merging the first luminance data of each row of pixels of the first video current image with the second luminance data of the corresponding row of pixels of the second video current image, obtaining horizontally merged pixel luminance data;
the horizontal merging unit being further used for horizontally merging the first chrominance data of each row of pixels of the first video current image with the second chrominance data of the corresponding row of pixels of the second video current image, obtaining horizontally merged pixel chrominance data.
9. The multi-video real-time splicing apparatus according to claim 7, wherein the merged pixel YUV data is vertically merged pixel YUV data, and the splicing module comprises:
an obtaining unit, configured to obtain the first luminance data and the first chrominance data of all pixels of the first video's current image from the pixel YUV data of the first video's current image, and to obtain the second luminance data and the second chrominance data of all pixels of the second video's current image from the pixel YUV data of the second video's current image;
a vertical merging unit, configured to vertically merge the first luminance data of all pixels of the first video's current image with the second luminance data of all pixels of the second video's current image, obtaining vertically merged pixel luminance data;
the vertical merging unit being further configured to vertically merge the first chrominance data of all pixels of the first video's current image with the second chrominance data of all pixels of the second video's current image, obtaining vertically merged pixel chrominance data.
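Claim 9's vertical merge is even simpler to sketch under the same assumed representation (planar YUV frames as dicts of plane row-lists; names are illustrative, not from the patent): the rows of the second frame's plane are appended below the rows of the first frame's plane.

```python
def merge_vertical(frame_a, frame_b):
    """Merge two planar YUV frames top-to-bottom: for each plane, all
    rows of frame_a followed by all rows of frame_b."""
    for plane in ('Y', 'U', 'V'):
        assert len(frame_a[plane][0]) == len(frame_b[plane][0]), \
            "planes must have the same width"
    return {plane: frame_a[plane] + frame_b[plane]
            for plane in ('Y', 'U', 'V')}

a = {'Y': [[1, 2], [3, 4]], 'U': [[10]], 'V': [[20]]}
b = {'Y': [[5, 6], [7, 8]], 'U': [[11]], 'V': [[21]]}
stacked = merge_vertical(a, b)
print(stacked['Y'])  # [[1, 2], [3, 4], [5, 6], [7, 8]]
```

For horizontal merging the frames must agree in height; for vertical merging they must agree in width, which is exactly the consistency condition claim 10 enforces.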
10. The multi-video real-time splicing apparatus according to claim 6, wherein the first video current-image information includes the geometric size of the first video's current image, and the second video current-image information includes the geometric size of the second video's current image; and the multi-video real-time splicing apparatus further comprises:
a judging module, configured to judge, before the first video current-image information and the second video current-image information are spliced, whether the geometric size of the first video's current image is consistent with the geometric size of the second video's current image;
a processing module, configured to, when the geometric sizes of the first video's current image and the second video's current image are not consistent, crop the larger of the two images or enlarge the smaller of the two images, so that the geometric sizes of the first video's current image and the second video's current image become consistent.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610955546.1A CN106454256B (en) | 2016-11-03 | 2016-11-03 | A kind of real-time joining method of more videos and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106454256A true CN106454256A (en) | 2017-02-22 |
CN106454256B CN106454256B (en) | 2019-09-13 |
Family
ID=58179989
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610955546.1A Active CN106454256B (en) | 2016-11-03 | 2016-11-03 | A kind of real-time joining method of more videos and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106454256B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20110037486A (en) * | 2009-10-07 | 2011-04-13 | (주)아구스 | Intelligent video surveillance device |
CN102256111A (en) * | 2011-07-17 | 2011-11-23 | 西安电子科技大学 | Multi-channel panoramic video real-time monitoring system and method |
CN102724477A (en) * | 2012-05-25 | 2012-10-10 | 黑龙江大学 | Device and method for carrying out real-time splicing on surveillance videos based on FPGA (field programmable gata array) |
CN103595896A (en) * | 2013-11-19 | 2014-02-19 | 广东威创视讯科技股份有限公司 | Method and system for synchronously displaying images with UHD resolution ratio |
CN103686307A (en) * | 2013-12-24 | 2014-03-26 | 北京航天测控技术有限公司 | Digital signal processor based multi-screen splicing display device |
CN104618648A (en) * | 2015-01-29 | 2015-05-13 | 桂林长海发展有限责任公司 | Panoramic video splicing system and splicing method |
2016-11-03: application CN201610955546.1A filed (CN), granted as patent CN106454256B/en, status Active
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107071358A (en) * | 2017-04-19 | 2017-08-18 | 中国电子科技集团公司电子科学研究院 | Panorama live broadcast system under video-splicing server and mobile status |
CN110570614A (en) * | 2018-06-05 | 2019-12-13 | 杭州海康威视数字技术股份有限公司 | Video monitoring system and intelligent camera |
CN110570614B (en) * | 2018-06-05 | 2022-03-04 | 杭州海康威视数字技术股份有限公司 | Video monitoring system and intelligent camera |
WO2020094089A1 (en) * | 2018-11-08 | 2020-05-14 | 北京字节跳动网络技术有限公司 | Video picture adjustment method and apparatus, and computer device and storage medium |
KR20200141468A (en) * | 2018-11-08 | 2020-12-18 | 베이징 마이크로라이브 비전 테크놀로지 컴퍼니 리미티드 | Video screen adjustment method and device, computer equipment and storage medium |
US11144201B2 (en) | 2018-11-08 | 2021-10-12 | Beijing Microlive Vision Technology Co., Ltd | Video picture adjustment method and apparatus, computer device and storage medium |
KR102490938B1 (en) * | 2018-11-08 | 2023-01-19 | 베이징 마이크로라이브 비전 테크놀로지 컴퍼니 리미티드 | Video screen adjustment method and device, computer equipment and storage media |
CN110087054A (en) * | 2019-06-06 | 2019-08-02 | 北京七鑫易维科技有限公司 | The processing method of image, apparatus and system |
CN110087054B (en) * | 2019-06-06 | 2021-06-18 | 北京七鑫易维科技有限公司 | Image processing method, device and system |
CN111918142A (en) * | 2020-07-29 | 2020-11-10 | 杭州叙简科技股份有限公司 | Smoothing method, device, equipment and medium for converting national standard video code stream into RTP stream |
CN113315940A (en) * | 2021-03-23 | 2021-08-27 | 海南视联通信技术有限公司 | Video call method, device and computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN106454256B (en) | 2019-09-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106454256A (en) | Real-time splicing method and apparatus of multiple videos | |
US20190297362A1 (en) | Downstream video composition | |
US10511803B2 (en) | Video signal transmission method and device | |
CN106878658B (en) | Automatic video layout for multi-stream multi-site telepresence conferencing system | |
US8184142B2 (en) | Method and system for composing video images from a plurality of endpoints | |
EP2521351A1 (en) | Method and device for processing multi-picture video image | |
US20080291265A1 (en) | Smart cropping of video images in a videoconferencing session | |
US8427520B2 (en) | Removing a self image from a continuous presence video image | |
CN109640167B (en) | Video processing method and device, electronic equipment and storage medium | |
US10334219B2 (en) | Apparatus for switching/routing image signals through bandwidth splitting and reduction and the method thereof | |
KR20150083012A (en) | Method and Apparatus for Generating Single Bit Stream from Multiple Video Stream | |
EP2713648A2 (en) | Method and apparatus for controlling a data rate in a wireless communication system | |
CN107113447A (en) | High frame rate low frame rate rate transmission technology | |
US10674163B2 (en) | Color space compression | |
CN109587489A (en) | A kind of method of video compression | |
CN101437159B (en) | Method and apparatus for sending digital image | |
CN108307163A (en) | Image processing method and device, computer installation and readable storage medium storing program for executing | |
EP2526689B1 (en) | Method for transporting information and/or application data inside a digital video stream, and relative devices for generating and playing such video stream | |
TWI472232B (en) | Video transmission by decoupling color components and apparatus thereof and processor readable tangible medium encoded with instructions | |
CN114827620A (en) | Image processing method, apparatus, device and medium | |
CN114245027A (en) | Video data mixing processing method, system, electronic equipment and storage medium | |
CN114173156A (en) | Video transmission method, electronic device, and storage medium | |
CN114422734B (en) | Video recorder, video data processing method and device and electronic equipment | |
CN106792123A (en) | Dynamic station symbol embedded system and method | |
CN115002468A (en) | Video processing method, device and system and client |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||