CN105282477A - Multiparty video data fusion realization method, device, system and fusion server - Google Patents

Multiparty video data fusion realization method, device, system and fusion server

Info

Publication number: CN105282477A
Application number: CN201410254027.3A
Authority: CN (China)
Prior art keywords: user terminal, video display, area information, memory map, display area
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 乔玮
Current Assignee: China Mobile Communications Group Co Ltd
Original Assignee: China Mobile Communications Group Co Ltd
Application filed by China Mobile Communications Group Co Ltd

Abstract

The invention discloses a multiparty video data fusion (mixed-screen) realization method, device, system and fusion server. The objective of the invention is to perform the fusion of multiparty video data at the server end, so as to reduce the bandwidth the fusion server needs when transmitting video data to terminals, and to reduce the processing resources and power the terminals consume when processing video data. The method includes the following steps: receiving the video image data transmitted by each user terminal participating in the multiparty video call; drawing each frame of video image data onto one of the video image data drawing regions contained in a predefined memory map; and, according to the video display region information of each user terminal, sending the memory map corresponding to that video display region information to the corresponding user terminal, wherein the video display region information indicates the video image data to be displayed by the user terminal.

Description

Multi-party video data mixed-screen implementation method, device, system and mixed-screen server
Technical field
The present invention relates to the field of video data processing technology, and in particular to a multi-party video data mixed-screen implementation method, device, system and mixed-screen server.
Background technology
With the rapid development of the mobile Internet, providing multi-party video call services on mobile terminals has become one of the hot mobile Internet services. Existing multi-party video call services implemented on mobile terminals all adopt the conventional approach of forwarding the individual video streams and mixing the screen at the terminal: the server on the network side distributes the video data of every user in the multi-party call to each of the other users in the current call, and each terminal then performs the screen-mixing operation itself. Taking a three-party video call as an example, Fig. 1 shows the existing network-side video data processing: the video server on the network side receives the video data sent by user 1, user 2 and user 3, sends the video data of user 2 and user 3 to user 1, the video data of user 1 and user 3 to user 2, and the video data of user 1 and user 2 to user 3; after receiving the video data, user 1, user 2 and user 3 each carry out the screen-mixing processing of the video data.
In a multi-party video call, the video server has to send every other participant's video data to each user terminal in the call; as the number of users in a call grows, the bandwidth needed for transmitting data to each terminal grows accordingly. The downlink bandwidth consumed by the video server is as follows, where m is the number of calls and n is the number of participants in a call:
BW = Σ_{0}^{m} ( n · (n − 1) ) · Avg(Kbps)
In the above video data handling procedure, each user terminal performs the screen mixing itself after receiving the video data of the other users issued by the video server, as shown in Fig. 2, a schematic diagram of a user terminal processing the received video data. Because a terminal has to receive the video data of every other user terminal in the call, its demand for downlink transmission bandwidth increases, and because the terminal has to decode each video stream separately, the performance requirements placed on the user terminal are high. The downlink bandwidth, CPU usage and power consumption of a user terminal are as follows.
The downlink bandwidth needed by a terminal, where n is the number of participants in the call:
BW = Σ_{0}^{n−1} Avg(Kbps)
The power consumed by a terminal, where n is the number of participants and Q is the power consumed to decode a single video stream:
P = Σ_{0}^{n−1} Avg(Q)
The CPU (central processing unit) load of a terminal, where n is the number of participants and CPU is the CPU usage of decoding a single video stream:
Load = Σ_{0}^{n−1} Avg(CPU)
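For illustration only, the following minimal Python sketch puts the conventional-scheme cost formulas above into code; the function names and the example figures are assumptions, not values from the patent.

```python
# Sketch of the conventional split-and-forward cost model described above.

def server_downlink_kbps(calls, avg_stream_kbps):
    """calls: list of participant counts n, one entry per ongoing call.
    Each of the n participants receives the other n-1 streams,
    so the server forwards n*(n-1) streams per call."""
    return sum(n * (n - 1) for n in calls) * avg_stream_kbps

def terminal_downlink_kbps(n, avg_stream_kbps):
    # A terminal in an n-party call receives the other n-1 streams.
    return (n - 1) * avg_stream_kbps

def terminal_decode_power(n, q_per_stream):
    # Q: average power consumed to decode one video stream.
    return (n - 1) * q_per_stream

# Example: two concurrent calls with 3 and 8 participants, 300 kbps per stream.
print(server_downlink_kbps([3, 8], 300))   # (3*2 + 8*7) * 300 = 18600 kbps
print(terminal_downlink_kbps(8, 300))      # 2100 kbps at each 8-party terminal
```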
It can be seen that, in the existing screen-mixing technology, on the one hand the video server must transmit to every user the video data of all other participants in the call, which increases the bandwidth required for video data transmission; on the other hand, every user terminal must mix the screens itself after receiving the multiple video streams sent by the video server, which increases the terminal's transmission bandwidth demand as well as its consumption of processing resources and power.
Summary of the invention
The embodiments of the present invention provide a multi-party video data mixed-screen implementation method, device, system and mixed-screen server, in order to perform the mixing of multi-party video data at the server end, thereby reducing the bandwidth the mixed-screen server needs when transmitting video data to the terminals, and at the same time reducing the processing resources and power consumed when the terminals process the video data.
An embodiment of the present invention provides a multi-party video data mixed-screen implementation method, comprising:
receiving the video image data sent by each user terminal participating in a multi-party video call;
drawing each piece of video image data onto one of the video image data drawing areas contained in a predefined memory map;
and, according to the video display area information of each user terminal, sending to the corresponding user terminal the memory map corresponding to that video display area information, the video display area information indicating the video image data to be displayed by that user terminal.
A rectangle RECT is used to represent the area information of each piece of video image data in the memory map, where the RECT comprises LEFT, the abscissa of the upper-left corner of the rectangle; TOP, the ordinate of the upper-left corner; RIGHT, the abscissa of the lower-right corner; and BOTTOM, the ordinate of the lower-right corner. The area information of each piece of video image data in the memory map is determined as follows:
establishing an image index for each piece of video image data;
for each piece of video image data, judging whether the user index corresponding to that video image data is greater than the number of users, participating in the video call, that one screen of the user terminal can display;
if so, determining LEFT according to the formula LEFT = UI % 2 * DW;
if not, determining LEFT according to the formula LEFT = UI % 2 * DW + SW / 2;
for each piece of video image data, judging whether the image index corresponding to that video data lies in the upper region of the memory map;
if so, determining TOP according to the formula TOP = SH;
if not, determining TOP according to the formula TOP = SH / 2;
for each piece of video image data, determining RIGHT and BOTTOM according to the formulas:
RIGHT = LEFT + DW;
BOTTOM = TOP − DH; where:
UI is the image index corresponding to each piece of video image data;
DW is the width of each piece of video image data;
DH is the height of each piece of video image data;
SW is the width of the memory map;
SH is the height of the memory map.
Before the memory map corresponding to the video display area information is sent to the corresponding user terminal, the method further comprises:
dividing the memory map according to the preset number of users, participating in the video call, that each screen of the user terminal displays;
processing each partial memory map obtained by the division to the video image size the user terminal supports for display.
Sending, according to the video display area information of each user terminal, the memory map corresponding to the video display area information to the corresponding user terminal specifically comprises:
according to the video display area information of each user terminal, encapsulating the memory map corresponding to that video display area information into Real-time Transport Protocol (RTP) data packets and sending them to the corresponding user terminal; and
during transmission of the RTP data packets, using the Real-time Transport Control Protocol (RTCP) to control packet transmission.
When the memory map corresponding to the video display area information is encapsulated into RTP data packets, at least one of the following conditions is met:
the size of the maximum transmission unit (MTU) of an RTP packet does not exceed a preset value; the video image data in any RTP packet contained in the RTP data packets is not decoded; the data type in an RTP data packet can be detected without decoding the whole data stream; splitting one network abstraction layer unit (NALU) into multiple RTP packets is supported; aggregating multiple NALUs into one RTP packet is supported.
The method further comprises:
receiving a video display region switching instruction sent by any user terminal, the switching instruction carrying at least one piece of video display area information to be switched to;
in response to the video display region switching instruction, sending the memory map corresponding to that video display area information to this user terminal.
An embodiment of the present invention provides a multi-party video data mixed-screen implementation device, comprising:
a user data processing unit, configured to receive the video image data sent by each user terminal participating in a multi-party video call, and, according to the video display area information of each user terminal, to send to the corresponding user terminal the memory map corresponding to that video display area information, the video display area information indicating the video image data to be displayed by that user terminal;
a mixed-screen unit, configured to draw each piece of video image data onto one of the video image data drawing areas contained in a predefined memory map.
The mixed-screen unit comprises:
an audio filter, configured to, before the user data processing unit sends the memory map corresponding to the video display area information to the corresponding user terminal, divide the memory map according to the preset number of users, participating in the video call, that each screen of the user terminal displays, and to process each partial memory map obtained by the division to the video image size the user terminal supports for display.
The device further comprises:
an RTP/RTCP protocol stack, configured to, according to the video display area information of each user terminal, encapsulate the memory map corresponding to that video display area information into Real-time Transport Protocol (RTP) data packets and send them to the corresponding user terminal, and to use the Real-time Transport Control Protocol (RTCP) to control packet transmission while the RTP data packets are being sent.
The mixed-screen unit is further configured to receive a video display region switching instruction sent by any user terminal, the switching instruction carrying at least one piece of video display area information to be switched to, and, in response to the instruction, to send the memory map corresponding to that video display area information to this user terminal.
An embodiment of the present invention provides a mixed-screen server comprising the above multi-party video data mixed-screen implementation device.
An embodiment of the present invention provides a multi-party video data mixed-screen realization system comprising at least two user terminals and a mixed-screen server, wherein:
the user terminal is configured to send video image data to the mixed-screen server;
the mixed-screen server is configured to draw the video image data sent by each user terminal onto one of the video image data drawing areas contained in a predefined memory map, and, according to the video display area information of each user terminal, to send to the corresponding user terminal the memory map corresponding to that video display area information, the video display area information indicating the video image data to be displayed by that user terminal.
The mixed-screen server is further configured to, before sending the memory map corresponding to the video display area information to the corresponding user terminal, divide the memory map according to the preset number of users, participating in the video call, that each screen of the user terminal displays, and to process each partial memory map obtained by the division to the video image size the user terminal supports for display.
The mixed-screen server is specifically configured to, according to the video display area information of each user terminal, encapsulate the memory map corresponding to that video display area information into Real-time Transport Protocol (RTP) data packets and send them to the corresponding user terminal, and to use the Real-time Transport Control Protocol (RTCP) to control packet transmission while the RTP data packets are being sent.
The user terminal is further configured to send a video display region switching instruction to the mixed-screen server, the switching instruction carrying at least one piece of video display area information to be switched to;
the mixed-screen server is further configured to, in response to the video display region switching instruction, send the memory map corresponding to that video display area information to the user terminal.
With the multi-party video data mixed-screen implementation method, device, system and mixed-screen server provided by the embodiments of the present invention, each user terminal participating in the multi-party video call sends the video image data it captures to the mixed-screen server; the mixed-screen server draws each piece of received video image data onto one of the areas contained in the predefined memory map, and, according to the area information corresponding to the video image data each user terminal wishes to display, sends the corresponding memory map to that user terminal. Because the screen mixing of the video image data is carried out at the server end and the already-mixed video image data is delivered according to the display demand of each user terminal, the bandwidth the server needs when transmitting video image data to the user terminals is reduced; and because the video image data a user terminal receives has already been mixed, the terminal does not need to perform the screen-mixing operation itself, which reduces the processing resources and power it consumes.
Other features and advantages of the present invention will be set forth in the description that follows, will in part be apparent from the description, or may be learned by practice of the present invention. The objectives and other advantages of the present invention can be realized and attained by the structures particularly pointed out in the written description, the claims and the accompanying drawings.
Description of the drawings
The accompanying drawings described herein are provided for a further understanding of the present invention and constitute a part of the present invention; the exemplary embodiments of the present invention and their description are used to explain the present invention and do not unduly limit it. In the drawings:
Fig. 1 is a schematic diagram of network-side video data processing in the prior art;
Fig. 2 is a schematic diagram of a user terminal processing received video data in the prior art;
Fig. 3 is a schematic diagram of the server processing received video data in an embodiment of the present invention;
Fig. 4 is a schematic diagram of a user terminal processing received video data in an embodiment of the present invention;
Fig. 5a is a schematic diagram of the first memory map format in an embodiment of the present invention;
Fig. 5b is a schematic diagram of the second memory map format in an embodiment of the present invention;
Fig. 5c is a schematic diagram of the first memory map displayed on a terminal in an embodiment of the present invention;
Fig. 6 is a flow diagram of the multi-party video data mixed-screen implementation method in an embodiment of the present invention;
Fig. 7 is a flow diagram of determining the area information corresponding to a drawing area in an embodiment of the present invention;
Fig. 8 is a schematic diagram of memory map cutting in an embodiment of the present invention;
Fig. 9 is a schematic diagram of the system architecture of the mixed-screen server in an embodiment of the present invention;
Fig. 10 is a schematic diagram of the video image data processing flow in an embodiment of the present invention;
Fig. 11 is a schematic diagram of the structure of the user data processing unit in an embodiment of the present invention;
Fig. 12 is a schematic diagram of the structure of the video synthesizer in an embodiment of the present invention;
Fig. 13 is a schematic diagram of the structure of the multi-party video data mixed-screen implementation device in an embodiment of the present invention;
Fig. 14 is a schematic diagram of the structure of the multi-party video data mixed-screen realization system in an embodiment of the present invention.
Detailed description of the embodiments
In order to reduce the bandwidth needed by the server to transmit video image data to the user terminals during a multi-party video call, and at the same time to reduce the processing resources and power consumed when the terminals process the video image data, the embodiments of the present invention provide a multi-party video data mixed-screen implementation method, device, system and mixed-screen server.
The preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described here are only used to illustrate and explain the present invention and are not intended to limit it, and that, where no conflict arises, the embodiments of the present invention and the features in the embodiments may be combined with one another.
An embodiment of the present invention provides a method of mixing video image data on the server side, i.e. the server performs the screen mixing and then delivers the result to the user terminals according to the different demands of each user terminal. As shown in Fig. 3, a schematic diagram of the server processing the received video data, after the server receives the video image data sent by each user terminal (in a specific implementation, the RTP/RTCP protocols are used for transmitting video image data between user terminal and server), it first decodes the data and then directly performs the screen-mixing operation; after mixing, the server dynamically delivers the mixed screen image data to each user terminal according to the demand of that terminal. On this basis, in the embodiment of the present invention the bandwidth consumed by the server's downlink transmission is as follows, where m is the number of calls and n is the number of participants in a call:
BW = Σ_{0}^{m} ( n · Avg(Kbps) )
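As a purely illustrative comparison (the helper names and figures below are assumptions), the following sketch contrasts the server downlink bandwidth of the conventional forwarding scheme with that of the mixed-screen scheme just described:

```python
# Conventional forwarding sends n*(n-1) streams per call; the mixed-screen
# scheme sends one composited stream to each of the n participants.

def conventional_bw(calls, avg_kbps):
    return sum(n * (n - 1) for n in calls) * avg_kbps

def mixed_screen_bw(calls, avg_kbps):
    return sum(calls) * avg_kbps

calls = [8]                          # one 8-party call
print(conventional_bw(calls, 300))   # 16800 kbps
print(mixed_screen_bw(calls, 300))   #  2400 kbps
```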
In order to perform the mixing of the multi-party video data on the mixed-screen server side, in the embodiment of the present invention the mixed-screen server needs to pre-define the memory map required for the mixing. Preferably, as shown in Fig. 5a, the memory map of the first format provided by the embodiment of the present invention is defined as a grid layout: the video image data captured by the user terminal of each participant in the video call is drawn onto the defined memory map, and in a specific implementation split-screen display can be defined according to the display needs of the user terminal, i.e. the first screen shows 4 participants, the second screen shows 4 participants, and so on. The memory map shown in Fig. 5a contains 8 video image data drawing areas in total, denoted by sequence numbers 1 to 8, and can support 8 participants in the video call at the same time; the specific implementation is not limited to this.
It should be noted that the above is only a preferred definition format of the memory map; in a specific implementation it can be defined as required. As shown in Fig. 5b, another possible memory map format, each screen can be defined to display 3 participants; of course each page can also be defined to display 2, 5 or 6 participants, etc., or to display a single participant alone. The embodiment of the present invention does not limit this, as long as the memory map format meets the display demand of the user terminal.
Fig. 5c is a schematic diagram of the display effect, in the embodiment of the present invention, of a user terminal showing 4 participants per screen and showing 1 participant per screen.
Preferably, in order to guarantee the definition of the image displayed on the user terminal, in the embodiment of the present invention the memory map defined by the mixed-screen server is drawn at a 1:1 ratio with the video image captured by the user terminal during mixing; in this way, if a user terminal needs to view the video image of a single user, the original definition can be preserved.
It should be noted that the original image size differs according to the video image pixels captured by the user terminal. The video image captured by an existing user terminal is generally 320*240, so the embodiment of the present invention takes a captured video image of 320*240 for each terminal as an example. In a specific implementation, if the video images captured by the user terminals differ in size, they also need to be converted to a standard size before the screen-mixing operation (which can be set as desired — the embodiment of the present invention does not limit this; it may, for example, be set to 320*240).
Fig. 6 is a flow diagram of the multi-party video data mixed-screen implementation method provided by the embodiment of the present invention, which comprises the following steps:
S61: receiving the video image data sent by each user terminal participating in the multi-party video call.
In a specific implementation, each user terminal participating in the multi-party video call captures video image data and sends the captured video image data to the mixed-screen server.
S62: drawing each piece of video image data onto one of the video image data drawing areas contained in the predefined memory map.
In a specific implementation, the mixed-screen server establishes an image index for each piece of video image data according to the time order in which the video image data is received, for example starting from 0 and counting up in order of arrival; supposing 8 participants take part in the video call, the image indexes can be established in the order 0, 1, 2, ..., 7. Of course, in a specific implementation the image indexes may also be established at random; the present invention does not limit this.
Each received piece of video image data is drawn onto the predefined memory map; in the memory map shown in Fig. 5a, the video image data with image indexes 0 to 7 is drawn.
Preferably, in the embodiment of the present invention, a rectangle RECT can be used to represent the area information of each piece of video image data in the memory map, where the RECT comprises LEFT, the abscissa of the upper-left corner of the rectangle; TOP, the ordinate of the upper-left corner; RIGHT, the abscissa of the lower-right corner; and BOTTOM, the ordinate of the lower-right corner.
Thus, for each drawing area contained in the memory map, the corresponding area information can be determined according to the steps shown in Fig. 7 (a code sketch follows the worked example below):
S621: for each piece of video image data, judge whether the user index corresponding to that video image data is greater than the number of users, participating in the video call, that one screen of the user terminal can display; if so, perform step S622, otherwise perform step S623.
To make the example easier to follow, the memory map shown in Fig. 5a is used for illustration: the drawing areas with sequence numbers 1 to 8 in Fig. 5a hold the video image data with image indexes 0 to 7 respectively, and the preset number of participants one screen of the user terminal can display is assumed to be 4, i.e. each screen shows the video image data of 4 participants.
S622: determine LEFT according to the formula LEFT = UI % 2 * DW, and perform step S624.
For example, for the video image data with image indexes 0, 1, 2 and 3, step S622 gives: when the image index is 0 or 2, the corresponding LEFT is 0; when the image index is 1 or 3, the corresponding LEFT is 320.
S623: determine LEFT according to the formula LEFT = UI % 2 * DW + SW / 2.
For example, for the video image data with image indexes 4, 5, 6 and 7, step S623 gives: when the image index is 4 or 6, the corresponding LEFT is 0 + 640 = 640; when the image index is 5 or 7, the corresponding LEFT is 320 + 640 = 960.
S624: for each piece of video image data, judge whether the image index corresponding to that video data lies in the upper region of the memory map; if so, perform step S625, otherwise perform step S626.
In a specific implementation, the memory map shown in Fig. 5a comprises an upper and a lower region, and the ordinate of the upper-left corner of each drawing area (and of the area information corresponding to the video image data placed on that drawing area) can be determined according to step S625 or step S626.
S625: determine TOP according to the formula TOP = SH, and perform step S627.
In a specific implementation, for each piece of video image data located in the upper region, the ordinate of its upper-left corner is the height SH of the memory map; as shown in Fig. 5a, SH = 480.
S626: determine TOP according to the formula TOP = SH / 2.
In a specific implementation, for each piece of video image data located in the lower region, the ordinate of its upper-left corner is half the height of the memory map, SH / 2; as shown in Fig. 5a, SH / 2 = 480 / 2 = 240.
S627: for each piece of video image data, determine RIGHT according to the formula RIGHT = LEFT + DW.
Accordingly, the abscissa of the lower-right corner of each drawing area is LEFT + DW; as shown in Fig. 5a, DW is 320.
S628: for each piece of video image data, determine BOTTOM according to the formula BOTTOM = TOP − DH.
Accordingly, the ordinate of the lower-right corner of each drawing area is TOP − DH; as shown in Fig. 5a, DH is 240.
Here UI is the image index corresponding to each piece of video image data; DW is the width of each piece of video image data (for Fig. 5a, 320); DH is the height of each piece of video image data (for Fig. 5a, 240); SW is the width of the predefined memory map (for Fig. 5a, 320 * 4 = 1280); SH is the height of the predefined memory map (for Fig. 5a, 240 * 2 = 480).
In Fig. 5a, the video image data with image index 0 (sequence number 1 in Fig. 5a) lies in the upper region of the memory map, and its RECT can be expressed as (0, 480, 320, 240); the video image data with image index 1 (sequence number 2 in the memory map) has RECT (320, 480, 640, 240); the video image data with image index 2 (sequence number 3) has RECT (0, 240, 320, 0); and the video image data with image index 3 (sequence number 4) has RECT (320, 240, 640, 0).
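The following minimal Python sketch is an illustration of this layout rule for the Fig. 5a memory map (two 2x2 screens, 320x240 per image, y axis pointing up); it is not taken from the patent, and the branch test is written to match the worked example, i.e. indexes on the first screen use LEFT = UI % 2 * DW while later screens add SW / 2.

```python
import numpy as np

DW, DH = 320, 240          # per-image width/height
SW, SH = 4 * DW, 2 * DH    # memory map: 1280 x 480
USERS_PER_SCREEN = 4

def rect_for(ui):
    left = ui % 2 * DW if ui < USERS_PER_SCREEN else ui % 2 * DW + SW // 2
    upper = (ui % USERS_PER_SCREEN) < 2        # indexes 0 and 1 of each screen sit in the upper row
    top = SH if upper else SH // 2
    return (left, top, left + DW, top - DH)    # (LEFT, TOP, RIGHT, BOTTOM)

# Draw each 320x240 frame into the memory map (array row 0 = top of the picture).
memory_map = np.zeros((SH, SW, 3), dtype=np.uint8)

def draw(ui, frame_320x240):
    left, top, right, bottom = rect_for(ui)
    memory_map[SH - top:SH - bottom, left:right] = frame_320x240

for ui in range(8):
    print(ui, rect_for(ui))
# 0 (0, 480, 320, 240)   1 (320, 480, 640, 240)
# 2 (0, 240, 320, 0)     3 (320, 240, 640, 0)   ... indexes 4-7 are shifted right by 640
```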
S63: according to the video display area information of each user terminal, send to the corresponding user terminal the memory map corresponding to that video display area information.
The video display area information indicates the video image data to be displayed by that user terminal.
In a specific implementation, at the initial moment the mixed-screen server by default takes the video area each user terminal needs to display to be the video image data of all users other than itself; that is, for each user terminal, the video display area information of that terminal is the area information corresponding to the drawing areas in the memory map other than the drawing area holding that terminal's own screen image data. Subsequently, the display region can be switched according to a video display region switching instruction sent by the user terminal.
Preferably, when the memory map is drawn at a 1:1 ratio and each piece of video image data is 320*240, then, taking one screen displaying the video image data of 4 participants as an example, the part of the memory map occupied by the 4 participants is (320*240)*4, which exceeds the video image size the user terminal supports for display (320*240). Therefore, before the memory map corresponding to the video display area information is sent to the corresponding user terminal, the memory map also needs to be divided according to the preset number of users, participating in the video call, that each screen of the user terminal displays, and each partial memory map obtained by the division needs to be processed to the video image size the user terminal supports for display. Taking Fig. 5a as an example, the memory map is divided into two parts: the part with sequence numbers 1 to 4 is shown on the first screen of the user terminal, and the part with sequence numbers 5 to 8 is shown on the second screen. As shown in Fig. 8, when transmission is performed at a resolution of 320*240 and the user terminal is to display the left figure containing the areas of 4 users, the region whose coordinates in the original memory map are RECT (0, 480, 640, 0) (i.e. the drawing areas with sequence numbers 1 to 4) is cut down to 320*240 and then sent to the user terminal for display, as sketched below.
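A minimal sketch of this cropping-and-scaling step, under assumed names and using OpenCV only as one possible scaler (the patent does not prescribe a particular library):

```python
import cv2            # used only for resize; any scaler would do
import numpy as np

def crop_and_scale(memory_map, screen_index, out_w=320, out_h=240,
                   screen_w=640, screen_h=480):
    x0 = screen_index * screen_w               # screen 0: areas 1-4, screen 1: areas 5-8
    screen = memory_map[:, x0:x0 + screen_w]   # crop, e.g. RECT (0, 480, 640, 0) for screen 0
    return cv2.resize(screen, (out_w, out_h))  # downscale to the size the terminal supports

memory_map = np.zeros((480, 1280, 3), dtype=np.uint8)
first_screen = crop_and_scale(memory_map, 0)   # what a terminal viewing the left screen receives
print(first_screen.shape)                      # (240, 320, 3)
```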
Preferably, in the embodiment of the present invention, the RTP/RTCP protocols can be (but are not limited to being) used when transmitting video image data to and from the user terminals, both before and after mixing. That is, after the mixing is completed, the mixed-screen server can, according to the video display area information of each user terminal, encapsulate the memory map corresponding to that video display area information into Real-time Transport Protocol (RTP) data packets and send them to the corresponding user terminal, and use the Real-time Transport Control Protocol (RTCP) to control packet transmission while the RTP data packets are being sent.
More preferably, in the embodiment of the present invention, at least one of the following conditions is met when the memory map corresponding to the video display area information is encapsulated into RTP data packets: the size of the maximum transmission unit (MTU) of an RTP packet does not exceed a preset value; the video image data in any RTP packet contained in the RTP data packets is not decoded; the data type in an RTP data packet can be detected without decoding the whole data stream; splitting one network abstraction layer unit (NALU) into multiple RTP packets is supported; aggregating multiple NALUs into one RTP packet is supported.
In a specific implementation, during the multi-party video call a user terminal can also send a video display region switching instruction to the mixed-screen server, the instruction carrying at least one piece of video display area information to be switched to; in response to the video display region switching instruction, the memory map corresponding to that video display area information is sent to this user terminal for display. For example, when a user in a call wishes to display the video image data of one participant alone, the user can touch that participant's picture region; the user terminal identifies the area information corresponding to that region, carries it in a video display region switching instruction and sends it to the mixed-screen server, and the mixed-screen server sends the memory map corresponding to that area information to the user terminal for display.
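A hedged sketch of how the server might react to such a switching instruction; the message fields and handler names are illustrative assumptions, not taken from the patent:

```python
def handle_switch_instruction(user_id, regions, user_state, send_fn):
    """regions: list of RECTs the terminal now wants to watch
    (e.g. a single participant's area after the user taps that picture)."""
    user_state[user_id] = regions            # update the user status buffer
    for rect in regions:
        send_fn(user_id, rect)               # push the matching part of the memory map

user_state = {}
handle_switch_instruction("user-3", [(0, 480, 320, 240)], user_state,
                          lambda uid, rect: print("send", rect, "to", uid))
```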
For a better understanding of the present invention, the specific implementation of the embodiment is described below with reference to the system architecture of the mixed-screen server. As shown in Fig. 9, the mixed-screen server mainly consists of five parts: NetStack, the RTP/RTCP protocol stack, the codec, the user data processing unit and the mixed-screen unit, where the mixed-screen unit comprises a mixed-screen timer, a video synthesizer, an audio filter, a video distributor, a user video buffer area and a user status buffer. These parts cooperate to handle the arrival, mixing and delivery of the video. Each part is described in turn below.
After the video image data captured by each user terminal arrives at the mixed-screen server, the video image data is first RTP-unpacked and, after video decoding, submitted to the user video buffer area; the mixed-screen timer is responsible for starting the mixing operation, and after the mixing is completed the video image data of the different regions of the memory map is sent to the user terminals according to the user status buffer. Each part is described in detail below.
NetStack: NetStack provides the interworking function for UDP (User Datagram Protocol) data. To cope with the complexity of the Internet environment, NetStack provides ICE negotiation and supports the TURN and STUN procedures, which guarantees network interoperability and makes the interworking of video image data feasible.
RTP/RTCP protocol stack: the Real-time Transport Protocol (RTP) is a transport protocol for multimedia data streams on the Internet. It is defined for one-to-one or one-to-many transmission, and its purpose is to provide timing information and to synchronize streams. RTP itself only carries the real-time data; it cannot guarantee in-order delivery of the packets and provides neither flow control nor congestion control, relying on RTCP for these services. The Real-time Transport Control Protocol (RTCP) is responsible for managing the transmission quality and exchanging control information between the current application processes. During an RTP session, each participant periodically sends RTCP packets containing statistics such as the number of packets sent and the number of packets lost; the server can therefore use this information to dynamically change the transmission rate or even the payload type. RTP and RTCP used together optimize transmission efficiency through effective feedback with minimal overhead.
The capabilities RTCP provides are used as follows: the feedback information describes the delivery quality of the distributed data and can be used for flow congestion control as well as for monitoring the network and diagnosing network problems; RTCP provides a persistent transport-layer identifier, the CNAME (canonical name), for an RTP source, because the SSRC (synchronization source) may change when a conflict is found or when a program restarts after an update and a trace of operation is needed, and in a group of related sessions the receiver also uses the CNAME to associate the data streams (e.g. audio and video) coming from a given participant; the transmission rate of RTCP packets is adjusted according to the number of participants; and session control information is conveyed.
Fig. 10 is a schematic diagram of the video image data handling flow in the embodiment of the present invention. At the video image data sending end, the video image data is first encoded and encapsulated into RTP, then packed according to the IP/UDP protocols into packets suitable for network transmission and transmitted over the Internet. RTCP is used together with the RTP data protocol: when an RTP session is started, two ports are occupied at the same time, one for RTP and one for RTCP. Because RTP itself cannot guarantee in-order delivery of the packets and provides neither flow control nor congestion control, RTCP takes on these responsibilities. RTCP can adopt the same distribution mechanism as RTP and periodically sends control information to all members of the session; by receiving this data, the program obtains information about the session participants and submits feedback such as network conditions and packet-loss probability to the QoS (quality of service) feedback system, so that the service quality can be controlled or the network condition diagnosed. Correspondingly, at the video image data receiving end, the video image data is processed and decoded following the reverse of the above process. Here SR denotes a sender report, referring to an application program or terminal that sends RTP data packets (a sender may also be a receiver at the same time), and RR denotes a receiver report, referring to an application program or terminal that only receives and does not send RTP data packets.
Because the present invention also aims to improve the efficiency of network transmission, designing a suitable RTP assembly strategy for encapsulating the video image data is important. Preferably, in the embodiment of the present invention at least one of the following design principles can be followed: 1. keep the RTP encapsulation overhead low, so the size of the MTU (maximum transmission unit) does not exceed a preset value, e.g. it can be (but is not limited to being) kept within the range of 100 bytes to 64 K bytes (as small as possible); 2. the importance of a packet can easily be distinguished without decoding the data in the packet; 3. the type of the data can be detected without decoding the whole data stream, and useless packets can be discarded according to the correlations between the encoded streams — for example, a gateway should be able to detect the loss of a type-A segment and discard the corresponding type-B and type-C segments; 4. splitting one NALU (network abstraction layer unit) into several RTP packets should be supported, because input pictures of different sizes may produce NALUs longer than the MTU, and only splitting avoids fragmentation at the IP layer during transmission; 5. aggregating multiple NALUs into one RTP packet should be supported, i.e. more than one NALU is carried in a single RTP packet; this mode is considered when the encoded output of several pictures is smaller than the MTU, so as to improve the network transmission efficiency.
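As one concrete illustration of principle 4, the sketch below fragments an H.264 NALU into FU-A units as defined in RFC 6184 so that each RTP payload stays under an assumed MTU; the patent itself does not prescribe this particular payload format, and the constant is an assumption.

```python
MTU_PAYLOAD = 1400  # assumed RTP payload limit, below a typical 1500-byte MTU

def packetize_nalu(nalu: bytes):
    """Yield RTP payloads for one NAL unit: a single-NALU packet if it fits,
    otherwise a sequence of RFC 6184 FU-A fragments."""
    if len(nalu) <= MTU_PAYLOAD:
        yield nalu                                     # single NAL unit packet
        return
    nal_header, body = nalu[0], nalu[1:]
    fu_indicator = (nal_header & 0xE0) | 28            # keep F/NRI bits, type 28 = FU-A
    chunk = MTU_PAYLOAD - 2                            # room for FU indicator + FU header
    pieces = [body[i:i + chunk] for i in range(0, len(body), chunk)]
    for i, piece in enumerate(pieces):
        fu_header = nal_header & 0x1F                  # original NAL unit type
        if i == 0:
            fu_header |= 0x80                          # start bit
        if i == len(pieces) - 1:
            fu_header |= 0x40                          # end bit
        yield bytes([fu_indicator, fu_header]) + piece
```

Principle 5 corresponds to the complementary STAP-A aggregation mode of the same RFC, where several small NALUs are carried in one payload, each prefixed by its 2-byte length.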
Video codec: as digital multimedia applications become more and more widespread, video decoding becomes a basic element of the system. There are multiple video standards, and depending on the product one or several of them may be implemented. Video is of course only one part of the multimedia stream; audio or speech also needs to be processed in parallel. Video decoding itself places high demands on performance and requires a system architecture different from earlier voice- and data-oriented applications; this poses a particular challenge for portable systems, and desktop applications face the same problems. Therefore, to reduce the additional consumption on the terminal, server-side encoding and decoding is adopted.
To support the terminal requirements in the server-side mixing, various video codecs can be adopted (for example H264-BP, H264-MP, VP8, H263, H263+, MP4V-ES, etc.).
During data transmission the codec is used together with the RTP/RTCP protocol stack to perform flow control and fill control and to guarantee the image quality.
User data processing unit: the user data processing unit is responsible for delivering the decoded video image data to the user video buffer area and for delivering the mixed video image data to the users; it also determines the frame rate of the video image data sent downstream to each user terminal.
Fig. 11 is a schematic diagram of the structure of the user data processing unit in the embodiment of the present invention, which comprises a buffer area, a user area and a timer. Buffer area: used to store the video image data to be delivered to the user terminals; this buffer reserves the space of one picture for each user terminal. User area (User): used to store the information of the session participants; the received video image data is delivered to the mixed-screen unit according to this participant information. Timer: used to start the sending of video image data; this timer determines the frame rate at which the mixed-screen server pushes video image data to the user terminals, and in a specific implementation the interval can, for example, be (but is not limited to being) set to T (ms) = 1000 / Fps.
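A small sketch of such a per-user send timer, with assumed names and a fixed number of iterations just for illustration; the interval follows the T = 1000 / Fps rule above.

```python
import time

def push_loop(fps, push_frame, frames=3):
    interval = 1.0 / fps                      # T(ms) = 1000 / Fps
    for _ in range(frames):
        push_frame()                          # deliver the next buffered picture downstream
        time.sleep(interval)

push_loop(15, lambda: print("push mixed frame"))   # 15 fps -> one push every ~66.7 ms
```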
Mixed-screen unit: the mixed-screen timer is responsible for starting the mixing operation; it takes the video image data from the user buffer areas and delivers it to the video synthesizer, which composes the whole video memory map. With reference to the interval of the timer set in the user data processing unit, the interval of the mixed-screen timer can be (but is not limited to being) the interval of the maximum frame rate among the video image data of all user terminals: T (ms) = 1000 / MAX(Fps). User video buffer area: used to store the predefined memory map and the video memory map of each user. Video synthesizer: used to draw the video image data of each user onto the memory map; it is composed of a series of filters, and the drawing is completed by calling ffmpeg. Fig. 12 is a schematic diagram of the structure of the video synthesizer, which comprises a ButterFilter, a ScaleFilter, a BorderFilter and a ButterFilter.
In a specific implementation, to guarantee the video quality when the user terminal displays a single video image, the per-user data area in memory is kept at the source video size as far as possible. The particular position of each user in the memory map is obtained by calculation, which can follow the flow shown in Fig. 7.
User status buffer: it implements the dynamic updating of the mixing, so that a user terminal can display different regions of the video according to its own demand. The user operations are divided into: upgrading to video (only video image data needs to be transmitted), downgrading to audio (only audio data needs to be transmitted), viewing the left screen (the memory map corresponding to the first screen needs to be transmitted to the user terminal for display), viewing the right screen (the memory map corresponding to the next screen needs to be transmitted to the user terminal for display), and viewing a designated person (a single piece of video image data needs to be transmitted). A sketch of these states follows below.
In a specific implementation, after a user terminal participating in the multi-party video call joins or leaves, the user status buffer performs the corresponding update of the user video buffer area, and the newly joined or departed user is notified to each user terminal.
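The enum values below mirror the operations listed above; the handler itself and all names are illustrative assumptions, not part of the patent.

```python
from enum import Enum

class ViewState(Enum):
    VIDEO = 1          # upgrade: send composited video
    AUDIO_ONLY = 2     # downgrade: send audio only
    LEFT_SCREEN = 3    # first screen of the memory map
    RIGHT_SCREEN = 4   # next screen of the memory map
    SINGLE_USER = 5    # one participant's picture only

def update_user_state(status_buffer, user_id, state, target=None):
    status_buffer[user_id] = (state, target)   # target: which user/screen to show

status = {}
update_user_state(status, "user-1", ViewState.SINGLE_USER, target="user-4")
```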
Audio filter: it cuts out, according to the user status buffer, the different regions of the part the user terminal wishes to display. Its specific operation can be seen in Fig. 8.
Video distributor: it is responsible for pushing the mixed video image data to each user data processing unit and performs the delivery of the video image data.
With the method of mixing multi-party video data on the server side provided by the embodiment of the present invention, the mixed-screen effect displayed by the user terminals can be controlled by the server. Compared with the traditional scheme of mixing at the terminal side, the terminal only needs to decode and display after receiving the video image data: as shown in Fig. 4, a schematic diagram of a user terminal processing the received video data in the embodiment of the present invention, after receiving the RTP/RTCP data packets sent by the server, the user terminal only needs to decode them and output the result. The server side delivers different regions according to the real needs of the user terminal (i.e. the video image data of the subset of call participants the user terminal wishes to display). Therefore, with the method provided by the embodiment of the present invention, a user terminal is effectively in a two-party video call and incurs no extra bandwidth, CPU or power consumption, while the mixed-screen server only needs to transmit one mixed screen image stream to each user terminal, which reduces the demand on transmission bandwidth.
Based on the same inventive concept, the embodiments of the present invention also provide a multi-party video data mixed-screen implementation device, system and mixed-screen server. Because the principle by which the device, system and equipment solve the problem is similar to that of the multi-party video data mixed-screen implementation method, their implementation can refer to the implementation of the method, and the repeated parts are not described again.
Fig. 13 is a schematic diagram of the structure of the multi-party video data mixed-screen implementation device provided by the embodiment of the present invention, which comprises:
a user data processing unit 131, configured to receive the video image data sent by each user terminal participating in the multi-party video call, and, according to the video display area information of each user terminal, to send to the corresponding user terminal the memory map corresponding to that video display area information, the video display area information indicating the video image data to be displayed by that user terminal;
a mixed-screen unit 132, configured to draw each piece of video image data onto one of the video image data drawing areas contained in the predefined memory map.
The mixed-screen unit 132 comprises an audio filter, configured to, before the user data processing unit sends the memory map corresponding to the video display area information to the corresponding user terminal, divide the memory map according to the preset number of users, participating in the video call, that each screen of the user terminal displays, and to process each partial memory map obtained by the division to the video image size the user terminal supports for display.
In a specific implementation, the multi-party video data mixed-screen implementation device provided by the embodiment of the present invention can further comprise:
an RTP/RTCP protocol stack, configured to, according to the video display area information of each user terminal, encapsulate the memory map corresponding to that video display area information into Real-time Transport Protocol (RTP) data packets and send them to the corresponding user terminal, and to use the Real-time Transport Control Protocol (RTCP) to control packet transmission while the RTP data packets are being sent.
In a specific implementation, the mixed-screen unit 132 can also be configured to receive a video display region switching instruction sent by any user terminal, the switching instruction carrying at least one piece of video display area information to be switched to, and, in response to the instruction, to send the memory map corresponding to that video display area information to this user terminal.
For convenience of description, the above parts are divided into modules (or units) according to their functions and described separately. Of course, when implementing the present invention the functions of the modules (or units) may be realized in one or more pieces of software or hardware; the multi-party video data mixed-screen implementation device described above may be arranged in the mixed-screen server.
Fig. 14 is a schematic diagram of the structure of the multi-party video data mixed-screen realization system provided by the embodiment of the present invention, which comprises at least two user terminals 141 and a mixed-screen server 142, wherein:
the user terminal 141 is configured to send video image data to the mixed-screen server 142;
the mixed-screen server 142 is configured to draw the video image data sent by each user terminal 141 onto one of the video image data drawing areas contained in the predefined memory map, and, according to the video display area information of each user terminal 141, to send to the corresponding user terminal 141 the memory map corresponding to that video display area information, the video display area information indicating the video image data to be displayed by the user terminal 141.
In a specific implementation, the mixed-screen server 142 can further be configured to, before sending the memory map corresponding to the video display area information to the corresponding user terminal 141, divide the memory map according to the preset number of users, participating in the video call, that each screen of the user terminal 141 displays, and to process each partial memory map obtained by the division to the video image size the user terminal 141 supports for display.
In a specific implementation, the mixed-screen server 142 can also be configured to, according to the video display area information of each user terminal 141, encapsulate the memory map corresponding to that video display area information into Real-time Transport Protocol (RTP) data packets and send them to the corresponding user terminal 141, and to use the Real-time Transport Control Protocol (RTCP) to control packet transmission while the RTP data packets are being sent.
In a specific implementation, the user terminal 141 can also be configured to send a video display region switching instruction to the mixed-screen server 142, the switching instruction carrying at least one piece of video display area information to be switched to; the mixed-screen server 142 can also be configured to, in response to the video display region switching instruction, send the memory map corresponding to that video display area information to the user terminal 141.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a system or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Furthermore, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a specific way, so that the instructions stored in this computer-readable memory produce an article of manufacture including an instruction device that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a sequence of operational steps is performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art can make further changes and modifications to these embodiments once they learn the basic inventive concept. The appended claims are therefore intended to be interpreted as covering the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, those skilled in the art can make various changes and variations to the present invention without departing from its spirit and scope. If these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to cover them.

Claims (15)

1. A multi-party video data mixed-screen implementation method, characterized in that it comprises:
receiving the video image data sent by each user terminal participating in a multi-party video call;
drawing each piece of video image data onto one of the video image data drawing areas contained in a predefined memory map;
and, according to the video display area information of each user terminal, sending to the corresponding user terminal the memory map corresponding to that video display area information, the video display area information being used to indicate the video image data to be displayed by that user terminal.
2. the method for claim 1, it is characterized in that, rectangle RECT is used to represent the area information of each vedio data in described memory map, wherein, described RECT comprises the LEFT of the abscissa representing the rectangle upper left corner, represent the ordinate TOP in the rectangle upper left corner, the BOTTOM of the abscissa RIGHT representing the rectangle lower right corner and the ordinate representing the rectangle lower right corner; And
the area information of each piece of video image data in said memory map is determined as follows:
an image index is established for each piece of video image data;
for each piece of video image data, it is judged whether the user index corresponding to this video image data is greater than the predefined number of users participating in the video call that one screen of the user terminal can display;
if so, LEFT is determined according to the formula LEFT = UI % 2 * DW;
if not, LEFT is determined according to the formula LEFT = UI % 2 * DW + SW / 2;
for each piece of video image data, it is judged whether the image index corresponding to this video image data lies in the upper region of said memory map;
if so, TOP is determined according to the formula TOP = SH;
if not, TOP is determined according to the formula TOP = SH / 2;
for each piece of video image data, RIGHT and BOTTOM are determined according to the formulas:
RIGHT = LEFT + DW;
BOTTOM = TOP - DH; wherein:
UI represents the image index corresponding to each piece of video image data;
DW represents the width of each piece of video image data;
DH represents the height of each piece of video image data;
SW represents the width of said memory map;
SH represents the height of said memory map.
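A minimal Python sketch of the rectangle computation of claim 2, assuming integer pixel units and a coordinate origin at the bottom-left of the memory map (so that BOTTOM = TOP - DH lies below TOP). The claim distinguishes a user index from an image index; for brevity the sketch drives both conditions from the single index ui and an externally supplied in_upper_region flag, so these parameter names are illustrative only.

    from dataclasses import dataclass

    @dataclass
    class Rect:
        left: int
        top: int
        right: int
        bottom: int

    def place_image(ui: int, dw: int, dh: int, sw: int, sh: int,
                    users_per_screen: int, in_upper_region: bool) -> Rect:
        """Compute one video image's drawing rectangle inside the memory map,
        following the formulas of claim 2."""
        # Horizontal placement: depends on whether the index exceeds the number
        # of call participants that one terminal screen can display.
        if ui > users_per_screen:
            left = (ui % 2) * dw
        else:
            left = (ui % 2) * dw + sw // 2
        # Vertical placement: upper half of the memory map or not.
        top = sh if in_upper_region else sh // 2
        return Rect(left=left, top=top, right=left + dw, bottom=top - dh)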
3. The method of claim 1, characterized in that, before sending the memory map corresponding to said video display area information to the corresponding user terminal, the method further comprises:
dividing said memory map according to the preset number of users participating in the video call to be displayed on each screen of the user terminal; and
processing each partial memory map obtained by the division according to the video image size that the user terminal supports for display.
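The following sketch illustrates the division-and-scaling step of claim 3 under stated assumptions: the memory map is held as a Pillow image, one screenful of participants occupies one horizontal band, and target_size is the video image size the terminal reports it can display. The layout and helper names are illustrative, not taken from the claim.

    from PIL import Image  # assumption: the memory map is an in-memory PIL image

    def split_memory_map(memory_map: Image.Image, pages: int,
                         target_size: tuple[int, int]) -> list[Image.Image]:
        """Divide the composited memory map into `pages` horizontal bands (one
        band per screenful of participants) and scale each band to the video
        image size the terminal supports for display."""
        sw, sh = memory_map.size
        band_h = sh // pages
        parts = []
        for i in range(pages):
            band = memory_map.crop((0, i * band_h, sw, (i + 1) * band_h))
            parts.append(band.resize(target_size))
        return parts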
4. The method of claim 1, characterized in that sending, according to the video display area information of each user terminal, the memory map corresponding to said video display area information to the corresponding user terminal respectively specifically comprises:
encapsulating, according to the video display area information of each user terminal, the memory map corresponding to said video display area information into Real-time Transport Protocol (RTP) data packets and sending them to the corresponding user terminal; and
using the Real-time Transport Control Protocol (RTCP) to control packet transmission while the RTP data packets are being sent.
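For orientation only, the sketch below packs one payload behind the 12-byte fixed RTP header of RFC 3550; the dynamic payload type 96 and the function name rtp_packet are assumptions, and the RTCP reporting that the claim uses for transmission control is not shown.

    import struct

    def rtp_packet(payload: bytes, seq: int, timestamp: int, ssrc: int,
                   payload_type: int = 96, marker: bool = False) -> bytes:
        """Prepend the 12-byte fixed RTP header (RFC 3550) to one payload:
        version 2, no padding, no extension, no CSRC entries."""
        byte0 = 0x80                                        # V=2, P=0, X=0, CC=0
        byte1 = (0x80 if marker else 0x00) | (payload_type & 0x7F)
        header = struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                             timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)
        return header + payload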
5. The method of claim 4, characterized in that at least one of the following conditions is met when the memory map corresponding to said video display area information is encapsulated into RTP data packets:
the size of the maximum transmission unit (MTU) of an RTP data packet does not exceed a preset value; the video image data in any RTP packet contained in said RTP data packets is not decoded; the data type in said RTP data packets can be detected without decoding the entire data stream; splitting one network abstraction layer unit (NALU) into multiple RTP packets is supported; and aggregating multiple NALUs into one RTP packet is supported.
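As one way of meeting the MTU and NALU-splitting conditions listed above, the sketch below fragments a single H.264 NALU into FU-A payloads that each fit within max_payload bytes; the RFC 6184 payload format is an assumption, since the claim does not name one. Aggregating several small NALUs into a single STAP-A packet is the mirror-image operation and is omitted for brevity.

    def fragment_nalu(nalu: bytes, max_payload: int) -> list[bytes]:
        """Split one H.264 NALU into FU-A RTP payloads (RFC 6184 style) so that
        no payload exceeds max_payload bytes; small NALUs pass through unchanged."""
        if len(nalu) <= max_payload:
            return [nalu]                          # single NAL unit packet
        header = nalu[0]
        fu_indicator = (header & 0xE0) | 28        # keep F/NRI bits, type 28 = FU-A
        nal_type = header & 0x1F
        body, chunk = nalu[1:], max_payload - 2    # 2 bytes for FU indicator/header
        payloads = []
        for i in range(0, len(body), chunk):
            start = 0x80 if i == 0 else 0x00                 # S bit on the first fragment
            end = 0x40 if i + chunk >= len(body) else 0x00   # E bit on the last fragment
            payloads.append(bytes([fu_indicator, start | end | nal_type])
                            + body[i:i + chunk])
        return payloads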
6. The method of any one of claims 1 to 5, characterized in that the method further comprises:
receiving a video display region switching instruction sent by any user terminal, said video display region switching instruction carrying at least one piece of video display area information to be switched to; and
in response to said video display region switching instruction, sending the memory map corresponding to said video display area information to this user terminal.
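A short sketch of how a server might react to the switching instruction of claim 6; the state dictionary display_area, the page store partial_maps and the helper send_memory_map are hypothetical names introduced only for illustration.

    def handle_region_switch(server, terminal_id: str, requested_area: int) -> None:
        """Record the display area a terminal switched to and immediately send it
        the matching partial memory map (cf. the division step of claim 3)."""
        server.display_area[terminal_id] = requested_area
        page = server.partial_maps[requested_area]
        server.send_memory_map(terminal_id, page)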
7. A multi-party video data mixed-screen implementation device, characterized by comprising:
a user data management unit, configured to receive the video image data sent by each user terminal participating in a multi-party video call, and to send, according to the video display area information of each user terminal, the memory map corresponding to said video display area information to the corresponding user terminal respectively, wherein said video display area information is used to indicate the video image data to be displayed by said user terminal; and
a mixed-screen unit, configured to draw each piece of video image data into any video image data drawing region contained in a predefined memory map.
8. The device of claim 7, characterized in that said mixed-screen unit comprises:
an audio filter configured to, before said user data management unit sends the memory map corresponding to said video display area information to the corresponding user terminal, divide said memory map according to the preset number of users participating in the video call to be displayed on each screen of the user terminal, and to process each partial memory map obtained by the division according to the video image size that the user terminal supports for display.
9. The device of claim 7, characterized by further comprising:
an RTP/RTCP protocol stack, configured to encapsulate, according to the video display area information of each user terminal, the memory map corresponding to said video display area information into Real-time Transport Protocol (RTP) data packets and send them to the corresponding user terminal, and to use the Real-time Transport Control Protocol (RTCP) to control packet transmission while the RTP data packets are being sent.
10. The device of claim 7, 8 or 9, characterized in that
said mixed-screen unit is further configured to receive a video display region switching instruction sent by any user terminal, said video display region switching instruction carrying at least one piece of video display area information to be switched to, and, in response to said video display region switching instruction, to send the memory map corresponding to said video display area information to this user terminal.
11. A mixed-screen server, characterized by comprising the device of any one of claims 7 to 10.
12. A multi-party video data mixed-screen implementation system, characterized by comprising at least two user terminals and a mixed-screen server, wherein:
said user terminal is configured to send video image data to said mixed-screen server; and
said mixed-screen server is configured to draw the video image data sent by each user terminal into any video image data drawing region contained in a predefined memory map, and to send, according to the video display area information of each user terminal, the memory map corresponding to said video display area information to the corresponding user terminal respectively, wherein said video display area information is used to indicate the video image data to be displayed by said user terminal.
13. The system of claim 12, characterized in that
said mixed-screen server is further configured to, before sending the memory map corresponding to said video display area information to the corresponding user terminal, divide said memory map according to the preset number of users participating in the video call to be displayed on each screen of the user terminal, and to process each partial memory map obtained by the division according to the video image size that the user terminal supports for display.
14. The system of claim 12, characterized in that
said mixed-screen server is specifically configured to encapsulate, according to the video display area information of each user terminal, the memory map corresponding to said video display area information into Real-time Transport Protocol (RTP) data packets and send them to the corresponding user terminal, and to use the Real-time Transport Control Protocol (RTCP) to control packet transmission while the RTP data packets are being sent.
15. The system of claim 12, 13 or 14, characterized in that
said user terminal is further configured to send a video display region switching instruction to said mixed-screen server, said video display region switching instruction carrying at least one piece of video display area information to be switched to; and
said mixed-screen server is further configured to, in response to said video display region switching instruction, send the memory map corresponding to said video display area information to said user terminal.
CN201410254027.3A 2014-06-09 2014-06-09 Multiparty video data fusion realization method, device, system and fusion server Pending CN105282477A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410254027.3A CN105282477A (en) 2014-06-09 2014-06-09 Multiparty video data fusion realization method, device, system and fusion server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410254027.3A CN105282477A (en) 2014-06-09 2014-06-09 Multiparty video data fusion realization method, device, system and fusion server

Publications (1)

Publication Number Publication Date
CN105282477A true CN105282477A (en) 2016-01-27

Family

ID=55150703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410254027.3A Pending CN105282477A (en) 2014-06-09 2014-06-09 Multiparty video data fusion realization method, device, system and fusion server

Country Status (1)

Country Link
CN (1) CN105282477A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103238317A (en) * 2010-05-12 2013-08-07 布鲁珍视网络有限公司 Systems and methods for scalable distributed global infrastructure for real-time multimedia communication
CN103516887A (en) * 2012-06-29 2014-01-15 中国移动通信集团公司 Display method, device and system of multiple terminal screens
CN103051978A (en) * 2012-12-16 2013-04-17 华南理工大学 H264-based real-time mobile video service control method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111970474A (en) * 2020-08-28 2020-11-20 北京容联易通信息技术有限公司 Intelligent screen mixing method and system for multi-channel videos
CN113259618A (en) * 2021-05-12 2021-08-13 中移智行网络科技有限公司 Audio and video session method and device, first terminal and session server
CN113259618B (en) * 2021-05-12 2022-06-10 中移智行网络科技有限公司 Audio and video session method and device, first terminal and session server
CN114071063A (en) * 2021-11-15 2022-02-18 深圳市健成云视科技有限公司 Information sharing method, device, equipment and medium based on bidirectional option

Similar Documents

Publication Publication Date Title
EP2700244B1 (en) Flow-control based switched group video chat and real-time interactive broadcast
WO2022095795A1 (en) Communication method and apparatus, computer readable medium, and electronic device
CN111225230B (en) Management method and related device for network live broadcast data
US20170134831A1 (en) Flow Controlled Based Synchronized Playback of Recorded Media
CN108886669B (en) Dynamic switching of streaming services between broadcast and unicast delivery
US20130215215A1 (en) Cloud-based interoperability platform using a software-defined networking architecture
KR101972692B1 (en) Data transfer method and system and related device
CN101895718B (en) Video conference system multi-image broadcast method, and device and system thereof
CN104604263A (en) Method for seamless unicast-broadcast switching during dash-formatted content streaming
TWI440347B (en) Method of multimedia broadcast multicast service content aware scheduling and receiving in a wireless communication system and related communication device
TWI415491B (en) Method of multimedia broadcast multicast service content aware scheduling and receiving in a wireless communication system
Minopoulos et al. A survey on haptic data over 5g networks
KR20100131956A (en) Method and apparatus for handling mbms dynamic scheduling information
CN110943977B (en) Multimedia service data transmission method, server, equipment and storage medium
CN113923470A (en) Live stream processing method and device
Christodoulou et al. Adaptive subframe allocation for next generation multimedia delivery over hybrid LTE unicast broadcast
CN105282477A (en) Multiparty video data fusion realization method, device, system and fusion server
CN104754519B (en) Processing method, user equipment and the network side equipment of a kind of group of communication service
JP5957143B2 (en) Broadcast service resource allocation method, resource management center, and MME
EP3734967A1 (en) Video conference transmission method and apparatus, and mcu
CN114598853A (en) Video data processing method and device and network side equipment
CN112236986B (en) Network controlled upstream media delivery for collaborative media production in network capacity limited scenarios
JP6346710B2 (en) Data transmission method, apparatus and storage medium
WO2022121819A1 (en) Call method and device
CN105359556B (en) Eliminate the silence during multicast broadcast service (EMBS) service change of evolution

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20160127)