CN112866725A - Live broadcast control method and device - Google Patents
Live broadcast control method and device
- Publication number
- CN112866725A CN112866725A CN202110051301.7A CN202110051301A CN112866725A CN 112866725 A CN112866725 A CN 112866725A CN 202110051301 A CN202110051301 A CN 202110051301A CN 112866725 A CN112866725 A CN 112866725A
- Authority
- CN
- China
- Prior art keywords
- video data
- video
- data
- live broadcast
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications (all under H04N21/00 — Selective content distribution, e.g. interactive television or video on demand [VOD])
- H04N21/21805 — Source of audio or video content, e.g. local disk arrays, enabling multiple viewpoints, e.g. using a plurality of cameras
- H04N21/2187 — Live feed
- H04N21/23424 — Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
Abstract
An embodiment of the invention provides a live broadcast control method and device, applied to a live broadcast client that is communicatively connected to a video networking server. The method comprises the following steps: starting a plurality of cameras in the mobile device; acquiring multiple channels of video data respectively captured by the plurality of cameras; synthesizing the multiple channels of video data to obtain target video data; encoding the target video data to obtain encoded data; and sending the encoded data to the video networking server, so that the encoded data is forwarded to an on-demand client through the video networking server. By simultaneously capturing two video pictures from different viewing angles and compositing them into a single picture, the live broadcast function of the live broadcast client is greatly enhanced: a viewer sees the front and rear pictures of the broadcasting side at the same time, each showing different content, so the receiver obtains more visual information and can better understand the broadcast content.
Description
Technical Field
The invention relates to the technical field of video conferences, in particular to a live broadcast control method and a live broadcast control device.
Background
Video networking is an important milestone in network development and a higher-level form of the Internet: a real-time network that can achieve full-network high-definition video transmission in real time, which the existing Internet cannot, and that pushes many Internet applications toward high-definition, face-to-face interaction, so that ultimately the only distance between people in the world is the distance of a screen. Video networking realizes multiple video functions on one protocol, one network and one platform, such as high-definition video conferencing, video surveillance, intelligent monitoring and analysis, emergency command, digital broadcast television, time-shifted television, network teaching, live broadcast, video on demand, multimedia distribution, video recording, intelligent video broadcast control and information distribution, unifying protocol, product and management.
At present, on a video networking mobile terminal, the live broadcast function opens the default front-facing camera, captures a video picture with it and transmits the picture to the video network, where various other video networking terminals can watch it on demand; this function is commonly used for occasions such as on-site interviews, learning and training. However, the current live broadcast function can only broadcast one camera picture, which the watching parties display full-screen: by default the picture captured by the front camera, though the user can manually switch to the picture captured by the rear camera. The two views must be switched back and forth, only one can be shown at a time, and multi-view playing is impossible. The receiver therefore sees limited information, and the sender must switch repeatedly, which is cumbersome.
Therefore, how to fully utilize the hardware resources of the mobile device and simultaneously play multiple video pictures from multiple viewing angles, so that the sender can convey more visual information in the same video picture without increasing video traffic, is a problem that needs to be solved.
Disclosure of Invention
In view of the above problems, embodiments of the present invention are proposed to provide a live control method and a corresponding live control apparatus that overcome or at least partially solve the above problems.
The embodiment of the invention discloses a live broadcast control method, which is applied to a live broadcast client, wherein the live broadcast client is in communication connection with a video networking server, and the method comprises the following steps:
starting a plurality of cameras in the mobile equipment;
acquiring multi-channel video data respectively acquired by the plurality of cameras;
synthesizing the multi-channel video data to obtain target video data;
encoding the target video data to obtain encoded data;
and sending the coded data to the video networking server so as to send the coded data to an on-demand client through the video networking server.
Optionally, the starting the multiple cameras in the mobile device includes:
calling a video acquisition interface provided by the mobile equipment;
and starting a front camera and a rear camera in the mobile equipment through the video acquisition interface.
Optionally, the synthesizing the multiple paths of video data to obtain the target video data includes:
respectively storing the multiple paths of video data into a plurality of video buffers;
and synthesizing the multi-channel video data in the plurality of video buffers according to preset synthesis parameters to obtain target video data.
Optionally, the video data is composed of a plurality of frame images; the synthesizing the multiple paths of video data in the multiple video buffers according to the preset synthesis parameters to obtain the target video data comprises the following steps:
reading each frame picture image of the multiple channels of video data from the plurality of video buffers respectively;
and synthesizing each frame of picture image of the multi-channel video data according to preset synthesis parameters to obtain target video data.
Optionally, the preset synthesis parameters include one or more of resolution, size, and layout style.
Optionally, after the step of synthesizing the multiple paths of video data to obtain the target video data, the method further includes:
and displaying the target video data on a display screen of the mobile equipment.
Optionally, the encoding the target video data to obtain encoded data includes:
and reading the target video data according to a preset frame rate, and encoding based on the H.264 protocol to obtain encoded data.
The embodiment of the invention also discloses a live broadcast control device, which is applied to a live broadcast client, wherein the live broadcast client is in communication connection with the video networking server, and the device comprises:
the camera starting module is used for starting a plurality of cameras in the mobile equipment;
the video data acquisition module is used for acquiring the multi-channel video data respectively acquired by the plurality of cameras;
the video data synthesis module is used for synthesizing the multi-channel video data to obtain target video data;
the video data coding module is used for coding the target video data to obtain coded data;
and the coded data sending module is used for sending the coded data to the video networking server so as to send the coded data to the on-demand client through the video networking server.
The embodiment of the invention also discloses an electronic device, which comprises:
one or more processors; and
one or more machine-readable media having instructions stored thereon, which when executed by the one or more processors, cause the electronic device to perform the steps of a method according to any one of the embodiments of the invention.
The embodiment of the invention also discloses a computer readable storage medium, which stores a computer program for enabling a processor to execute the steps of the method according to any one of the embodiments of the invention.
The embodiment of the invention has the following advantages:
In the embodiment of the invention, a plurality of cameras in the mobile device are started; multiple channels of video data respectively captured by the cameras are acquired; the multiple channels are synthesized to obtain target video data; the target video data is encoded to obtain encoded data; and the encoded data is sent to the video networking server, which forwards it to the on-demand client. By simultaneously capturing two video pictures from different viewing angles and compositing them into the same picture, the live broadcast function of the live broadcast client is greatly enhanced: a viewer sees the front and rear pictures of the broadcasting side at the same time, each playing different content, so the receiver obtains more visual information and better understands the broadcast content. Moreover, because the two video channels are composited into one before transmission, video traffic is not increased and the receiving side needs no modification, making the scheme easy to implement and highly compatible.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings needed to be used in the description of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic diagram of a frame structure of a live broadcast control method according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating steps of a live broadcast control method according to an embodiment of the present invention;
fig. 3 is a schematic frame structure diagram of another live broadcast control method according to an embodiment of the present invention;
fig. 4 is a block diagram of a live broadcast control apparatus according to an embodiment of the present invention;
FIG. 5 is a networking schematic of a video network of the present invention;
FIG. 6 is a schematic diagram of a hardware architecture of a node server according to the present invention;
fig. 7 is a schematic diagram of a hardware structure of an access switch of the present invention;
fig. 8 is a schematic diagram of a hardware structure of an ethernet protocol conversion gateway according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
This solves the problem of fully utilizing the hardware resources of the mobile device to play multiple video pictures from multiple viewing angles at the same time. As shown in fig. 1, the embodiment of the present invention is applied to the live broadcast service of a video networking conference system implemented on a live broadcast client. The service is started by the live broadcast service control module: when it starts, the front and rear cameras are initialized and opened simultaneously to capture two channels of video, and the captured pictures are composited according to the settings of the control module, so that the pictures of the front and rear cameras are superimposed in the same picture, in either a picture-in-picture (large/small) mode or a side-by-side (left/right) mode. One channel of the composited picture is output to the local video output for direct display on the live broadcast client; the other channel is video-encoded and output to the video networking server, which distributes it to other on-demand clients. By capturing two video pictures from different viewing angles simultaneously and compositing them into the same picture, both pictures can be displayed at once during video networking live broadcast on the mobile device, realizing multi-view broadcasting. Moreover, because the two pictures are composited into the same picture before video encoding, the problem of synchronizing multi-channel video capture is avoided, making the system easier to implement.
The method can greatly improve the practicability and usability of live video networking broadcast on mobile tablets, providing powerful support for tasks such as on-site interviews, learning and training, so that the live broadcast function is put to better use.
Video networking conference system: a real-time high-definition conference system built on high-definition audio and video transmission over the video network, through corresponding management software and clients, supporting access from various dedicated terminals and mobile terminals. Its main functions include creating conferences, video calls, publishing live broadcasts and watching live broadcasts; the related applications include the conference control end, the conference mobile end, the conference scheduling server, and the conference management Web background. Supported hardware terminals include PCs, mobile devices, video networking tablets, and so on.
Conference control end: client software running on the PC platform, used for conference reservation, conference process management and control, starting and stopping conferences, switching speakers, switching split-screen modes and so on. It is the main front-end operation module of the video networking conference system and fully controls the whole course of a conference.
Conference mobile end: a video networking mobile conference terminal running a partially simplified conference control end on a mobile platform; the hardware is usually a tablet (such as an Android PAD) or an Android phone. It can connect to the video networking conference system management end through a wireless IP network to perform conference control and participate in conferences, and also provides live broadcast, video telephone and other functions.
Video networking live broadcast: the video networking conference system can include many kinds of terminals, including several high-definition dedicated terminals developed for the video network as well as mobile terminals. Any terminal can use the live broadcast function: local audio and video data are captured through a camera and pushed into the video network, and any other terminal can choose to watch that terminal's live audio and video.
The following is a detailed description of embodiments of the invention:
Referring to fig. 2, a flowchart illustrating the steps of a live broadcast control method provided in an embodiment of the present invention is shown, applied to a live broadcast client that is communicatively connected to a video networking server. The method specifically includes the following steps:
Step 201, starting a plurality of cameras in the mobile device;
Step 202, acquiring multiple channels of video data respectively captured by the plurality of cameras;
Step 203, synthesizing the multiple channels of video data to obtain target video data;
Step 204, encoding the target video data to obtain encoded data;
Step 205, sending the encoded data to the video networking server, so that the encoded data is sent to an on-demand client through the video networking server.
it should be noted that the mobile device may include various mobile terminals, such as a mobile phone, a tablet computer, a PDA, and the like. The operating system of the mobile device may include Android (Android), IOS, Windows Phone, Windows, and the like, and may generally support the running of various application programs.
The live broadcast client is a terminal which runs on the mobile equipment and is used for carrying out live broadcast services, and in the embodiment of the invention, the live broadcast client can start a plurality of cameras in the mobile equipment so as to acquire audio and video data through the plurality of cameras in the mobile equipment.
Specifically, the mobile device can comprise a front camera and a rear camera, and the live client can start the front camera and the rear camera in the mobile device.
The plurality of cameras in the mobile device can each capture video data from a different viewing angle; for example, the front camera captures video data from the viewing angle in front of the front camera, and the rear camera captures video data from the viewing angle in front of the rear camera.
In the embodiment of the invention, the live broadcast client can acquire the multiple channels of video data respectively captured by the plurality of cameras in the mobile device; that is, the live broadcast client can acquire the video data captured by the mobile device at different viewing angles.
In order to solve the problem that the captured multiple channels of video data are not synchronized, in the embodiment of the invention the multiple channels of video data are not output directly after acquisition; instead, they are synthesized to obtain the target video data.
Specifically, the multiple channels of video data may be scaled, superimposed and composited according to preset synthesis parameters (including a designated resolution, designated size, and the like) to form a picture in a specific form, for example a large/small picture-in-picture form or a side-by-side left-right form. The preset synthesis parameters specify a predetermined composition format.
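The scale-and-superimpose step above amounts to computing a target rectangle on the output canvas for each camera picture. The following is a minimal sketch of that geometry for the two layout forms mentioned; the mode names and the quarter-size inset proportion are illustrative assumptions, not values given in the patent.

```python
def layout_rects(canvas_w, canvas_h, mode):
    """Compute (x, y, w, h) placement rectangles for two camera pictures.

    mode: "pip"  -> first picture fills the canvas, second picture is a
                    quarter-size inset in the top-right corner;
          "side" -> both pictures share the canvas left/right.
    Proportions are illustrative assumptions, not taken from the patent.
    """
    if mode == "pip":
        big = (0, 0, canvas_w, canvas_h)
        small_w, small_h = canvas_w // 4, canvas_h // 4
        small = (canvas_w - small_w, 0, small_w, small_h)  # top-right inset
        return big, small
    if mode == "side":
        half = canvas_w // 2
        return (0, 0, half, canvas_h), (half, 0, half, canvas_h)
    raise ValueError(f"unknown layout mode: {mode}")
```

Each captured frame would then be scaled to its rectangle and copied onto the shared canvas, yielding one composited frame per pair of input frames.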
In the embodiment of the present invention, after the multiple channels of video data are synthesized into the target video data, the target video data may be encoded to obtain encoded data.
Specifically, the target video data may be encoded based on a video compression protocol (e.g., the H.264 protocol) to obtain encoded data, which is then transmitted to the video networking server so that other on-demand clients can fetch it from the server to watch the live broadcast.
In the embodiment of the invention, the coded data can be sent to the video network server, so that the coded data can be sent to the on-demand client through the video network server, and the on-demand client can acquire the coded data to watch live broadcast. The on-demand client can refer to a client which enters a live broadcast room created by the live broadcast client to watch live broadcast in the live broadcast service.
In a specific implementation, the live broadcast client may be communicatively connected to the video networking server based on the Ethernet protocol, so that the live broadcast client transmits the encoded data to the server over Ethernet. After the video networking server receives the encoded data, the data can be transmitted within the video network based on the video networking protocol, and delivered to the on-demand client based on the Ethernet protocol.
In a preferred embodiment of the present invention, the plurality of cameras include a front camera and a rear camera, and the step 201 may include the following sub-steps:
calling a video acquisition interface provided by the mobile equipment; and starting a front camera and a rear camera in the mobile equipment through the video acquisition interface.
The mobile device may be provided with a video capture interface for providing ports for other applications to obtain video data captured by the front-facing camera and the rear-facing camera of the mobile device.
In the embodiment of the invention, when the live broadcast service is carried out, the live broadcast client can call the video acquisition interface provided by the mobile equipment, and start the front camera and the rear camera in the mobile equipment through the video acquisition interface.
Specifically, the live broadcast client may include a live service control module and a video capture module: the live service control module starts the video capture module, which calls the video capture interface provided by the mobile device and, through that interface, starts the front and rear cameras in the mobile device.
In a preferred embodiment of the present invention, the step 203 may comprise the following sub-steps:
respectively storing the multiple paths of video data into a plurality of video buffers; and synthesizing the multi-channel video data in the plurality of video buffers according to preset synthesis parameters to obtain target video data.
In order to solve the problem that the acquired multiple paths of video data are not synchronous, in the embodiment of the present invention, corresponding video buffer areas may be respectively allocated to the video data acquired by each camera, so that after the video data acquired by each camera is acquired, the video data may be stored in the corresponding video buffer areas.
As an example, the mobile device may include a front camera and a rear camera; the video data captured by the front camera may be stored in video buffer 1, and the video data captured by the rear camera in video buffer 2.
After the multiple channels of video data are stored in the plurality of video buffers, the multiple channels of video data in those buffers can be synthesized according to the preset synthesis parameters to obtain the target video data. In a specific implementation, the live broadcast client may include a video picture synthesis module, which reads the multiple channels of video data from the video buffers and then synthesizes them according to the preset synthesis parameters to obtain the target video data.
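One simple way to realize per-camera buffering with loose synchronization is to keep a small ring of recent frames per camera and always compose from the newest frame of each. The sketch below illustrates that scheme; it is an assumption about one reasonable implementation, not the patent's exact mechanism.

```python
from collections import deque


class CameraBuffer:
    """Fixed-capacity buffer holding the most recent frames of one camera.

    Reading only the newest frame of each unsynchronised camera pairs
    roughly simultaneous pictures without blocking either capture thread.
    """

    def __init__(self, capacity=8):
        self._frames = deque(maxlen=capacity)  # old frames drop off automatically

    def push(self, frame):
        self._frames.append(frame)

    def latest(self):
        return self._frames[-1] if self._frames else None


def compose_latest(front_buf, rear_buf, combine):
    """Combine the newest frame of each buffer, or return None until
    both cameras have produced at least one frame."""
    f, r = front_buf.latest(), rear_buf.latest()
    if f is None or r is None:
        return None
    return combine(f, r)
```

In a threaded implementation each camera callback would call `push`, and the synthesis loop would call `compose_latest` at the output frame rate.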
In a preferred embodiment of the present invention, the video data consists of multiple frames of picture images; synthesizing the multiple channels of video data in the video buffers according to the preset synthesis parameters to obtain the target video data includes:
reading each frame picture image of the multiple channels of video data from the video buffers respectively; and synthesizing each frame picture image of the multiple channels of video data according to the preset synthesis parameters to obtain the target video data.
In the embodiment of the invention, when the multiple channels of video data are synthesized, each frame picture image of each channel is read from its video buffer, and the frames are then composited according to the preset synthesis parameters.
In a preferred embodiment of the present invention, the preset synthesis parameters include one or more of resolution, size, and layout style.
The resolution may refer to the amount of information stored in the picture image, i.e., how many pixels there are per inch of the image. The size may refer to the dimensions of the picture image. The layout style may refer to the way the pictures are arranged, for example a large/small picture-in-picture style, a side-by-side left-right style, or a stacked top-bottom style; the embodiment of the present invention does not limit this.
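The preset synthesis parameters can be grouped into one configuration object that the synthesis module consults for every frame. The sketch below shows one such grouping; all field names, defaults, and layout-style names are illustrative assumptions for this example, not values specified in the patent.

```python
from dataclasses import dataclass

# Illustrative layout-style names (assumed, not from the patent text)
LAYOUTS = {"pip", "left_right", "top_bottom"}


@dataclass(frozen=True)
class SynthesisParams:
    """Preset composition parameters: output resolution, relative inset
    size for picture-in-picture, and layout style."""

    width: int = 1280
    height: int = 720
    inset_scale: float = 0.25  # front-camera inset size relative to canvas
    layout: str = "pip"

    def __post_init__(self):
        if self.layout not in LAYOUTS:
            raise ValueError(f"unsupported layout: {self.layout}")
```

Validating the layout style at construction time keeps the per-frame synthesis loop free of parameter checks.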
In a preferred embodiment of the present invention, the following steps may be further included after step 203:
and displaying the target video data on a display screen of the mobile equipment.
In the embodiment of the invention, after the target video data is obtained by synthesizing the plurality of paths of video data, one path of the target video data can be directly output to a local display, so that the target video data is displayed on the display screen of the mobile equipment.
Specifically, when the video picture is displayed locally at the live broadcast client, the data does not need to be acquired through the video networking server, so that one path of the target video data can be directly output to the local display, and the target video data can be displayed on the display screen of the mobile equipment.
In a preferred embodiment of the present invention, the step 204 may comprise the following sub-steps:
and reading the target video data according to a preset frame rate, and encoding based on the H.264 protocol to obtain encoded data.
In the embodiment of the present invention, after the multiple channels of video data are synthesized into the target video data, one channel of the target video data may be read at a preset frame rate and encoded based on the H.264 protocol to obtain encoded data. The preset frame rate is a predetermined number of frames per second. H.264 is a video coding format defined by the MPEG-4 standard (MPEG-4 Part 10); H.264 video is lossily compressed, but achieves good image quality and fast transmission over low bandwidth while keeping the storage volume as small as technically possible.
In a specific implementation, the live broadcast client may include a video stream encoding module, which reads the target video data according to the preset frame rate and encodes it based on the H.264 protocol to obtain the encoded data.
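"Reading at a preset frame rate" means the encoding module samples the composited picture on a fixed schedule rather than on every capture event. The following is a deterministic sketch of that pacing decision (the real module would feed each emitted frame to an H.264 encoder, which is not modelled here); scheduling from the due time rather than the current time is a common choice to avoid frame-rate drift, and is an assumption of this sketch.

```python
class FramePacer:
    """Decides, from a monotonic clock value, whether it is time to read
    the next composed frame at a fixed target frame rate."""

    def __init__(self, fps):
        self.interval = 1.0 / fps
        self.next_due = 0.0  # first frame is emitted immediately

    def should_emit(self, now):
        if now >= self.next_due:
            # Advance from the due time, not from `now`, so timing jitter
            # does not accumulate into long-term frame-rate drift.
            self.next_due += self.interval
            return True
        return False
```

For example, a 25 fps pacer polled every 10 ms emits exactly one frame per 40 ms window.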
In order to enable those skilled in the art to better understand the above steps, the following description is provided for the embodiment of the present invention with reference to fig. 3, but it should be understood that the embodiment of the present invention is not limited thereto.
Specifically, the live broadcast service control module may start the video capture module, which is responsible for connecting each camera to capture video; in fig. 3, it connects camera 1 and camera 2. It should be noted that the live broadcast service control module may also start an audio capture module of the mobile device, acquire one channel of audio data through it, and then combine that audio data with the video data to obtain audio-video data.
After the video data collected by the cameras is acquired, the video data collected by camera 1 can be stored in camera video collection buffer 1, and the video data collected by camera 2 can be stored in camera video collection buffer 2. The video image synthesis module then reads the latest image data from the two buffers and scales, overlays, and synthesizes the two paths of video according to the specified resolution and size, forming either a picture-in-picture layout (a small picture superimposed on a large one) or a side-by-side layout. One path of the synthesized video is output directly to the local display; the other path is read by the video stream encoding module at the set frame rate and encoded with H264, and the encoded data is sent to the video networking server through the video networking V2V interface, so that the server forwards it to other on-demand clients.
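The scale-and-overlay synthesis step above can be illustrated with a small sketch. Frames are modeled here as 2D lists of pixel values, and the two layouts (picture-in-picture and side by side) are illustrative choices; the patent leaves the concrete synthesis parameters open.

```python
def scale(frame, out_h, out_w):
    """Nearest-neighbour scaling of a frame given as a 2D list of pixels."""
    in_h, in_w = len(frame), len(frame[0])
    return [[frame[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)] for r in range(out_h)]

def compose(main, sub, layout="pip", out_h=4, out_w=8):
    """Synthesize two video frames into one target frame.
    layout 'pip'  : sub is scaled down and overlaid in the top-left corner
    layout 'side' : both frames are scaled to half width, placed side by side
    """
    if layout == "side":
        left = scale(main, out_h, out_w // 2)
        right = scale(sub, out_h, out_w // 2)
        return [left[r] + right[r] for r in range(out_h)]
    target = scale(main, out_h, out_w)
    small = scale(sub, out_h // 2, out_w // 2)
    for r in range(out_h // 2):        # overlay the small picture on the big one
        for c in range(out_w // 2):
            target[r][c] = small[r][c]
    return target
```

A real implementation would operate on decoded pixel buffers (e.g. YUV planes) rather than nested lists, but the scaling-then-overlay order is the same.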
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 4, a block diagram of an apparatus provided in an embodiment of the present invention is shown, and is applied to a live client, where the live client is in communication connection with a video network server, and the apparatus may specifically include the following modules:
a camera starting module 401, configured to start a plurality of cameras in the mobile device;
a video data obtaining module 402, configured to obtain multiple paths of video data respectively collected by the multiple cameras;
a video data synthesizing module 403, configured to synthesize the multiple paths of video data to obtain target video data;
a video data encoding module 404, configured to encode the target video data to obtain encoded data;
and an encoded data sending module 405, configured to send the encoded data to the video networking server, so as to send the encoded data to an on-demand client through the video networking server.
In a preferred embodiment of the present invention, the plurality of cameras include a front camera and a rear camera, and the camera starting module 401 includes:
the video acquisition interface calling submodule is used for calling a video acquisition interface provided by the mobile equipment;
and the camera starting submodule is used for starting a front camera and a rear camera in the mobile equipment through the video acquisition interface.
In a preferred embodiment of the present invention, the video data synthesizing module 403 includes:
the video data storage submodule is used for respectively storing the multi-channel video data into a plurality of video buffer areas;
and the video data synthesis submodule is used for synthesizing the multi-channel video data in the plurality of video buffers according to preset synthesis parameters to obtain target video data.
In a preferred embodiment of the present invention, the video data is composed of a plurality of frame images; the video data synthesis sub-module comprises:
the picture image reading unit is used for respectively reading each frame of picture image of the multi-channel video data from the plurality of video buffer areas;
and the picture image synthesis unit is used for synthesizing each frame of picture image of the multi-channel video data according to preset synthesis parameters to obtain target video data.
In a preferred embodiment of the present invention, the preset synthesis parameters include one or more of resolution, size, and layout style.
In a preferred embodiment of the present invention, the method further comprises:
and the video data display module is used for displaying the target video data on a display screen of the mobile equipment.
In a preferred embodiment of the present invention, the video data encoding module 404 includes:
and the video data coding submodule is used for reading the target video data according to a preset frame rate and coding the target video data based on an H264 protocol to obtain coded data.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
An embodiment of the present invention further provides an electronic device, including:
one or more processors; and
one or more machine-readable media having instructions stored thereon, which when executed by the one or more processors, cause the electronic device to perform the steps of a method according to any one of the embodiments of the invention.
Embodiments of the present invention also provide a computer-readable storage medium storing a computer program for causing a processor to perform the steps of the method according to any one of the embodiments of the present invention.
The video networking is an important milestone in network development. It is a real-time network that can realize real-time transmission of high-definition video, pushing many internet applications toward high-definition, face-to-face interaction.
The video networking adopts real-time high-definition video exchange technology and can integrate dozens of required services, such as video, voice, pictures, text, communication, and data, on one network platform: high-definition video conferencing, video monitoring, intelligent monitoring analysis, emergency command, digital broadcast television, time-shifted television, network teaching, live broadcast, VOD on demand, television mail, personal video recording (PVR), intranet (self-office) channels, intelligent video broadcast control, information distribution, and the like, realizing high-definition-quality video playback through a television or a computer.
To better understand the embodiments of the present invention, the video networking is described below:
some of the technologies applied in the video networking are as follows:
network Technology (Network Technology)
Network technology innovation in the video networking improves the traditional Ethernet to handle the potentially huge video traffic on the network. Unlike pure network Packet Switching or network Circuit Switching, the video networking technology adopts Packet Switching to meet streaming demands. The video networking technology has the flexibility, simplicity, and low cost of packet switching together with the quality and security guarantees of circuit switching, realizing seamless connection of whole-network switched virtual circuits and the data format.
Switching Technology (Switching Technology)
The video networking retains the two advantages of Ethernet, asynchrony and packet switching, and eliminates Ethernet's defects on the premise of full compatibility. It has end-to-end seamless connection across the whole network, communicates directly with user terminals, and directly carries IP data packets. User data requires no format conversion anywhere across the network. The video networking is a higher-level form of Ethernet and a real-time exchange platform; it can realize the whole-network, large-scale, real-time transmission of high-definition video that the existing internet cannot, pushing many network video applications toward high definition and unification.
Server Technology (Server Technology)
The server technology of the video networking and the unified video platform differs from that of a traditional server: its streaming media transmission is built on a connection-oriented basis, its data processing capability is independent of traffic and communication time, and a single network layer can carry both signaling and data transmission. For voice and video services, streaming media processing on the video networking and unified video platform is much simpler than general data processing, and efficiency is improved by more than a hundred times over a traditional server.
Storage Technology (Storage Technology)
To handle media content of very large capacity and very high traffic, the ultra-high-speed storage technology of the unified video platform adopts the most advanced real-time operating system. Program information in the server instruction is mapped to specific hard disk space, and the media content no longer passes through the server but is sent directly and instantly to the user terminal, with a typical user waiting time of less than 0.2 second. Optimized sector distribution greatly reduces the mechanical seek movement of the hard disk head; resource consumption is only 20% of an IP internet system of the same grade, yet concurrent traffic 3 times that of a traditional hard disk array is generated, improving comprehensive efficiency by more than 10 times.
Network Security Technology (Network Security Technology)
The structural design of the video networking completely eliminates, at the structural level, the network security problems that trouble the internet, through means such as independent permission control for each service and complete isolation of equipment and user data. It generally needs no antivirus programs or firewalls, avoids hacker and virus attacks, and provides a structurally worry-free secure network for users.
Service Innovation Technology (Service Innovation Technology)
The unified video platform integrates services and transmission: whether for a single user, a private network user, or a network aggregate, each is connected automatically with a single connection. The user terminal, set-top box, or PC connects directly to the unified video platform to obtain a variety of multimedia video services in various forms. The unified video platform adopts a menu-style configuration table instead of traditional complex application programming, so complex applications can be realized with very little code, enabling unlimited new service innovation.
Networking of the video network is as follows:
the video network is a centralized control network structure, and the network can be a tree network, a star network, a ring network and the like, but on the basis of the centralized control node, the whole network is controlled by the centralized control node in the network.
As shown in fig. 5, the video network is divided into an access network and a metropolitan network.
The devices of the access network part can be mainly classified into 3 types: node server, access switch, terminal (including various set-top boxes, coding boards, memories, etc.). The node server is connected to an access switch, which may be connected to a plurality of terminals and may be connected to an ethernet network.
The node server is a node which plays a centralized control function in the access network and can control the access switch and the terminal. The node server can be directly connected with the access switch or directly connected with the terminal.
Similarly, devices of the metropolitan network portion may also be classified into 3 types: a metropolitan area server, a node switch and a node server. The metro server is connected to a node switch, which may be connected to a plurality of node servers.
The node server here is the same node server as in the access network part; that is, the node server belongs to both the access network and the metropolitan area network.
The metropolitan area server is a node which plays a centralized control function in the metropolitan area network and can control a node switch and a node server. The metropolitan area server can be directly connected with the node switch or directly connected with the node server.
Therefore, the whole video network is a network structure with layered centralized control, and the network controlled by the node server and the metropolitan area server can be in various structures such as tree, star and ring.
The access network part can form a unified video platform (the part in the dotted circle), and a plurality of unified video platforms can form a video network; each unified video platform may be interconnected via metropolitan area and wide area video networking.
Video networking device classification
1.1 devices in the video network of the embodiment of the present invention can be mainly classified into 3 types: server, switch (including Ethernet protocol conversion gateway), and terminal (including various set-top boxes, coding boards, memories, etc.). The video network as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
1.2 wherein the devices of the access network part can be mainly classified into 3 types: node server, access switch (including Ethernet protocol conversion gateway), and terminal (including various set-top boxes, coding boards, memories, etc.).
The specific hardware structure of each access network device is as follows:
a node server:
as shown in fig. 6, the system mainly includes a network interface module 201, a switching engine module 202, a CPU module 203, and a disk array module 204;
the network interface module 201, the CPU module 203, and the disk array module 204 all enter the switching engine module 202; the switching engine module 202 looks up the address table 205 for each incoming packet to obtain the packet's direction information, and stores the packet in the queue of the corresponding packet buffer 206 based on that information; if the queue of the packet buffer 206 is nearly full, the packet is discarded. The switching engine module 202 polls all packet buffer queues and forwards when the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero. The disk array module 204 mainly implements control over the hard disk, including initialization, read-write, and other operations; the CPU module 203 is mainly responsible for protocol processing with the access switch and terminal (not shown in the figure), configuring the address table 205 (including a downlink protocol packet address table, an uplink protocol packet address table, and a data packet address table), and configuring the disk array module 204.
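The polling-and-forwarding conditions of the switching engine can be sketched in software as below. This is an illustrative model only (a real switching engine works in hardware), and the class and method names are invented for the sketch; draining of the port send buffer by actual transmission is not modeled.

```python
from collections import deque

class SwitchingEngine:
    """Polls per-direction packet buffer queues and forwards a packet only
    when (1) the output port's send buffer is not full and (2) the queue's
    packet counter is greater than zero."""

    def __init__(self, send_buffer_capacity=2):
        self.queues = {}        # direction -> deque of queued packets
        self.send_buffers = {}  # direction -> port send buffer (list)
        self.capacity = send_buffer_capacity

    def enqueue(self, direction, packet):
        self.queues.setdefault(direction, deque()).append(packet)
        self.send_buffers.setdefault(direction, [])

    def poll_once(self):
        forwarded = []
        for direction, queue in self.queues.items():
            buf = self.send_buffers[direction]
            # condition 1: send buffer not full; condition 2: counter > 0
            if len(buf) < self.capacity and len(queue) > 0:
                packet = queue.popleft()
                buf.append(packet)
                forwarded.append((direction, packet))
        return forwarded
```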
The access switch:
as shown in fig. 7, the network interface module (downlink network interface module 301, uplink network interface module 302), switching engine module 303 and CPU module 304 are mainly included;
wherein, a packet (uplink data) coming from the downlink network interface module 301 enters the packet detection module 305; the packet detection module 305 detects whether the Destination Address (DA), Source Address (SA), packet type, and packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id) and the packet enters the switching engine module 303, otherwise the packet is discarded. A packet (downlink data) coming from the uplink network interface module 302 enters the switching engine module 303, as does a packet coming from the CPU module 304. The switching engine module 303 looks up the address table 306 for each incoming packet to obtain its direction information. If a packet entering the switching engine module 303 goes from a downlink network interface to an uplink network interface, it is stored in the queue of the corresponding packet buffer 307 in association with its stream-id; otherwise it is stored in the queue of the corresponding packet buffer 307 according to its direction information. In either case, if the queue of the packet buffer 307 is nearly full, the packet is discarded.
The switching engine module 303 polls all packet buffer queues and may include two cases:
if the queue is from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queued packet counter is greater than zero; 3) obtaining a token generated by a code rate control module;
if the queue is not from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero.
The rate control module 308 is configured by the CPU module 304, and generates tokens for packet buffer queues from all downstream network interfaces to upstream network interfaces at programmable intervals to control the rate of upstream forwarding.
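The token generation described above is essentially a token-bucket scheme: forwarding a downstream-to-upstream queue additionally requires a token (condition 3), which caps the upstream code rate. A tick-driven sketch, with invented names, is:

```python
class RateControl:
    """Generates one token per programmed interval (configured by the CPU
    module); forwarding from a downstream->upstream queue consumes a token,
    limiting the upstream forwarding rate."""

    def __init__(self, interval_ticks: int):
        self.interval = interval_ticks
        self.ticks = 0
        self.tokens = 0

    def tick(self):
        self.ticks += 1
        if self.ticks % self.interval == 0:
            self.tokens += 1      # one token per programmed interval

    def try_forward(self) -> bool:
        if self.tokens > 0:       # condition 3: a token is available
            self.tokens -= 1
            return True
        return False
```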
The CPU module 304 is mainly responsible for protocol processing with the node server, configuration of the address table 306, and configuration of the code rate control module 308.
Ethernet protocol conversion gateway:
As shown in fig. 8, the apparatus mainly includes a network interface module (a downlink network interface module 401 and an uplink network interface module 402), a switching engine module 403, a CPU module 404, a packet detection module 405, a rate control module 408, an address table 406, a packet buffer 407, a MAC adding module 409, and a MAC deleting module 410.
Wherein, a data packet coming from the downlink network interface module 401 enters the packet detection module 405; the packet detection module 405 detects whether the Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, video networking destination address DA, video networking source address SA, video networking packet type, and packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id), and the MAC deletion module 410 strips the MAC DA, MAC SA, and length or frame type (2 bytes) before the packet enters the corresponding receiving buffer; otherwise the packet is discarded;
the downlink network interface module 401 detects the sending buffer of the port, and if there is a packet, obtains the ethernet MAC DA of the corresponding terminal according to the destination address DA of the packet, adds the ethernet MAC DA of the terminal, the MAC SA of the ethernet protocol gateway, and the ethernet length or frame type, and sends the packet.
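The MAC deletion and MAC adding operations of the gateway amount to stripping or prepending a standard 14-byte Ethernet header (6-byte DA, 6-byte SA, 2-byte length/frame type). A minimal sketch, with function names invented for illustration:

```python
def strip_ethernet_header(frame: bytes) -> bytes:
    """Remove Ethernet MAC DA (6), MAC SA (6), and length/frame type (2),
    leaving the inner video-networking packet, as the MAC deletion module
    does for upstream traffic."""
    assert len(frame) > 14
    return frame[14:]

def add_ethernet_header(packet: bytes, terminal_mac: bytes,
                        gateway_mac: bytes, ethertype: bytes) -> bytes:
    """Prepend the terminal's MAC DA, the gateway's MAC SA, and the length
    or frame type before sending downstream, as the MAC adding module does."""
    assert len(terminal_mac) == 6 and len(gateway_mac) == 6
    assert len(ethertype) == 2
    return terminal_mac + gateway_mac + ethertype + packet
```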
The other modules in the ethernet protocol gateway function similarly to the access switch.
A terminal:
the system mainly comprises a network interface module, a service processing module and a CPU module; for example, the set-top box mainly comprises a network interface module, a video and audio coding and decoding engine module and a CPU module; the coding board mainly comprises a network interface module, a video and audio coding engine module and a CPU module; the memory mainly comprises a network interface module, a CPU module and a disk array module.
1.3 devices of the metropolitan area network part can be mainly classified into 3 types: node server, node switch, and metropolitan area server. The node switch mainly comprises a network interface module, a switching engine module, and a CPU module; the metropolitan area server mainly comprises a network interface module, a switching engine module, and a CPU module.
2. Video networking packet definition
2.1 Access network packet definition
The data packet of the access network mainly comprises the following parts: destination Address (DA), Source Address (SA), reserved bytes, payload (pdu), CRC.
As shown in the following table, the data packet of the access network mainly includes the following parts:
DA | SA | Reserved | Payload | CRC
---|---|---|---|---
wherein:
the Destination Address (DA) is composed of 8 bytes (byte), the first byte represents the type of the data packet (such as various protocol packets, multicast data packets, unicast data packets, etc.), there are 256 possibilities at most, the second byte to the sixth byte are metropolitan area network addresses, and the seventh byte and the eighth byte are access network addresses;
the Source Address (SA) is also composed of 8 bytes (byte), defined as the same as the Destination Address (DA);
the reserved byte consists of 2 bytes;
the payload part has different lengths according to different types of datagrams, and is 64 bytes if the datagram is various types of protocol packets, and is 32+1024 or 1056 bytes if the datagram is a unicast packet, of course, the length is not limited to the above 2 types;
the CRC consists of 4 bytes and is calculated in accordance with the standard ethernet CRC algorithm.
2.2 metropolitan area network packet definition
The topology of a metropolitan area network is a graph, and there may be 2 or even more connections between two devices; that is, there may be more than 2 connections between a node switch and a node server, or between two node switches. However, the metropolitan area network address of a metro device is unique, so to accurately describe the connection relationship between metro devices, the embodiment of the present invention introduces a parameter: the label, which uniquely describes a connection of a metropolitan area network device.
In this specification, the definition of the label is similar to that of an MPLS (Multi-Protocol Label Switching) label. Assuming there are two connections between device A and device B, a packet from device A to device B has 2 possible labels, and a packet from device B to device A also has 2 possible labels. Labels are classified into incoming labels and outgoing labels: assuming the label of a packet entering device A (the incoming label) is 0x0000, the label of the packet when it leaves device A (the outgoing label) may become 0x0001. The network access process of the metro network is under centralized control; that is, both address allocation and label allocation are directed by the metropolitan area server, and the node switches and node servers execute passively. This differs from MPLS, where label allocation is the result of mutual negotiation between switch and server.
As shown in the following table, the data packet of the metro network mainly includes the following parts:
DA | SA | Reserved | Label | Payload | CRC
---|---|---|---|---|---
Namely Destination Address (DA), Source Address (SA), Reserved byte (Reserved), tag, payload (pdu), CRC. The format of the tag may be defined by reference to the following: the tag is 32 bits with the upper 16 bits reserved and only the lower 16 bits used, and its position is between the reserved bytes and payload of the packet.
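The metro packet's 32-bit label field (upper 16 bits reserved, lower 16 bits carried, placed between the reserved bytes and the payload) and the incoming-to-outgoing label swap can be sketched as below. The table-driven `switch_label` mirrors the 0x0000 to 0x0001 example above; function names are invented, and the CRC is omitted for brevity.

```python
import struct

def build_metro_packet(da: bytes, sa: bytes, label: int,
                       payload: bytes) -> bytes:
    """Metro packet: DA(8) | SA(8) | Reserved(2) | Label(4) | Payload.
    Only the lower 16 bits of the 32-bit label are used; the upper 16
    bits stay zero (reserved)."""
    assert len(da) == 8 and len(sa) == 8 and 0 <= label <= 0xFFFF
    return da + sa + b"\x00\x00" + struct.pack(">I", label) + payload

def switch_label(packet: bytes, label_table: dict) -> bytes:
    """Replace the incoming label with the outgoing label from a table
    distributed by the metropolitan area server (centralized allocation,
    unlike MPLS's negotiated labels)."""
    in_label = struct.unpack(">I", packet[18:22])[0] & 0xFFFF
    out_label = label_table[in_label]
    return packet[:18] + struct.pack(">I", out_label) + packet[22:]
```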
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The live broadcast control method and the live broadcast control device provided by the invention are described in detail, specific examples are applied in the text to explain the principle and the implementation mode of the invention, and the description of the above embodiments is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.
Claims (10)
1. A live broadcast control method is applied to a live broadcast client, the live broadcast client is in communication connection with a video networking server, and the method comprises the following steps:
starting a plurality of cameras in the mobile equipment;
acquiring multi-channel video data respectively acquired by the plurality of cameras;
synthesizing the multi-channel video data to obtain target video data;
encoding the target video data to obtain encoded data;
and sending the coded data to the video networking server so as to send the coded data to an on-demand client through the video networking server.
2. The method of claim 1, wherein the plurality of cameras comprises a front-facing camera and a rear-facing camera, and wherein the starting the plurality of cameras in the mobile device comprises:
calling a video acquisition interface provided by the mobile equipment;
and starting a front camera and a rear camera in the mobile equipment through the video acquisition interface.
3. The method of claim 1, wherein said synthesizing the plurality of video data to obtain the target video data comprises:
respectively storing the multiple paths of video data into a plurality of video buffers;
and synthesizing the multi-channel video data in the plurality of video buffers according to preset synthesis parameters to obtain target video data.
4. The method according to claim 3, wherein the video data is composed of a plurality of frame images; the synthesizing the multiple paths of video data in the multiple video buffers according to the preset synthesis parameters to obtain the target video data comprises the following steps:
reading each frame of picture image of the multi-channel video data from the plurality of video buffer areas respectively;
and synthesizing each frame of picture image of the multi-channel video data according to preset synthesis parameters to obtain target video data.
5. The method according to claim 3 or 4, wherein the preset synthesis parameters include one or more of resolution, size, and layout style.
6. The method of claim 1, further comprising, after the step of synthesizing the plurality of video data to obtain the target video data:
and displaying the target video data on a display screen of the mobile equipment.
7. The method of claim 1, wherein encoding the target video data into encoded data comprises:
and reading the target video data according to a preset frame rate, and coding based on an H264 protocol to obtain coded data.
8. A live broadcast control apparatus, characterized in that it is applied to a live broadcast client, the live broadcast client is in communication connection with a video networking server, and the apparatus comprises:
the camera starting module is used for starting a plurality of cameras in the mobile equipment;
the video data acquisition module is used for acquiring the multi-channel video data respectively acquired by the plurality of cameras;
the video data synthesis module is used for synthesizing the multi-channel video data to obtain target video data;
the video data coding module is used for coding the target video data to obtain coded data;
and the coded data sending module is used for sending the coded data to the video networking server so as to send the coded data to the on-demand client through the video networking server.
9. An electronic device, comprising:
one or more processors; and
one or more machine readable media having instructions stored thereon, which when executed by the one or more processors, cause the electronic device to perform the steps of the method of any of claims 1-7.
10. A computer-readable storage medium, characterized in that it stores a computer program for causing a processor to perform the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110051301.7A CN112866725A (en) | 2021-01-14 | 2021-01-14 | Live broadcast control method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110051301.7A CN112866725A (en) | 2021-01-14 | 2021-01-14 | Live broadcast control method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112866725A true CN112866725A (en) | 2021-05-28 |
Family
ID=76006369
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110051301.7A Pending CN112866725A (en) | 2021-01-14 | 2021-01-14 | Live broadcast control method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112866725A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113891044A (en) * | 2021-09-29 | 2022-01-04 | 天翼物联科技有限公司 | Video live broadcast method and device, computer equipment and computer readable storage medium |
CN114143487A (en) * | 2021-12-15 | 2022-03-04 | 深圳市前海手绘科技文化有限公司 | Video recording method and device |
CN114501136A (en) * | 2022-01-12 | 2022-05-13 | 惠州Tcl移动通信有限公司 | Image acquisition method and device, mobile terminal and storage medium |
CN114501136B (en) * | 2022-01-12 | 2023-11-10 | 惠州Tcl移动通信有限公司 | Image acquisition method, device, mobile terminal and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106331576A (en) * | 2015-06-25 | 2017-01-11 | 中兴通讯股份有限公司 | Multimedia service processing method, system and device |
WO2018121405A1 (en) * | 2016-12-30 | 2018-07-05 | 北京奇虎科技有限公司 | Method and device for transmitting stream data |
CN108881957A (en) * | 2017-11-02 | 2018-11-23 | 北京视联动力国际信息技术有限公司 | A kind of mixed method and device of multimedia file |
CN109309787A (en) * | 2018-09-07 | 2019-02-05 | 视联动力信息技术股份有限公司 | A kind of operating method and system of panoramic video data |
CN110166728A (en) * | 2019-05-29 | 2019-08-23 | 视联动力信息技术股份有限公司 | A kind of view networking meeting method and device |
CN111107299A (en) * | 2019-12-05 | 2020-05-05 | 视联动力信息技术股份有限公司 | Method and device for synthesizing multi-channel video |
CN111901572A (en) * | 2020-08-14 | 2020-11-06 | 广州盈可视电子科技有限公司 | Multi-channel video stream synthesis method and device |
History
- 2021-01-14: CN application CN202110051301.7A filed, publication CN112866725A, status Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108574688B (en) | Method and device for displaying participant information | |
CN109640029B (en) | Method and device for displaying video stream on wall | |
CN109640028B (en) | Method and device for carrying out conference combining on multiple video networking terminals and multiple Internet terminals | |
CN109803111B (en) | Method and device for watching video conference after meeting | |
CN109495713B (en) | Video conference control method and device based on video networking | |
CN111107299A (en) | Method and device for synthesizing multi-channel video | |
CN109168064B (en) | Synchronous display method and system of electronic data | |
CN110049271B (en) | Video networking conference information display method and device | |
CN112866725A (en) | Live broadcast control method and device | |
CN110049273B (en) | Video networking-based conference recording method and transfer server | |
CN113194278A (en) | Conference control method and device and computer readable storage medium | |
CN111131754A (en) | Control split screen method and device of conference management system | |
CN108574816B (en) | Video networking terminal and communication method and device based on video networking terminal | |
CN110830750A (en) | Data transmission method and device based on video network | |
CN110460898B (en) | Video processing method, system, device and machine readable medium | |
CN110049268B (en) | Video telephone connection method and device | |
CN110149305B (en) | Video network-based multi-party audio and video playing method and transfer server | |
CN110769179B (en) | Audio and video data stream processing method and system | |
CN111131743A (en) | Video call method and device based on browser, electronic equipment and storage medium | |
CN110769297A (en) | Audio and video data processing method and system | |
CN110392275B (en) | Sharing method and device for manuscript demonstration and video networking soft terminal | |
CN109963123B (en) | Camera control method and device | |
CN111447396A (en) | Audio and video transmission method and device, electronic equipment and storage medium | |
CN108881793B (en) | Data processing method and device for video network | |
CN113315940A (en) | Video call method, device and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||