CN112672174B - Split-screen live broadcast method, acquisition device, playing device and storage medium - Google Patents
- Publication number
- CN112672174B (application CN202011456812.9A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Landscapes
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The embodiments of the invention relate to the field of live video broadcasting, and disclose a split-screen live broadcast method, an acquisition device, a playing device and a storage medium. The split-screen live broadcast method comprises the following steps: acquiring original videos captured simultaneously by at least two cameras of the same device; adjusting the pictures of the original videos according to a preset picture rule to obtain adjusted videos; and generating video live streams of the adjusted videos and sending them to a playing end. The playing end parses the received video live streams to obtain the adjusted videos and plays each adjusted video in a split-screen playing window, carrying out split-screen live broadcast and thereby realizing automatic adjustment of split-screen live broadcast pictures.
Description
Technical Field
The embodiment of the invention relates to the field of video live broadcasting, in particular to a split-screen live broadcasting method, acquisition equipment, playing equipment and a storage medium.
Background
Video live broadcasting uses the internet and streaming media technology to broadcast live. Because it fuses rich elements such as images, text and sound, video has gradually become the mainstream mode of expression on the internet. To better present a live scene, the split-screen live broadcast mode is often used, playing live pictures shot from multiple angles.
In the related split-screen live broadcast technology, different acquisition devices are used to capture videos, and the captured videos are played in different split screens. When the video pictures of different screens are inconsistent, for example when the character sizes differ too much, staff are required to adjust the acquisition devices so that the shot pictures are coordinated.
Therefore, the related split-screen live broadcast method has the following problem: the devices need to be adjusted manually to obtain live video with coordinated picture content.
Disclosure of Invention
The embodiments of the invention aim to provide a split-screen live broadcast method, an acquisition device, a playing device and a storage medium, which can automatically adjust split-screen live broadcast pictures.
In order to solve the above technical problem, an embodiment of the invention provides a split-screen live broadcast method applied to an acquisition end, comprising the following steps: acquiring original videos captured simultaneously by at least two cameras of the same device; adjusting the pictures of the original videos according to a preset picture rule to obtain adjusted videos; and generating video live streams of the adjusted videos, and sending the video live streams to a playing end for playing by the playing end.
The embodiment of the invention also provides a split-screen live broadcast method which is applied to the playing end and comprises the following steps: receiving a video live stream sent by an acquisition end; analyzing the live video stream to obtain an adjusted video; and respectively playing the adjustment video in each split-screen playing window to perform split-screen live broadcast.
The embodiment of the invention also provides acquisition equipment, which comprises: at least one processor; a memory communicatively coupled to the at least one processor; at least two cameras in communication with the processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the split-screen live broadcast method applied to the acquisition end.
The embodiment of the invention also provides a playing device, which comprises: at least one processor; a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the split-screen live broadcast method applied to the playing end.
The embodiment of the invention also provides a computer readable storage medium which stores a computer program, and the computer program realizes the split-screen live broadcast method applied to the acquisition end or the playing end when being executed by a processor.
Compared with the prior art, the embodiments of the invention acquire original videos captured simultaneously by at least two cameras of the same device; adjust the pictures of the original videos according to a preset picture rule to obtain adjusted videos; and generate video live streams of the adjusted videos and send them to a playing end, so that the playing end can receive the video live streams, parse the adjusted videos from them, and play the adjusted videos in split-screen playing windows for split-screen live broadcast, thereby realizing automatic adjustment of split-screen live broadcast pictures without manual adjustment of the acquisition devices.
In addition, adjusting the picture of the original video according to a preset picture rule to obtain an adjusted video comprises: adjusting the picture of the original video according to a preset picture rule set for the picture content of the video. Because the rule is set for the video picture content, the picture of the original video can be adjusted according to that content, and the split-screen live broadcast pictures can be automatically adjusted in a personalized manner according to the video content.
In addition, the preset picture rule may include: the sizes of the characters in all video pictures are the same. Adjusting the picture of the original video according to a preset picture rule set for the video picture content then comprises: detecting the character sizes in the original videos respectively; calculating, according to the character sizes in the original videos, the amplification factors that make the character sizes in the original videos equal; and adjusting the pictures of the original videos according to the amplification factors to obtain adjusted videos with equal character sizes. This realizes automatic adjustment of split-screen live broadcast pictures in which the characters in the split-screen live video pictures have the same size.
In addition, after the adjusting video is played in each split-screen playing window respectively and split-screen live broadcasting is carried out, the method further comprises the following steps: and adjusting the size of the split-screen playing window according to a preset window rule. The size of the split-screen playing window is adjusted according to a preset window rule, so that the size of a display picture of the played adjusted video on the playing terminal equipment is adjusted, and the split-screen live broadcast picture is automatically adjusted.
Drawings
One or more embodiments are illustrated by way of example, and not limitation, in the figures of the accompanying drawings, in which like reference numerals denote similar elements. The figures are not to be construed as limiting unless otherwise indicated.
Fig. 1 is a flowchart of a split-screen live broadcast method according to a first embodiment of the present invention;
fig. 2 is a flowchart of a split-screen live broadcast method according to a second embodiment of the present invention;
fig. 3 is a diagram illustrating the original video character size of a split-screen live broadcast method according to a second embodiment of the present invention;
fig. 4 is a schematic diagram of adjusting the size of a video character according to a split-screen live broadcast method according to a second embodiment of the present invention;
fig. 5 is a flowchart of a split-screen live broadcast method according to a third embodiment of the present invention;
fig. 6 is a flowchart of a split-screen live broadcast method according to a fourth embodiment of the present invention;
fig. 7 is a schematic structural view of an acquisition device according to a fifth embodiment of the present invention;
fig. 8 is a schematic structural diagram of a playback device according to a sixth embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the invention clearer, the embodiments of the invention are described in detail below with reference to the accompanying drawings. However, those of ordinary skill in the art will understand that numerous technical details are set forth in the embodiments in order to provide a better understanding of the present application, and that the technical solutions claimed in the present application can still be implemented without these technical details and with various changes and modifications based on the following embodiments. The following embodiments are divided for convenience of description and should not be construed as limiting the specific implementation of the invention; the embodiments can be combined with and refer to each other where there is no contradiction.
The first embodiment of the invention relates to a split-screen live broadcast method applied to an acquisition end. The specific flow is shown in fig. 1.
The split-screen live broadcast method is used for live video broadcasting. The live video broadcast can be realized through a live broadcast system: live acquisition software running on the acquisition end device obtains original videos captured simultaneously by at least two cameras in the device, adjusts the original videos to obtain adjusted videos, generates video live streams of the adjusted videos, and uploads the video live streams to the server side of the live broadcast system, which forwards them to the playing end; the playing end device, running live viewing software, parses the received video live streams to obtain the adjusted videos and plays them in split-screen playing windows, carrying out split-screen live broadcast. The live video broadcast can also be realized in a point-to-point transmission mode, in which the acquisition end device sends the video live streams directly to the playing end device. The acquisition end device can be a mobile phone with front and rear cameras, or a camera with at least two lenses. The playing end device can be any device that can receive video data and play video, such as a mobile phone or a computer.
The implementation details of the split-screen live broadcast method of the present embodiment are specifically described below, and the following details are provided only for facilitating understanding, and are not necessary for implementing the present embodiment.
In step 101, the acquisition end device obtains original videos captured simultaneously by at least two cameras in the device. The at least two cameras can be started simultaneously or sequentially; the acquisition end device only needs to obtain, after the cameras are started, at least two original videos that were captured at the same time.
In one example, before the original videos captured by at least two cameras of the same device are obtained, the acquisition end device opens at least two cameras simultaneously to capture the original videos when receiving a live broadcast start command. When the acquisition end device has more than two cameras, different numbers of cameras can be started as needed; the split-screen live broadcast method only requires that at least two cameras be started and does not limit the specific number.
In step 102, the pictures of the original videos are adjusted according to a preset picture rule to obtain adjusted videos. For example, the picture of each original video may be adjusted respectively according to a preset picture rule set in advance by the acquisition end user, such as a live broadcast anchor, to obtain each adjusted video. The preset picture rule may be: enlarging the video picture of a certain camera, reducing the video picture of a certain camera, making all the video pictures the same size, directly adopting the original picture, and so on. Preferably, in order to identify the original videos captured by the different cameras and the corresponding adjusted videos, an identifier may be set for each original video and its corresponding adjusted video.
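The preset picture rules above can be sketched as a simple lookup from rule name to target picture size. The rule names and scale factors below are illustrative assumptions; the patent does not fix concrete values.

```python
# Illustrative table of preset picture rules set in advance by the
# acquisition end user; names and factors are assumptions, not from
# the patent.
PRESET_RULES = {
    "enlarge": 1.5,    # enlarge a chosen camera's picture
    "reduce": 0.75,    # reduce a chosen camera's picture
    "original": 1.0,   # directly adopt the original picture
}

def adjusted_size(width: int, height: int, rule: str) -> tuple[int, int]:
    """Return the target picture size for one camera's video under a rule."""
    factor = PRESET_RULES[rule]
    return round(width * factor), round(height * factor)
```

A rule such as "make all video pictures the same size" would instead pick one target size and map every camera's picture to it.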
In step 103, video live streams of the adjusted videos are generated: each adjusted video may be separately encoded and encapsulated to generate a video live stream, and each video live stream is sent to the playing end. During transmission, as each video live stream produces data blocks in real time, the data blocks can be sent immediately to the playing end according to the transmission protocol adopted by the live broadcast.
Preferably, a corresponding identifier can be set for each video live stream corresponding to a different adjusted video, and the identifier is carried in the video live stream data blocks during transmission. After receiving a data block, the playing end can then quickly identify from it the video live stream to which the block belongs, and thereby the corresponding adjusted video.
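A minimal sketch of carrying a stream identifier in each data block so the playing end can sort received blocks back into their adjusted videos. The one-byte identifier format is an assumption for illustration, not a format defined by the patent.

```python
def tag_block(stream_id: int, payload: bytes) -> bytes:
    """Prefix a data block with a one-byte stream identifier (0-255)."""
    return bytes([stream_id]) + payload

def untag_block(block: bytes) -> tuple[int, bytes]:
    """Recover the stream identifier and the original payload."""
    return block[0], block[1:]

def demux(blocks: list[bytes]) -> dict[int, list[bytes]]:
    """Sort received blocks into per-stream buffers at the playing end."""
    streams: dict[int, list[bytes]] = {}
    for block in blocks:
        sid, payload = untag_block(block)
        streams.setdefault(sid, []).append(payload)
    return streams
```

In practice a live protocol would carry such an identifier in its own header fields; the point is only that each block is self-describing.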
In this embodiment, original videos captured simultaneously by at least two cameras of the same device are obtained; the pictures of the original videos are adjusted according to a preset picture rule to obtain adjusted videos; and video live streams of the adjusted videos are generated and sent to the playing end, so that the playing end can receive the video live streams, parse the adjusted videos from them, and play the adjusted videos in split-screen playing windows for split-screen live broadcast.
The above division of the method steps is for clarity of description only; when implemented, steps may be combined into one step or a step may be split into multiple steps, and as long as the same logical relationship is included, they are within the protection scope of this patent. Adding insignificant modifications to the algorithm or flow, or introducing insignificant designs, without altering the core design of the algorithm and flow, is also within the protection scope of this patent.
The second embodiment of the invention relates to a split-screen live broadcast method which is applied to an acquisition end. The second embodiment is substantially the same as the first embodiment, and differs mainly in that: in the first embodiment, the original video frame is adjusted according to a preset frame rule set in advance by the user at the acquisition end. In the second embodiment of the present invention, the picture of the original video is adjusted according to the preset picture rule set for the content of the video picture.
The specific flow of this embodiment is shown in fig. 2, and includes:
Step 201 and step 203 are substantially the same as step 101 and step 103 in the first embodiment, and will not be described again.
In step 202, the picture of the original video is adjusted according to a preset picture rule set for the video picture content. Different preset picture rules may be set for different live broadcast content; for example, for a live broadcast of a basketball game, the part of the picture where the basketball is located may be enlarged.
In this embodiment, the picture of the original video is adjusted according to the preset picture rule set for the video picture content to obtain the adjusted video, so that the picture of the original video can be adjusted according to the video picture content, and the split-screen live broadcast picture can be automatically adjusted individually according to the video content.
In one example, the preset picture rule may be to make the character sizes in the respective video pictures the same. The acquisition end device detects the character size in each original video; if each original video has a single principal character, the original video picture can be adjusted according to the size of that principal character. Assume the acquisition end device is a mobile phone with front and rear cameras, so that the two cameras shoot two original videos, each with picture size w*h. By detecting the characters in each picture, the device finds that one character is larger than all other characters by more than a preset threshold, or that only one character appears in the picture, and judges that character to be the single principal character of the original video, as shown in fig. 3. To obtain adjusted videos in which the two principal characters have equal sizes, the two original video pictures need to be enlarged to different degrees, with the enlarged picture having the same size as the picture of the original video. The acquisition end device obtains the detection frames of the two principal characters using a character detection algorithm and, for each original video, establishes a rectangular coordinate system with the upper left corner of the picture as the origin of coordinates and the pixels of the video picture as units.
Let the upper-left and lower-right coordinates of the principal character detection frame of the first original video be (x1, y1) and (x2, y2), so that its width and height are w1 = x2 - x1 and h1 = y2 - y1; likewise, let the principal character detection frame of the second original video have upper-left and lower-right coordinates (x1', y1') and (x2', y2'), width w2 = x2' - x1' and height h2 = y2' - y1'. Since the enlarged principal character detection frame should not exceed the picture size of the original video, the maximum amplification factor of the first detection frame is k1max = min(w/w1, h/h1), and that of the second detection frame is k2max = min(w/w2, h/h2). To make the two principal characters the same size, the heights of the enlarged detection frames of the first and second original videos must be equal, i.e. k1*h1 = k2*h2, so the amplification factors used in actual enlargement can be calculated as follows: if h1*k1max <= h2*k2max, the actual amplification factor of the first detection frame is k1 = k1max and that of the second is k2 = k1*h1/h2; otherwise, k2 = k2max and k1 = k2*h2/h1. After the original video picture is enlarged, since the enlarged picture has the same size as the picture of the original video, part of the original picture must be cropped.
The coordinates of the reserved region of interest (Region Of Interest, "ROI") in each original video picture are calculated according to the preset rule that the principal character detection frame is bottom-aligned longitudinally and centred transversely in the enlarged picture: for an actual amplification factor k, the ROI has width w/k and height h/k, its horizontal centre coincides with the horizontal centre (x1+x2)/2 of the detection frame, and its bottom edge coincides with the bottom edge y2 of the detection frame, each clamped to the boundary of the original picture. The ROI of each original video is cropped out of the original picture, and the picture size of each cropped ROI is multiplied by the corresponding actual amplification factor to obtain the picture of the adjusted video, as shown in fig. 4. A video live stream of the adjusted video is then generated according to step 203 and sent to the playing end; after the playing end obtains the adjusted videos and plays them in split screens, the user watches live pictures in which the principal characters have the same size.
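The amplification-factor and ROI calculation above can be sketched as follows. This is a reconstruction under the stated constraints (the enlarged detection frame must fit the picture, the two scaled frame heights are made equal, the kept ROI is bottom-aligned and horizontally centred on the detection frame); function and variable names are illustrative.

```python
def compute_factors(box1, box2, w, h):
    """Actual amplification factors k1, k2 with k1*h1 == k2*h2.

    box = (x1, y1, x2, y2) is a principal character detection frame;
    w, h is the original picture size shared by both videos.
    """
    w1, h1 = box1[2] - box1[0], box1[3] - box1[1]
    w2, h2 = box2[2] - box2[0], box2[3] - box2[1]
    k1_max = min(w / w1, h / h1)   # enlarged frame must fit the picture
    k2_max = min(w / w2, h / h2)
    if h1 * k1_max <= h2 * k2_max:
        k1 = k1_max
        k2 = k1 * h1 / h2          # match the first frame's scaled height
    else:
        k2 = k2_max
        k1 = k2 * h2 / h1
    return k1, k2

def roi(box, k, w, h):
    """ROI of size (w/k, h/k): detection frame bottom-aligned and
    horizontally centred, clamped to the original picture."""
    rw, rh = w / k, h / k
    cx = (box[0] + box[2]) / 2
    left = min(max(cx - rw / 2, 0.0), w - rw)
    bottom = min(max(box[3], rh), h)
    return (left, bottom - rh, left + rw, bottom)
```

Scaling the cropped ROI by k then yields an adjusted picture of the original size w*h with the principal character heights equal across the two videos.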
In this embodiment, by detecting the sizes of the characters in the original video respectively, calculating the amplification factors for equalizing the sizes of the characters in the original video respectively according to the sizes of the characters in the original video, and adjusting the frames of the original video according to the amplification factors, an adjusted video with the equal sizes of the characters is obtained, so as to realize automatic adjustment of the split-screen live video frames with the same sizes of the characters in the split-screen live video frames.
The third embodiment of the invention relates to a split-screen live broadcast method applied to a playing end. That is, the split-screen live broadcast method of this embodiment is applied to live video broadcasting, either through a playing end device in a live broadcast system or in a point-to-point live broadcast mode, as introduced in the first embodiment. The playing end device parses the adjusted videos from the received video live streams, namely the video live streams generated by the acquisition end, plays them, and carries out split-screen live broadcast.
The specific flow of this embodiment is shown in fig. 5, and includes:
In step 501, the playing end device receives each video live stream sent by the acquisition end device, receiving the data blocks of each video live stream according to the transmission protocol adopted by the live broadcast.
In step 502, the playing end device extracts the live video data from the video live stream data blocks: it sorts the data blocks of each stream, parses them according to the transmission protocol used to obtain the encoded and encapsulated adjusted video data, and decodes that data to obtain the adjusted videos.
In step 503, each adjusted video is played in the corresponding split-screen playing window according to a preset rule, carrying out split-screen live broadcast. The number of split-screen playing windows is equal to the number of video live streams, that is, the number of adjusted videos. A video player is provided in each split-screen playing window to play the adjusted video.
In one example, after receiving the video live streams sent by the acquisition end, the playing end device may set the number and positions of the split-screen playing windows according to the number of video live streams, and set the size of each split-screen playing window. A default preset window scheme is that the number of playing windows corresponds to the number of video live streams and the windows equally divide the screen of the user device.
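The default window scheme can be sketched as below, under the assumption (ours, not the patent's) that the windows are stacked vertically and divide the device screen equally.

```python
def default_layout(n_streams: int, screen_w: int, screen_h: int):
    """One (x, y, width, height) window per video live stream,
    stacked vertically with equal sizes."""
    win_h = screen_h / n_streams
    return [(0, i * win_h, screen_w, win_h) for i in range(n_streams)]
```

A side-by-side or grid arrangement would follow the same pattern, only splitting the width or both dimensions instead.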
In this embodiment, the live video stream is received and the live video stream is analyzed to obtain the adjusted video, and the adjusted video is played in each split-screen playing window for split-screen live broadcast.
It is to be noted that this embodiment is a method example corresponding to the first to second embodiments, and can be implemented in cooperation with the first to second embodiments. The related technical details mentioned in the first embodiment are still valid in this embodiment, and in order to reduce repetition, a detailed description is omitted here. Accordingly, the related art details mentioned in the present embodiment can also be applied to the first embodiment.
The fourth embodiment of the invention relates to a split-screen live broadcast method applied to a playing end. The fourth embodiment is substantially the same as the third embodiment, and differs mainly in that: in the third embodiment, the size of the split-screen playing window is fixed, whereas in the fourth embodiment the size of the split-screen playing window may be adjusted according to a preset window rule.
The specific flow of this embodiment is shown in fig. 6, and includes:
The steps 601, 602, 603 are substantially the same as the steps 501, 502, 503 in the third embodiment, and will not be described again.
In step 604, the preset window rule may be preset by the playing end user, may be personalized for the user by the playing end device according to user data counted by the live viewing software, or may be preset for the video picture content; for example, for a live broadcast of a basketball game, the split-screen playing window where the basketball is located may be adjusted. When a split-screen playing window is adjusted, the size of the video picture in that window is adjusted correspondingly to fit the window.
In one example, the preset window rule is: enlarge the window the user is watching. The playing end device can capture facial images of the user with its camera, analyze the change of the user's line of sight with a gaze tracking algorithm, determine which split-screen playing window the user is watching, enlarge that window to a preset size, and correspondingly shrink the other split-screen playing windows.
In one example, the preset window rule is: enlarge the highlight window. This rule is applied to competitive live broadcasts. When the playing end device identifies that the live video content is of a competitive type, it identifies the competitive activity and obtains the action scoring algorithm corresponding to that activity. The real-time actions of the characters in the adjusted videos are scored according to the pre-trained action scoring algorithm, and the sizes of the split-screen playing windows are adjusted according to the scoring results. Specifically, the adjusted video can be captured in real time and the character actions in the screenshots scored; the action scoring algorithm can be constructed from two deep neural networks. The first deep neural network performs human body detection and outputs a rectangular detection frame of the human body; the input of the second deep neural network is the human body image inside the rectangular detection frame, and its output is an action score. The training data of the action scoring algorithm are sports action images with scores, taken from event videos; the score of an image is positively correlated with the decibel level of the audience applause after the image is played, with no applause defined as 0 points, the loudest applause defined as 100 points, and other applause mapped linearly to 0-100 points according to its decibel level. The playing end device adjusts the sizes of the split-screen playing windows in real time according to the score proportions.
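The linear applause-to-score labelling described above can be sketched as follows: no applause maps to 0 points, the loudest applause in the event to 100 points, and intermediate loudness linearly in between. The clamping of out-of-range values is our assumption for robustness.

```python
def applause_score(db: float, db_max: float) -> float:
    """Map applause loudness (decibels) to a 0-100 action-score label,
    linear between silence (0) and the event's loudest applause (db_max)."""
    if db_max <= 0:
        return 0.0
    db = min(max(db, 0.0), db_max)
    return 100.0 * db / db_max
```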
Preferably, in order to avoid frequent and large-scale adjustment of the split-screen playing window sizes, a recent average score of the person actions in each split-screen playing window may be calculated, for example over the last 100 frames, and the windows adjusted according to the averages. Suppose there are two split-screen playing windows with average scores s1 and s2, and the screen of the playing-end device measures wu * hu. The display widths of the two split-screen playing windows are then wu * s1 / (s1 + s2) and wu * s2 / (s1 + s2), respectively.
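The smoothed width allocation above can be expressed in a few lines. The `WindowScorer` class name and the equal-split fallback when all averages are zero are assumptions of this sketch; the width formula is the one from the text, wu * si / sum(s).

```python
from collections import deque

class WindowScorer:
    """Running average of the last n per-frame action scores for one window."""
    def __init__(self, n: int = 100):
        self.scores = deque(maxlen=n)  # old frames fall off automatically

    def add(self, score: float) -> None:
        self.scores.append(score)

    def average(self) -> float:
        return sum(self.scores) / len(self.scores) if self.scores else 0.0

def split_widths(screen_w: float, averages: list[float]) -> list[float]:
    """Divide the screen width in proportion to each window's average score:
    width_i = screen_w * s_i / sum(s)."""
    total = sum(averages)
    if total == 0:
        # No scores yet: fall back to an even split (an assumption).
        return [screen_w / len(averages)] * len(averages)
    return [screen_w * s / total for s in averages]
```

With a 1200-pixel-wide screen and recent averages of 60 and 40, the two windows would be 720 and 480 pixels wide.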
In one example, when broadcasting a sports competition, the playing-end device may also determine the head pose of the person in each frame of the adjusted video in real time using a head-pose estimation algorithm, and keep each person's face turned toward the center of the device screen through operations such as swapping the positions of the live frames or mirror-flipping them, so as to create a confrontational atmosphere.
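The mirror-flip decision can be sketched as below. The head-pose estimation itself is assumed to be done elsewhere; the sign convention for yaw (positive = face turned toward the viewer's right) is an assumption of this example, not something the text specifies.

```python
def should_mirror(window_center_x: float, screen_w: float, yaw_deg: float) -> bool:
    """Decide whether to mirror a live frame horizontally so the person
    faces the screen centre. Assumed convention: positive yaw means the
    face is turned toward the viewer's right. A window left of centre
    should show a face turned right (toward the centre); mirror when it
    does not, and symmetrically for windows on the right half."""
    on_left = window_center_x < screen_w / 2
    faces_right = yaw_deg > 0
    return on_left != faces_right
```

So a left-half window whose person already faces right is left untouched, while the same window with a left-facing person is mirrored.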
In this embodiment, the sizes of the split-screen playing windows are adjusted according to the preset window rule, so that the display size of the played adjusted video on the playing-end device is adjusted and the split-screen live picture is adjusted automatically.
The steps of the above methods are divided only for clarity of description. When implemented, they may be combined into one step, or a single step may be split into multiple steps; as long as the same logical relationship is preserved, such variants fall within the protection scope of this patent. Likewise, adding insignificant modifications to the algorithm or flow, or introducing insignificant designs, without altering the core design of the algorithm and flow, is within the protection scope of this patent.
A fifth embodiment of the present invention relates to an acquisition device, as shown in fig. 7, comprising: at least one processor 701; a memory 702 communicatively coupled to the at least one processor 701; and at least two cameras 703 communicatively coupled to the at least one processor 701. The memory 702 stores instructions executable by the at least one processor 701, and the instructions are executed by the at least one processor 701 to perform the split-screen live broadcast method of the first or second embodiment.
Taking a two-camera acquisition device as an example, the memory 702, the processor 701, camera A 7031, and camera B 7032 are connected by a bus. The bus may include any number of interconnected buses and bridges linking the circuits of the one or more processors 701, the memory 702, and the cameras 7031 and 7032 together. The bus may also connect various other circuits, such as peripherals, voltage regulators, and power-management circuits; these are well known in the art and are not described further here. A bus interface provides an interface between the bus and a transceiver. The transceiver may be a single element or multiple elements, such as several receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. Information processed by the processor 701 is transmitted over a wireless medium via an antenna, which also receives information and passes it to the processor 701.
The processor 701 is responsible for managing the bus and for general processing, and may provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory 702 may be used to store information used by the processor 701 in performing operations.
A sixth embodiment of the present invention relates to a playback apparatus, as shown in fig. 8, including: at least one processor 801; a memory 802 communicatively coupled to the at least one processor; the memory 802 stores instructions executable by the at least one processor 801, and the instructions are executed by the at least one processor 801 to perform the split-screen live broadcast method according to the third embodiment or the fourth embodiment.
The memory 802 and the processor 801 are connected by a bus. The bus may comprise any number of interconnected buses and bridges linking the circuits of the one or more processors 801 and the memory 802 together. The bus may also connect various other circuits, such as peripherals, voltage regulators, and power-management circuits; these are well known in the art and are not described further here. A bus interface provides an interface between the bus and a transceiver. The transceiver may be a single element or multiple elements, such as several receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. Information processed by the processor 801 is transmitted over a wireless medium via an antenna, which also receives information and passes it to the processor 801.
The processor 801 is responsible for managing the bus and for general processing, and may provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory 802 may be used to store information used by the processor 801 in performing operations.
A seventh embodiment of the present invention relates to a computer-readable storage medium storing a computer program. The computer program implements the above-described method embodiments when executed by a processor.
That is, it will be understood by those skilled in the art that all or part of the steps of the above method embodiments may be implemented by a program stored in a storage medium, where the program includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods of the embodiments described herein. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples of carrying out the invention and that various changes in form and details may be made therein without departing from the spirit and scope of the invention.
Claims (9)
1. The split-screen live broadcast method is characterized by being applied to an acquisition end and comprising the following steps of:
acquiring original videos acquired by at least two cameras of the same equipment simultaneously;
adjusting the picture of the original video according to a preset picture rule to obtain an adjusted video; the preset picture rule is preset by a user of the acquisition end;
generating a video live stream of the adjustment video, sending the video live stream to a playing end, analyzing the video live stream by the playing end to obtain the adjustment video, and playing the adjustment video in each split-screen playing window respectively for split-screen live broadcasting;
after split-screen live broadcasting, the broadcasting terminal captures facial images of the user by using a camera of the broadcasting terminal device, analyzes the line-of-sight change of the user by adopting a line-of-sight tracking algorithm, judges the split-screen broadcasting window currently watched by the user, enlarges the split-screen broadcasting window currently watched by the user to a preset size, and reduces the split-screen broadcasting windows except the split-screen broadcasting window currently watched by the user.
2. The split-screen live broadcast method according to claim 1, wherein the adjusting the picture of the original video according to the preset picture rule to obtain the adjusted video comprises:
and adjusting the picture of the original video according to a preset picture rule set for the picture content of the video to obtain the adjusted video.
3. The split-screen live broadcast method according to claim 2, wherein the preset picture rule includes: the sizes of the persons in all video pictures are the same;
the step of adjusting the picture of the original video according to the preset picture rule set for the picture content of the video to obtain the adjusted video comprises the following steps:
respectively detecting the person sizes in the original video;
according to the person sizes in the original video, respectively calculating amplification factors for making the person sizes in the original video equal;
and adjusting the picture of the original video according to the amplification factors to obtain the adjusted video with equal person sizes.
4. The split-screen live broadcast method according to claim 1, further comprising, before the acquiring the original video acquired by at least two cameras of the same device simultaneously:
and after receiving the split-screen live broadcast command, simultaneously opening the at least two cameras, and collecting the original video.
5. The split-screen live broadcasting method is characterized by being applied to a broadcasting end and comprising the following steps of:
receiving a video live stream sent by an acquisition end; the video live stream is obtained by adjusting an original video acquired by the acquisition end according to a preset picture rule preset by a user of the acquisition end, and the original video is acquired by at least two cameras of the same equipment at the same time;
analyzing the video live stream to obtain the adjustment video;
respectively playing the adjustment video in each split-screen playing window to perform split-screen live broadcast;
wherein, play the said adjustment video in each split screen play window separately, after carrying on the split screen live broadcast, still include:
capturing a facial image of a user by using a camera of playing terminal equipment, analyzing the sight line change of the user by adopting a sight line tracking algorithm, judging a split-screen playing window currently watched by the user, amplifying the split-screen playing window currently watched by the user to a preset size, and shrinking the split-screen playing windows except the split-screen playing window currently watched by the user.
6. The split-screen live broadcast method according to claim 5, wherein after the adjusting video is played in each split-screen playing window, the method further comprises:
scoring real-time actions of the person in the adjusted video;
and adjusting the size of each split-screen playing window according to the scoring result.
7. An acquisition device, comprising:
at least one processor;
a memory communicatively coupled to the at least one processor;
at least two cameras communicatively coupled to the at least one processor;
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the split live method of any one of claims 1 to 4.
8. A playback device, comprising:
at least one processor;
a memory communicatively coupled to the at least one processor;
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the split live method of any one of claims 5 to 6.
9. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the split-screen live method of any one of claims 1 to 4 or the split-screen live method of any one of claims 5 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011456812.9A CN112672174B (en) | 2020-12-11 | 2020-12-11 | Split-screen live broadcast method, acquisition device, playing device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112672174A CN112672174A (en) | 2021-04-16 |
CN112672174B true CN112672174B (en) | 2023-07-07 |
Family
ID=75404215
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011456812.9A Active CN112672174B (en) | 2020-12-11 | 2020-12-11 | Split-screen live broadcast method, acquisition device, playing device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112672174B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113891103A (en) * | 2021-08-24 | 2022-01-04 | 广州方硅信息技术有限公司 | Live broadcast display method and device, storage medium and computer equipment |
CN114363685A (en) * | 2021-12-20 | 2022-04-15 | 咪咕文化科技有限公司 | Video interaction method and device, computing equipment and computer storage medium |
CN114449303B (en) * | 2022-01-26 | 2024-08-30 | 广州繁星互娱信息科技有限公司 | Live broadcast picture generation method and device, storage medium and electronic device |
CN114979746B (en) * | 2022-05-13 | 2024-03-12 | 北京字跳网络技术有限公司 | Video processing method, device, equipment and storage medium |
CN114845059B (en) * | 2022-07-06 | 2022-11-18 | 荣耀终端有限公司 | Shooting method and related equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106165430A (en) * | 2016-06-29 | 2016-11-23 | 北京小米移动软件有限公司 | Net cast method and device |
CN110062252A (en) * | 2019-04-30 | 2019-07-26 | 广州酷狗计算机科技有限公司 | Live broadcasting method, device, terminal and storage medium |
CN110784735A (en) * | 2019-11-12 | 2020-02-11 | 广州虎牙科技有限公司 | Live broadcast method and device, mobile terminal, computer equipment and storage medium |
CN111405339A (en) * | 2020-03-11 | 2020-07-10 | 咪咕互动娱乐有限公司 | Split screen display method, electronic equipment and storage medium |
CN111919451A (en) * | 2020-06-30 | 2020-11-10 | 深圳盈天下视觉科技有限公司 | Live broadcasting method, live broadcasting device and terminal |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103929611B (en) * | 2013-01-10 | 2017-04-05 | 杭州海康威视数字技术股份有限公司 | A kind of many picture paging player methods |
Also Published As
Publication number | Publication date |
---|---|
CN112672174A (en) | 2021-04-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112672174B (en) | Split-screen live broadcast method, acquisition device, playing device and storage medium | |
CN108810649B (en) | Image quality adjusting method, intelligent television and storage medium | |
US11012727B2 (en) | Predictive content delivery for video streaming services | |
US11863801B2 (en) | Method and device for generating live streaming video data and method and device for playing live streaming video | |
US8773498B2 (en) | Background compression and resolution enhancement technique for video telephony and video conferencing | |
JP5305557B2 (en) | Method for viewing audiovisual records at a receiver and receiver for viewing such records | |
US20080235724A1 (en) | Face Annotation In Streaming Video | |
US9305331B2 (en) | Image processor and image combination method thereof | |
EP2240885A1 (en) | Electronic devices that pan/zoom displayed sub-area within video frames in response to movement therein | |
JP2009005238A (en) | Coder and encoding method | |
CN113315927B (en) | Video processing method and device, electronic equipment and storage medium | |
CN111405339A (en) | Split screen display method, electronic equipment and storage medium | |
CN110234015A (en) | Live broadcast control method and device, storage medium and terminal | |
CN114531564B (en) | Processing method and electronic equipment | |
CN112584189A (en) | Live broadcast data processing method, device and system and computer readable storage medium | |
CN114979755A (en) | Screen projection method and device, terminal equipment and computer readable storage medium | |
CN113301342A (en) | Video coding method, network live broadcast method, device and terminal equipment | |
CN111246224A (en) | Video live broadcast method and video live broadcast system | |
CN113784084A (en) | Processing method and device | |
CN112422828B (en) | Image processing method, image processing apparatus, electronic device, and readable storage medium | |
CN111163280A (en) | Asymmetric video conference system and method thereof | |
WO2010070820A1 (en) | Image communication device and image communication method | |
CN112507798A (en) | Living body detection method, electronic device, and storage medium | |
CN107484005A (en) | Monitoring method, set top box, monitoring system and storage medium | |
US12081906B2 (en) | Parallel processing of digital images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||