CN110784735A - Live broadcast method and device, mobile terminal, computer equipment and storage medium - Google Patents


Info

Publication number
CN110784735A
CN110784735A
Authority
CN
China
Prior art keywords
video signal
camera
displaying
window
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911100568.XA
Other languages
Chinese (zh)
Inventor
曾文舟
康谋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huya Technology Co Ltd
Original Assignee
Guangzhou Huya Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huya Technology Co Ltd filed Critical Guangzhou Huya Technology Co Ltd
Priority to CN201911100568.XA
Publication of CN110784735A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

An embodiment of the invention discloses a live broadcast method and apparatus, a mobile terminal, computer equipment, and a storage medium. The method comprises: receiving a live preview operation; displaying, in response to the preview operation, a first video signal from a first camera and a second video signal from a second camera; receiving a selection operation; displaying, in response to the selection operation, the first video signal or the second video signal in a selected state; receiving a picture adjustment operation; and displaying the adjustment of the selected first or second video signal according to the picture adjustment operation. The anchor user can dynamically adjust the first and second video signals. Compared with fixed display modes such as picture-in-picture, top-bottom, and left-right layouts, this self-adjustment takes varied forms and is highly flexible, so the anchor user can better match the live picture to the live content and ensure the live broadcast effect.

Description

Live broadcast method and device, mobile terminal, computer equipment and storage medium
Technical Field
The embodiments of the invention relate to live broadcast technology, and in particular to a live broadcast method and apparatus, a mobile terminal, computer equipment, and a storage medium.
Background
With the continuous development of network technology, and especially the gradual improvement of mobile networks, live streaming by anchor users from mobile terminals anytime and anywhere has become popular.
At present, when live broadcasting with a mobile terminal, the front camera is usually used so that the anchor user can interact with the audience. When an outdoor scene needs to be shown, the anchor user switches to the rear camera so that the audience can watch the scenery.
In some scenes, both the anchor user and the scenery need to be shown. In such scenes, the anchor user tries to meet the requirement by adjusting the position and angle of the mobile terminal and moving his or her body.
However, it is difficult to coordinate the mobile terminal with the anchor user in this way, and even when they are well coordinated, part of the live picture is easily lost; for example, only part of the anchor's face may be captured.
Disclosure of Invention
Embodiments of the invention provide a live broadcast method and apparatus, a mobile terminal, computer equipment, and a storage medium, aiming to solve the problems that single-camera live broadcast is difficult to coordinate with the anchor user and that part of the live picture is easily lost.
In a first aspect, an embodiment of the present invention provides a live broadcasting method, including:
receiving a live preview operation;
displaying, in response to the preview operation, a first video signal from a first camera and a second video signal from a second camera;
receiving a selection operation;
displaying, in response to the selection operation, the first video signal or the second video signal in a selected state;
receiving a picture adjustment operation;
and displaying the adjustment of the first video signal or the second video signal in the selected state according to the picture adjustment operation.
In a second aspect, an embodiment of the present invention further provides a live broadcast apparatus, including:
a preview operation receiving module, configured to receive a live preview operation;
a video signal display module, configured to display, in response to the preview operation, a first video signal from the first camera and a second video signal from the second camera;
a selection operation receiving module, configured to receive a selection operation;
a video selection display module, configured to display, in response to the selection operation, the first video signal or the second video signal in a selected state;
a picture adjustment operation receiving module, configured to receive a picture adjustment operation;
and a video adjustment display module, configured to display the adjustment of the first video signal or the second video signal in the selected state according to the picture adjustment operation.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including:
a touch screen, configured to receive a live preview operation;
a display screen, configured to display, in response to the preview operation, a first video signal from the first camera and a second video signal from the second camera;
the touch screen is further configured to receive a selection operation;
the display screen is further configured to display, in response to the selection operation, the first video signal or the second video signal in a selected state;
the touch screen is further configured to receive a picture adjustment operation;
and the display screen is further configured to display the adjustment of the first video signal or the second video signal in the selected state according to the picture adjustment operation.
In a fourth aspect, an embodiment of the present invention further provides a computer device, where the computer device includes:
one or more processors;
a memory for storing one or more programs;
the first camera is used for acquiring a first video signal;
the second camera is used for acquiring a second video signal;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the live broadcast method as described in the first aspect.
In a fifth aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the live broadcasting method according to the first aspect.
In this embodiment, a live preview operation is received; a first video signal from a first camera and a second video signal from a second camera are displayed in response to the preview operation; a selection operation is received; the first video signal or the second video signal in a selected state is displayed in response to the selection operation; a picture adjustment operation is received; and the adjustment of the selected first or second video signal is displayed according to the picture adjustment operation. On one hand, broadcasting with two cameras simultaneously allows video signals to be collected for two objects at the same time, so both the anchor user and the scenery can be captured and loss of the live picture is avoided. On the other hand, the anchor user can dynamically adjust the first and second video signals. Compared with fixed display modes such as picture-in-picture, top-bottom, and left-right layouts, this autonomous adjustment takes various forms and is highly flexible, so the anchor user can better match the live picture to the live content and ensure the live broadcast effect.
Drawings
Fig. 1 is a flowchart of a live broadcast method according to an embodiment of the present invention;
fig. 2A to 2C are exemplary diagrams of display modes;
FIGS. 3A-3B are exemplary diagrams of selected operations;
FIGS. 4A to 4F are exemplary diagrams of the screen adjustment operation;
fig. 5 is a flowchart of a live broadcasting method according to a second embodiment of the present invention;
fig. 6 is a flowchart of a live broadcasting method according to a third embodiment of the present invention;
FIG. 7 is an exemplary diagram of a live broadcast;
FIG. 8 is an exemplary diagram of a screen switching operation;
fig. 9 is a schematic structural diagram of a live broadcast apparatus according to a fourth embodiment of the present invention;
fig. 10 is a schematic structural diagram of a mobile terminal according to a fourth embodiment of the present invention;
fig. 11 is a schematic structural diagram of a computer device according to a fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Embodiment One
Fig. 1 is a flowchart of a live broadcast method according to an embodiment of the present invention. This embodiment is applicable to the situation where two cameras are used to broadcast simultaneously and the live picture is dynamically adjusted. The method may be executed by a live broadcast apparatus, which may be implemented in software and/or hardware and configured in computer equipment, including mobile terminals such as mobile phones and tablet computers. The method specifically includes the following steps:
and S101, receiving live preview operation.
In a specific implementation, the operating system of the computer device may include Android (Android), iOS, Windows, and the like, and the live application may be installed in the operating system.
The live application may be an independent application, an application configured with a live component (e.g., SDK), or an application having a browsing component, where the application having the browsing component may include a browser, an application configured with WebView (e.g., a music application, an instant messaging tool, etc.), and the like, which is not limited in this embodiment.
The anchor user logs in to the live broadcast server with information such as an account and password in the live application. A user interface (UI) can then be displayed in the live application, on which a plurality of operation controls are shown; different controls can trigger different operations.
In this embodiment, the anchor user taps one of the controls (for example, by a touch click) to trigger a live preview operation; that is, before the formal live broadcast, the live effect is previewed locally on the computer device, which makes it convenient for the anchor user to adjust the first and second video signals, as well as beauty effects and other settings.
Of course, during the formal live broadcast, the live effect may also be previewed continuously on the computer device, and the first and second video signals, beauty effects, and so on may be adjusted during the broadcast, which is not limited in this embodiment.
S102, displaying, in response to the preview operation, a first video signal from the first camera and a second video signal from the second camera.
The computer device may be provided with at least two cameras, which may be used to capture video signals.
In a specific implementation, some cameras face the same direction as the display screen; such a camera is also called a front camera. It may be fixed in or above the display screen, embedded in the computer device and popped up automatically when called, or fixed in a sliding cover that the anchor user pushes out manually when the camera is to be used.
Other cameras face a different direction from the display screen, for example fixed on the back of the computer device; such a camera is also called a rear camera.
For convenience of description, in this embodiment, one of the cameras is referred to as a first camera, a video signal collected by the first camera is referred to as a first video signal, the other camera is referred to as a second camera, and a video signal collected by the second camera is referred to as a second video signal.
The first camera may be oriented in the same direction as the display screen (e.g., a front camera) or in a different direction from the display screen (e.g., a rear camera), and the second camera may be oriented in the same direction as the display screen (e.g., a front camera) or in a different direction from the display screen (e.g., a rear camera), which is not limited in this embodiment.
During the live broadcast, the anchor user can point one camera at himself or herself, which facilitates interaction with audience users, and point the other camera at the outside scenery, which makes it convenient for viewers to watch.
In response to the preview operation, the first video signal and the second video signal can be simultaneously displayed on the display screen according to a preset display mode.
The display modes may include a picture-in-picture mode (one video signal is displayed in full screen and the other is displayed in a small area) as shown in fig. 2A, a left-right split mode (one video signal is displayed on the left side and the other is displayed on the right side) as shown in fig. 2B, an up-down split mode (one video signal is displayed on the upper side and the other is displayed on the lower side) as shown in fig. 2C, and the like.
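The three display modes above boil down to choosing a pair of window rectangles. Below is a minimal sketch with hypothetical names (`Rect`, `layoutFor`, the inset size and margin) that are illustrative assumptions, not taken from the patent:

```java
// Hypothetical sketch: computing window rectangles for the three display
// modes (picture-in-picture, left-right split, top-bottom split).
public class DisplayModeLayout {
    enum Mode { PICTURE_IN_PICTURE, LEFT_RIGHT, TOP_BOTTOM }

    static final class Rect {
        final int x, y, w, h;
        Rect(int x, int y, int w, int h) { this.x = x; this.y = y; this.w = w; this.h = h; }
    }

    /** Returns {firstWindow, secondWindow} rectangles for a screen of the given size. */
    static Rect[] layoutFor(Mode mode, int screenW, int screenH) {
        switch (mode) {
            case LEFT_RIGHT:   // one signal on the left half, the other on the right
                return new Rect[] { new Rect(0, 0, screenW / 2, screenH),
                                    new Rect(screenW / 2, 0, screenW / 2, screenH) };
            case TOP_BOTTOM:   // one on top, the other below
                return new Rect[] { new Rect(0, 0, screenW, screenH / 2),
                                    new Rect(0, screenH / 2, screenW, screenH / 2) };
            default:           // picture-in-picture: full screen plus a small inset
                int insetW = screenW / 3, insetH = screenH / 3, margin = 16;
                return new Rect[] { new Rect(0, 0, screenW, screenH),
                                    new Rect(screenW - insetW - margin, margin, insetW, insetH) };
        }
    }
}
```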
In order to make those skilled in the art better understand the present invention, a picture-in-picture mode is described as an example of a display mode in the present specification.
In one embodiment, parameters such as the position, size, hierarchical relationship and the like of the first window and the second window can be set according to the display mode, and the first window and the second window are created according to the parameters, and both the first window and the second window can be used for playing the video signal.
In Android, the first window and the second window may be SurfaceView (view), in iOS, the first window and the second window may be TableView, and so on.
Taking SurfaceView as an example, SurfaceView supports the OpenGL ES (OpenGL for Embedded Systems, a subset of the OpenGL three-dimensional graphics interface) library and can realize 2D and 3D effects. When creating a SurfaceView, a SurfaceHolder.Callback interface is implemented, which can be used to monitor the state of the SurfaceView, such as its changes, creation, and destruction; operations such as initialization or cleanup can be performed in the corresponding callback methods.
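The lifecycle monitoring described above can be illustrated without the Android framework. The stand-in `SurfaceCallback` interface below merely mirrors the three methods of the real `SurfaceHolder.Callback`; the class and field names are hypothetical, and a real implementation would start and stop the camera feed here:

```java
// Plain-Java sketch of surface lifecycle monitoring. SurfaceCallback is an
// illustrative stand-in for Android's SurfaceHolder.Callback.
public class SurfaceLifecycleSketch {
    interface SurfaceCallback {
        void surfaceCreated();             // allocate the renderer, start the camera feed
        void surfaceChanged(int w, int h); // react to size or format changes
        void surfaceDestroyed();           // release the renderer and camera
    }

    static final class PreviewRenderer implements SurfaceCallback {
        boolean running = false;
        int width, height;
        public void surfaceCreated() { running = true; }            // initialization
        public void surfaceChanged(int w, int h) { width = w; height = h; }
        public void surfaceDestroyed() { running = false; }         // cleanup
    }
}
```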
In one aspect, a first window is displayed for displaying a first video signal originating from a first camera.
In another aspect, a second window is displayed for displaying a second video signal originating from a second camera.
In another embodiment, parameters such as the position, size, hierarchical relationship, and the like of the first region and the second region may be set in accordance with the display mode, and the first video signal may be rendered to the first region and the second video signal may be rendered to the second region in accordance with the parameters, thereby generating the target video signal.
In addition, a window, such as SurfaceView, TableView, etc., is generated, which can be used to play the video signal.
And displaying the window, and displaying a target video signal in the window, wherein a first area in the target video signal is used for displaying a first video signal derived from the first camera, and a second area in the target video signal is used for displaying a second video signal derived from the second camera.
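Rendering both signals into regions of one target video frame can be sketched with flat pixel buffers. All names here (`blit`, `composeTarget`) are illustrative assumptions; real frames would come from the two cameras rather than filled arrays:

```java
// Illustrative sketch of composing a target frame: the first signal fills the
// first region (here, the whole frame) and the second is drawn on top in its
// own region, as in the picture-in-picture display mode.
public class FrameCompositor {
    /** Copies src (srcW x srcH) into dst (dstW x dstH) at (x, y), clipping to dst bounds. */
    static void blit(int[] dst, int dstW, int dstH,
                     int[] src, int srcW, int srcH, int x, int y) {
        for (int r = 0; r < srcH; r++) {
            for (int c = 0; c < srcW; c++) {
                int dx = x + c, dy = y + r;
                if (dx >= 0 && dx < dstW && dy >= 0 && dy < dstH) {
                    dst[dy * dstW + dx] = src[r * srcW + c];
                }
            }
        }
    }

    /** Composites the first signal into the full frame and the second on top at (sx, sy). */
    static int[] composeTarget(int w, int h, int[] first, int fw, int fh,
                               int[] second, int sw, int sh, int sx, int sy) {
        int[] target = new int[w * h];
        blit(target, w, h, first, fw, fh, 0, 0);     // first region: full frame
        blit(target, w, h, second, sw, sh, sx, sy);  // second region: inset, drawn last
        return target;
    }
}
```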
Of course, the above manners of simultaneously displaying the first and second video signals are only examples; when implementing this embodiment, those skilled in the art may adopt other manners of displaying them simultaneously according to actual needs, which is not limited in this embodiment.
S103, receiving a selection operation.
S104, displaying, in response to the selection operation, the first video signal or the second video signal in a selected state.
In general, to prevent erroneous operations, the first and second video signals are in a non-selected state during display, and picture adjustment operations on them are invalid at this time.
During the live broadcast, there may be a need to adjust the picture; for example, the picture of the first or second video signal may be too small for its content to be seen clearly, or important content may fall in the region where the two signals overlap, preventing the audience from watching it. As shown in fig. 3A, a selection operation may then be triggered by a long press, a touch double-click, or the like, and the first or second video signal under the selection operation enters the selected state; at this time, picture adjustment operations on it are effective.
In a case where the first video signal is displayed in the first window and the second video signal is displayed in the second window, when it is determined that the first video signal is selected, it is determined that the first window is in a selected state, at which time, a picture adjustment operation for the first video signal may be applied to the first window, or, when it is determined that the second video signal is selected, it is determined that the second window is in a selected state, at which time, a picture adjustment operation for the second video signal may be applied to the second window.
In the case where the first video signal is synthesized in the first area of the target video signal and the second video signal is synthesized in the second area of the target video signal, it is determined that the first area is in a selected state when it is determined that the first video signal is selected, and at this time, the picture-adjusting operation for the first video signal may be applied to the first area, or it is determined that the second area is in a selected state when it is determined that the second video signal is selected, and at this time, the picture-adjusting operation for the second video signal may be applied to the second area.
When the first and second video signals overlap under the selection operation, the first video signal enters the selected state if it is displayed above the second video signal, and the second video signal enters the selected state if it is displayed above the first.
As shown in fig. 3B, for the first video signal or the second video signal in the selected state, identification information may be displayed to identify that the first video signal or the second video signal is in the selected state, for example, a frame is displayed around the first video signal or the second video signal, a cross pattern is displayed in the first video signal or the second video signal, and so on.
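The selection rule above (a tap selects whichever signal lies on top at the touch point) can be sketched as a z-ordered hit test. `Layer` and `select` are hypothetical names introduced for illustration:

```java
// Hedged sketch: selecting the topmost video signal under a tap. Each signal
// is modeled as a rectangle with a z value; larger z is drawn on top.
public class SelectionHitTest {
    static final class Layer {
        final String name;
        final int x, y, w, h;
        final int z;
        Layer(String name, int x, int y, int w, int h, int z) {
            this.name = name; this.x = x; this.y = y; this.w = w; this.h = h; this.z = z;
        }
        boolean contains(int px, int py) {
            return px >= x && px < x + w && py >= y && py < y + h;
        }
    }

    /** Returns the hit layer with the highest z, or null when the tap misses both. */
    static Layer select(Layer first, Layer second, int px, int py) {
        Layer hit = null;
        for (Layer l : new Layer[] { first, second }) {
            if (l.contains(px, py) && (hit == null || l.z > hit.z)) hit = l;
        }
        return hit;
    }
}
```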
S105, receiving a picture adjustment operation.
S106, displaying the adjustment of the first video signal or the second video signal in the selected state according to the picture adjustment operation.
After the first or second video signal is selected, a picture adjustment operation is triggered through gestures, controls, and the like; the selected video signal is adjusted correspondingly, and the adjustment is displayed on the display screen.
If the first video signal is displayed in the first window and the second video signal is displayed in the second window, at this time, the first window or the second window can be adjusted according to the picture adjusting operation so as to adjust the first video signal or the second video signal in the selected state.
If the first video signal is synthesized in the first area of the target video signal and the second video signal is synthesized in the second area of the target video signal, the first area or the second area can be adjusted according to the picture adjusting operation to adjust the first video signal or the second video signal in the selected state.
In order to make those skilled in the art better understand the present embodiment, the following uses gestures as a specific example to illustrate the adjustment of the first video signal or the second video signal in the present embodiment.
In this embodiment, the screen adjustment operation includes at least one of the following gestures:
1. drag gesture
The drag gesture may refer to sliding the target (the first or second video signal) while it is pressed.
If a drag gesture is detected in the touch screen, the first video signal or the second video signal in the selected state can be dragged according to the drag gesture.
For example, as shown in fig. 4A, suppose the video signal in the small area of the picture-in-picture mode is the first video signal, and the anchor user presses it with a finger and slides to the left. As shown in fig. 4B, the display screen shows the first video signal sliding to the left, and the sliding stops when the anchor user lifts the finger.
Further, the dragging direction of the drag gesture can be determined, and the gesture can be mapped to a dragging distance in that direction according to a certain functional relation; the first or second video signal in the selected state can then be dragged in the dragging direction by the dragging distance.
For the situation that the first video signal is displayed in the first window and the second video signal is displayed in the second window, the first window to which the first video signal in the selected state belongs can be dragged according to the dragging gesture to realize dragging of the first video signal, or the second window to which the second video signal in the selected state belongs can be dragged according to the dragging gesture to realize dragging of the second video signal.
For the situation that the first video signal is synthesized in the first area of the target video signal and the second video signal is synthesized in the second area of the target video signal, the first area corresponding to the first video signal in the selected state can be dragged according to the dragging gesture, the first video signal is drawn in the first area after dragging, and the dragging of the first video signal is realized, or the second area corresponding to the second video signal in the selected state can be dragged according to the dragging gesture, and the second video signal is drawn in the second area after dragging, and the dragging of the second video signal is realized.
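As one possible reading of the drag mapping, the sketch below uses the identity as the "certain functional relation" and, as an added assumption not stated in the patent, clamps the dragged window so it cannot leave the screen:

```java
// Minimal drag-mapping sketch: finger displacement (dx, dy) moves the
// selected window 1:1, clamped to the screen bounds.
public class DragMapper {
    /** Returns {newX, newY} after dragging a w x h window by (dx, dy) on a screenW x screenH screen. */
    static int[] drag(int x, int y, int w, int h, int dx, int dy, int screenW, int screenH) {
        int nx = Math.max(0, Math.min(screenW - w, x + dx));
        int ny = Math.max(0, Math.min(screenH - h, y + dy));
        return new int[] { nx, ny };
    }
}
```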
2. Zoom gesture
The zoom gesture may refer to pinching together (to zoom out) or spreading apart (to zoom in) a plurality of touch points with the target (the first or second video signal) as the base point.
If a zoom gesture is detected in the touch screen, the first video signal or the second video signal in the selected state may be zoomed according to the zoom gesture.
For example, as shown in fig. 4C, suppose the video signal in the small area of the picture-in-picture mode is the first video signal, and the anchor user presses it with two fingers and spreads them outward. As shown in fig. 4D, the display screen shows the first video signal being enlarged, and the enlargement stops when the anchor user lifts the fingers.
Further, the zoom gesture may be mapped to a scaling factor according to a certain functional relation; the first or second video signal in the selected state can then be scaled by that factor.
For the situation that the first video signal is displayed in the first window and the second video signal is displayed in the second window, the first window to which the first video signal in the selected state belongs may be zoomed according to the zoom gesture to achieve zooming of the first video signal, or the second window to which the second video signal in the selected state belongs may be zoomed according to the zoom gesture to achieve zooming of the second video signal.
For the case that the first video signal is synthesized in the first region of the target video signal and the second video signal is synthesized in the second region of the target video signal, the first region corresponding to the first video signal in the selected state may be zoomed according to the zoom gesture, the first video signal is drawn in the first region after the zoom, and the zoom of the first video signal is realized, or the second region corresponding to the second video signal in the selected state may be zoomed according to the zoom gesture, the second video signal is drawn in the second region after the zoom, and the zoom of the second video signal is realized.
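One common choice for the scaling "functional relation" is the ratio of the current finger spread to the initial spread, applied about the rectangle's center. The sketch below makes that assumption explicit; the method names are hypothetical:

```java
// Zoom-mapping sketch: scale = current finger spread / initial spread,
// applied to the selected rectangle about its own center.
public class ZoomMapper {
    /** Scale factor from the two touch points before (a) and after (b) the gesture. */
    static double pinchScale(double x1a, double y1a, double x2a, double y2a,
                             double x1b, double y1b, double x2b, double y2b) {
        double before = Math.hypot(x2a - x1a, y2a - y1a);
        double after  = Math.hypot(x2b - x1b, y2b - y1b);
        return after / before;
    }

    /** Scales rectangle {x, y, w, h} about its center; returns {x, y, w, h}. */
    static double[] scaleRect(double x, double y, double w, double h, double s) {
        double cx = x + w / 2, cy = y + h / 2;
        double nw = w * s, nh = h * s;
        return new double[] { cx - nw / 2, cy - nh / 2, nw, nh };
    }
}
```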
3. Rotational gestures
The rotation gesture may refer to rotating a plurality of touch points with the target (the first or second video signal) as the base point.
If a rotation gesture is detected in the touch screen, the first video signal or the second video signal in the selected state may be rotated according to the rotation gesture.
For example, as shown in fig. 4E, suppose the video signal in the small area of the picture-in-picture mode is the first video signal, and the anchor user presses it with two fingers and rotates them clockwise. As shown in fig. 4F, the display screen shows the first video signal rotating clockwise, and the rotation stops when the anchor user lifts the fingers.
Further, the rotation gesture may be mapped to a rotation angle according to a certain functional relation; the first or second video signal in the selected state can then be rotated by that angle.
For the situation that the first video signal is displayed in the first window and the second video signal is displayed in the second window, the first window to which the first video signal in the selected state belongs may be rotated according to the rotation gesture to realize the rotation of the first video signal, or the second window to which the second video signal in the selected state belongs may be rotated according to the rotation gesture to realize the rotation of the second video signal.
For the case where the first video signal is synthesized in the first area of the target video signal and the second video signal in the second area, the first area corresponding to the selected first video signal may be rotated according to the rotation gesture and the first video signal drawn in the rotated first area, realizing the rotation of the first video signal; or the second area corresponding to the selected second video signal may be rotated according to the rotation gesture and the second video signal drawn in the rotated second area, realizing the rotation of the second video signal.
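A typical way to map the rotation gesture to a rotation angle is the signed difference of the two-finger vector's atan2 angles before and after the gesture. The patent leaves its "functional relation" unspecified, so this is one plausible choice rather than the claimed method:

```java
// Rotation-mapping sketch: the rotation angle is the signed change in the
// direction of the vector from touch point 1 to touch point 2.
public class RotationMapper {
    /** Signed rotation in degrees between the vector (p1->p2) before and after the gesture. */
    static double rotationDegrees(double x1a, double y1a, double x2a, double y2a,
                                  double x1b, double y1b, double x2b, double y2b) {
        double before = Math.atan2(y2a - y1a, x2a - x1a);
        double after  = Math.atan2(y2b - y1b, x2b - x1b);
        double deg = Math.toDegrees(after - before);
        while (deg > 180) deg -= 360;   // normalize to (-180, 180]
        while (deg <= -180) deg += 360;
        return deg;
    }
}
```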
Of course, the above picture adjustment operations are only examples; when this embodiment is implemented, a person skilled in the art may set other picture adjustment operations according to actual needs, which is not limited in this embodiment.
In this embodiment, a live preview operation is received; in response to the preview operation, a first video signal from a first camera and a second video signal from a second camera are displayed; a selection operation is received; in response to the selection operation, the first video signal or the second video signal in a selected state is displayed; a picture adjustment operation is received; and an adjustment of the selected first video signal or second video signal is displayed according to the picture adjustment operation. On the one hand, live broadcasting with two cameras simultaneously allows video signals to be collected for two objects at once, which makes it convenient for the anchor user to adjust the video signals and the scene and avoids losing part of the live picture. On the other hand, the anchor user can dynamically adjust the first video signal and the second video signal; compared with fixed display modes such as picture-in-picture, top-bottom or left-right layouts, this autonomous adjustment takes varied forms and is highly flexible, so the live picture can better fit the live content and the live broadcast effect is guaranteed.
Example two
Fig. 5 is a flowchart of a live broadcast method according to a second embodiment of the present invention, where the present embodiment further adds a processing operation of starting a camera based on the foregoing embodiment, and the method specifically includes the following steps:
S501, receiving a live preview operation.
And S502, responding to the preview operation, and starting the first camera or the second camera.
The first camera is configured to acquire a first video signal conforming to a first video parameter (e.g., a first resolution, a first frame rate, a first focus parameter, etc.).
The second camera is used for acquiring a second video signal which accords with a second video parameter (such as a second resolution, a second frame rate, a second focusing parameter and the like).
S503, when the first camera or the second camera is started, starting the second camera or the first camera.
In this embodiment, when the anchor user triggers a live preview operation, the live broadcast application may call an Application Programming Interface (API) provided by the operating system to start the first camera with a preset first video parameter, and the first camera acquires a first video signal conforming to the first video parameter.
If the first camera is started first, then once the first camera has been started, the live broadcast application may call the API provided by the operating system to start the second camera with the preset second video parameter, and the second camera acquires a second video signal conforming to the second video parameter.
If the second camera is started first, then once the second camera has been started, the live broadcast application may call the API provided by the operating system to start the first camera with the preset first video parameter, and the first camera acquires a first video signal conforming to the first video parameter.
For example, on Android, the camera permission is added to the Android manifest file and two SurfaceViews are set up. The number of available cameras is queried by calling Camera.getNumberOfCameras(), the first video parameter of the first camera is initialized, the first camera's preview is started by calling startPreview(), and setDisplayOrientation() is called to control the display orientation of the first video signal so that it is displayed correctly; an error is reported if the display is wrong. Likewise, the second video parameter of the second camera is initialized, the second camera's preview is started by calling startPreview(), and setDisplayOrientation() is called to control the display orientation of the second video signal so that it is displayed correctly; an error is reported if the display is wrong.
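The setDisplayOrientation() step compensates for the mounting angle of the camera sensor. A minimal pure-Java sketch of the angle computation, adapted from the formula in the Android Camera documentation (running it outside Android, with plain parameters instead of Camera.CameraInfo, is an assumption for illustration):

```java
// Sketch of the rotation handed to setDisplayOrientation(). On a device the
// sensor orientation comes from Camera.CameraInfo and the display rotation
// from the window manager; here they are plain arguments.
public class DisplayOrientation {
    public static int compute(boolean frontFacing, int sensorOrientation, int displayRotation) {
        if (frontFacing) {
            int result = (sensorOrientation + displayRotation) % 360;
            return (360 - result) % 360;   // compensate for the front camera's mirroring
        }
        return (sensorOrientation - displayRotation + 360) % 360;
    }
}
```

For a typical rear sensor mounted at 90° on a portrait display (rotation 0), this yields 90, which is the correction that makes the preview upright.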
In a specific implementation, when two cameras are started simultaneously in the same application (the live broadcast application), the camera that is started first (i.e., the first camera or the second camera) preempts its permission and can therefore be started normally, while the camera started second (i.e., the second camera or the first camera) has not yet preempted its permission and may be occupied by another application, causing its start-up to fail. Therefore, it may be checked whether the camera started second is already occupied.
If it is occupied, the camera started second (i.e., the second camera or the first camera) is closed and released, and then opened again.
If it is not occupied, the camera started second (i.e., the second camera or the first camera) is opened directly.
In this embodiment, the second camera is started only after the first camera is confirmed to have started, and if the camera started second is found to be already open, it is closed and then reopened; these two measures together ensure that both cameras start normally.
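The occupied-check with close-then-reopen fallback can be modeled as follows; the in-memory set standing in for the operating system's camera service is a hypothetical simplification, not a real Android API:

```java
import java.util.HashSet;
import java.util.Set;

// Simplified model of S503's fallback: before starting the camera opened
// second, check whether it is already held; if so, close and release it
// first, then open it.
public class DualCameraStarter {
    static final Set<Integer> held = new HashSet<>(); // stand-in for the camera service

    static void open(int cameraId)  { held.add(cameraId); }
    static void close(int cameraId) { held.remove(cameraId); }

    /** Ensure camera `cameraId` ends up open, restarting it if already occupied. */
    public static boolean ensureStarted(int cameraId) {
        if (held.contains(cameraId)) {
            close(cameraId);   // occupied: close and release the stale handle
        }
        open(cameraId);        // open (or reopen) the camera
        return held.contains(cameraId);
    }
}
```

In a real application, open() and close() would wrap the platform's camera open/release calls and their failure cases.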
And S504, displaying the first video signal from the first camera and the second video signal from the second camera.
And S505, receiving a selection operation.
And S506, responding to the selection operation, and displaying the first video signal or the second video signal in the selected state.
And S507, receiving picture adjusting operation.
And S508, displaying the adjustment of the first video signal or the second video signal in the selected state according to the picture adjustment operation.
EXAMPLE III
Fig. 6 is a flowchart of a live broadcast method according to a third embodiment of the present invention, where the present embodiment further adds a processing operation of starting a camera based on the foregoing embodiment, and the method specifically includes the following steps:
S601, receiving a live preview operation.
And S602, responding to the preview operation, and displaying a first video signal from the first camera and a second video signal from the second camera.
When the first camera and the second camera are started, a first resolution of the first video signal collected by the first camera and a second resolution of the second video signal collected by the second camera can be set.
Since the hardware specifications of the first camera and the second camera differ, the first resolution and the second resolution may also differ. To keep the content and image quality of the first video signal and the second video signal consistent when they are switched, and to prevent missing content or quality differences during switching, the first resolution of the first video signal and the second resolution of the second video signal are set to the same aspect ratio, such as 4:3 or 16:9.
If the first resolution is equal to the second resolution, the first video signal and the second video signal can be directly displayed.
If the first resolution is not equal to the second resolution, the first video signal or the second video signal may be scaled to make the first resolution equal to the second resolution before displaying the first video signal and the second video signal.
In addition, when the first resolution is not equal to the second resolution but the two are at a close level (a person skilled in the art may define which levels count as close and record them as an association, e.g. 720P is close to 1080P, and 1080P is close to 2K), the first video signal and the second video signal may also be displayed directly, which is not limited in this embodiment.
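The aspect-ratio and scaling checks above can be sketched as follows; the helper names are illustrative assumptions, and cross-multiplication compares ratios without floating-point error:

```java
// Sketch of the resolution checks before display: equal aspect ratios allow
// a simple uniform scale of one stream onto the other's size.
public class ResolutionMatcher {
    /** True when two resolutions share an aspect ratio, e.g. 1280x720 and 1920x1080. */
    public static boolean sameAspect(int w1, int h1, int w2, int h2) {
        return (long) w1 * h2 == (long) w2 * h1;   // cross-multiply, no division
    }

    /** Uniform factor that scales the source width (and, by equal aspect, height). */
    public static double scaleFactor(int srcW, int dstW) {
        return (double) dstW / srcW;
    }
}
```

For example, scaling a 1280x720 signal to match 1920x1080 uses a factor of 1.5 on both axes.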
And S603, receiving the live broadcast starting operation.
S604, responding to the starting operation, and distributing the first video signal and the second video signal in a specified live broadcast room.
In this embodiment, as shown in fig. 7, the anchor user clicks one of the controls on the UI of the live broadcast application, for example by touch, to trigger the live broadcast start operation. The live broadcast then formally begins, and the first video signal and the second video signal are published in the live broadcast room simultaneously, so that audience users can watch both after entering the live broadcast room.
In a particular implementation, the computer device is equipped with a microphone (also known as a sound pickup), which is a transducer that converts sound into an electrical signal.
Further, the microphone may be a microphone built in the mobile terminal, or may also be an external microphone connected to the mobile terminal in a wired or wireless (e.g., bluetooth) manner, which is not limited in the embodiment of the present invention.
The anchor user may speak during the live broadcast, for example to explain operations in the virtual scene or to interact with the audience. In that case, besides capturing the first video signal and the second video signal, the live broadcast application may start the microphone to collect the sound made by the anchor user and form an audio signal.
The collected audio signal is mostly raw data such as PCM (Pulse Code Modulation); the live broadcast application may call an encoder to perform audio mixing on it and encode it into a specified audio format, such as AAC (Advanced Audio Coding).
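As a minimal sketch of the audio mixing step on raw PCM (an illustration only, assuming 16-bit samples), the following sums two sample buffers and clips to the legal range; handing the mixed buffer to an AAC encoder is omitted:

```java
// Mix two 16-bit PCM buffers sample-by-sample with clipping.
public class PcmMixer {
    public static short[] mix(short[] a, short[] b) {
        int n = Math.min(a.length, b.length);
        short[] out = new short[n];
        for (int i = 0; i < n; i++) {
            int s = a[i] + b[i];   // sum in int to avoid 16-bit overflow
            s = Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, s)); // clip
            out[i] = (short) s;
        }
        return out;
    }
}
```

Production mixers usually also apply gain before summing to reduce clipping; that refinement is left out of this sketch.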
The collected video signals (the first video signal and the second video signal) are mostly raw data such as YUV (where Y represents luminance (Luma) and U and V represent chrominance (Chroma)); the live broadcast application may call an encoder to encode them into a specified video format, such as H.264 (a digital video compression format).
Thereafter, the live broadcast application may call a packetizer to encapsulate the audio signal in the specified audio format and the video signal in the specified video format into a live data stream in a specified stream format, such as MP4 (Moving Picture Experts Group 4) or FLV (Flash Video, a streaming media format).
The live broadcast application may send the live data stream to the live broadcast platform (live broadcast server) via RTSP (Real Time Streaming Protocol), RTMP (Real Time Messaging Protocol), HLS (HTTP Live Streaming, a streaming media transfer protocol based on HTTP (Hypertext Transfer Protocol)), or the like. The live broadcast platform then publishes the live broadcast room, so that audience users can open the live broadcast room in a client, receive the live data stream, and play it.
For the situation that the first video signal is displayed in the first window and the second video signal is displayed in the second window, the first video signal and the second video signal can be synthesized into the target video signal according to the parameters of the position, the size, the hierarchical relationship and the like of the first window and the second window, so that the target video signal is distributed in the specified live broadcast room.
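For the windowed case, the synthesis by position, size and hierarchical relationship can be sketched with java.awt as a stand-in renderer; a mobile implementation would instead draw into a GPU surface, and the class name is illustrative:

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Sketch: synthesize two window contents into one target frame. Draw order
// encodes the hierarchical relationship (later draws are on higher layers).
public class FrameComposer {
    /** Draws `small` over `big` at (x, y); `small` occupies the higher layer. */
    public static BufferedImage compose(BufferedImage big, BufferedImage small, int x, int y) {
        BufferedImage target = new BufferedImage(
                big.getWidth(), big.getHeight(), BufferedImage.TYPE_INT_RGB);
        Graphics2D g = target.createGraphics();
        g.drawImage(big, 0, 0, null);     // lower layer: full-frame window
        g.drawImage(small, x, y, null);   // upper layer: picture-in-picture window
        g.dispose();
        return target;
    }
}
```

Composing per frame like this yields the single target video signal that is then distributed in the live broadcast room.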
In the case where the first video signal is synthesized in the first area of the target video signal and the second video signal is synthesized in the second area of the target video signal, the target video signal can be directly distributed in the specified live broadcast room.
In addition, if complete image processing is performed for both the first camera and the second camera, the resource consumption will be about twice that of the current single-camera case, which may increase the stutter rate of the live picture. Therefore, the image processing may be split into a first image processing and a second image processing.
Wherein the complexity of the first image processing is higher than the complexity of the second image processing.
For example, the first image processing is used for processing a human face, and may include operations of human face detection, human face deformation (including facial deformation), peeling and the like.
For another example, the second image processing is used for processing the color blocks, and may include operations of adjusting brightness, adjusting chromaticity, and the like.
For the situation that the first video signal is displayed in the first window and the second video signal is displayed in the second window, if the first camera (such as a front camera) and the display screen are in the same orientation and the second camera (such as a rear camera) and the display screen are in different orientations, the first image processing and the second image processing can be performed on the first video signal, and the second image processing can be performed on the second video signal.
If the orientation of the second camera (e.g., front camera) is the same as that of the display screen and the orientation of the first camera (e.g., rear camera) is different from that of the display screen, the first image processing and the second image processing may be performed on the second video signal, and the second image processing may be performed on the first video signal.
In the case where the first video signal is combined in a first area of the target video signal and the second video signal is combined in a second area of the target video signal, if the first camera (e.g., a front camera) and the display screen are oriented in the same direction and the second camera (e.g., a rear camera) and the display screen are oriented in different directions, the first image processing is performed on the first video signal and the second image processing is performed on the target video signal.
If the orientation of the second camera (such as a front camera) is the same as that of the display screen and the orientation of the first camera (such as a rear camera) is different from that of the display screen, the first image processing is performed on the second video signal, and the second image processing is performed on the target video signal.
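For the windowed case, the routing rule above reduces to a simple selection on camera orientation (in the composed case, the second image processing instead applies to the whole target video signal). The enum and method names below are illustrative assumptions:

```java
// Sketch: the camera facing the same way as the display screen (typically
// the one showing the host's face) gets both the heavier first image
// processing and the second image processing; the other signal gets only
// the cheaper second image processing.
public class ProcessingRouter {
    public enum Passes { FIRST_AND_SECOND, SECOND_ONLY }

    public static Passes passesFor(boolean sameOrientationAsDisplay) {
        return sameOrientationAsDisplay ? Passes.FIRST_AND_SECOND : Passes.SECOND_ONLY;
    }
}
```

Splitting the passes this way keeps the expensive face-oriented processing on one stream only, which is the resource saving motivated above.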
S605, receiving the screen switching operation.
And S606, responding to the picture switching operation, and switching the first video signal and the second video signal.
During live broadcasting, as shown in fig. 8, according to the live broadcast requirements (for example, the audience users want to see the scenery clearly), the anchor user clicks one of the controls on the UI of the live broadcast application, for example by touch, to trigger a live picture switching operation and exchange the first video signal and the second video signal.
In the case where the first video signal is displayed in the first window and the second video signal is displayed in the second window, the first window and the second window may be updated, with the updated first window displaying the second video signal and the updated second window displaying the first video signal.
In the case where the first video signal is synthesized in a first area of the target video signal and the second video signal is synthesized in a second area, the target video signal may be updated, with the updated first area displaying the second video signal and the updated second area displaying the first video signal.
Normally, switching cameras causes both cameras to close and then reopen, so the video signals from the cameras are interrupted and the picture goes black before normal display resumes, which harms the live broadcast effect.
In this embodiment, an interaction mode of exchanging the two pictures is added: the first video signal and the second video signal are switched directly without switching the cameras themselves, so that both signals are generated continuously, the black-screen phenomenon is avoided, and the live broadcast effect is guaranteed.
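A minimal model of this black-screen-free switch: only the camera-to-window mapping is swapped, and no camera is closed or reopened. The field names are illustrative:

```java
// Sketch: swap which camera feeds which window, leaving both cameras open
// so neither video signal is interrupted.
public class PictureSwitcher {
    int firstWindowSource = 1;   // camera id shown in the first (e.g. small) window
    int secondWindowSource = 2;  // camera id shown in the second (e.g. full-screen) window

    public void swap() {
        int tmp = firstWindowSource;
        firstWindowSource = secondWindowSource;
        secondWindowSource = tmp;
    }
}
```

Because the capture pipeline never stops, the preview simply re-routes on the next frame instead of showing a black screen.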
Example four
Fig. 9 is a schematic structural diagram of a live broadcast apparatus according to a fourth embodiment of the present invention, where the apparatus may specifically include the following modules:
a preview operation receiving module 901, configured to receive a live preview operation;
a video signal display module 902, configured to display, in response to the preview operation, a first video signal originating from a first camera and a second video signal originating from a second camera;
a selected operation receiving module 903, configured to receive a selected operation;
a video selection display module 904 for displaying the first video signal or the second video signal in a selected state in response to the selection operation;
a picture adjustment operation receiving module 905, configured to receive a picture adjustment operation;
a video adjusting and displaying module 906, configured to display an adjustment to the first video signal or the second video signal in the selected state according to the picture adjusting operation.
In one embodiment of the present invention, further comprising:
the device comprises a first starting module, a second starting module and a control module, wherein the first starting module is used for starting a first camera or a second camera, the first camera is used for collecting a first video signal, and the second camera is used for collecting a second video signal;
and the second starting module is used for starting the second camera or the first camera when the first camera or the second camera is started.
In one embodiment of the present invention, the second starting module comprises:
the occupation checking submodule is used for checking whether the second camera or the first camera is started or not; if yes, calling a restart submodule, and if not, calling an opening submodule;
the restarting submodule is used for closing the second camera or the first camera and opening the second camera or the first camera;
and the opening submodule is used for opening the second camera or the first camera.
In one embodiment of the present invention, the first resolution of the first video signal is proportional to the second resolution of the second video signal;
the device further comprises:
a video scaling module, configured to scale the first video signal or the second video signal if the first resolution is not equal to the second resolution, so that the first resolution is equal to the second resolution.
In one embodiment of the present invention, the video signal display module 902 comprises:
the first window display submodule is used for displaying a first window, and the first window is used for displaying a first video signal from a first camera;
and the second window display submodule is used for displaying a second window, and the second window is used for displaying a second video signal from the second camera.
In one embodiment of the present invention, the video signal display module 902 comprises:
and the target video signal display submodule is used for displaying a target video signal, a first area in the target video signal is used for displaying a first video signal from the first camera, and a second area in the target video signal is used for displaying a second video signal from the second camera.
In one embodiment of the invention, the first video signal is displayed in a first window and the second video signal is displayed in a second window;
the video adjustment display module 906 includes:
and the window adjusting submodule is used for adjusting the first window or the second window according to the picture adjusting operation so as to adjust the first video signal or the second video signal in a selected state.
In one embodiment of the present invention, the first video signal is synthesized in a first area in the target video signal, and the second video signal is synthesized in a second area in the target video signal;
the video adjustment display module 906 includes:
and the area adjusting submodule is used for adjusting the first area or the second area according to the picture adjusting operation so as to adjust the first video signal or the second video signal in a selected state.
In one embodiment of the present invention, the screen adjustment operation includes a drag gesture, a zoom gesture, and a rotation gesture;
the video adjustment display module 906 includes:
the video dragging submodule is used for dragging the first video signal or the second video signal in the selected state according to the dragging gesture;
alternatively,
the video zooming submodule is used for zooming the first video signal or the second video signal in the selected state according to the zooming gesture;
alternatively,
and the video rotation submodule is used for rotating the first video signal or the second video signal in the selected state according to the rotation gesture.
In one embodiment of the present invention, the first video signal is displayed in a first window, the second video signal is displayed in a second window, the apparatus further comprising:
the first processing module is used for performing first image processing and second image processing on the first video signal and performing second image processing on the second video signal if the first camera and the display screen have the same orientation and the second camera and the display screen have different orientations;
and the second processing module is used for performing first image processing and second image processing on the second video signal and performing second image processing on the first video signal if the orientation of the second camera is the same as that of the display screen and the orientation of the first camera is different from that of the display screen.
In one embodiment of the present invention, the first video signal is synthesized in a first area in the target video signal, and the second video signal is synthesized in a second area in the target video signal; the device further comprises:
the third processing module is used for performing first image processing on the first video signal and performing second image processing on the target video signal if the orientations of the first camera and the display screen are the same and the orientations of the second camera and the display screen are different;
and the fourth processing module is used for performing first image processing on the second video signal and performing second image processing on the target video signal if the orientations of the second camera and the display screen are the same and the orientations of the first camera and the display screen are different.
In one embodiment of the present invention, further comprising:
the starting operation receiving module is used for receiving the starting operation of live broadcasting;
and the video release module is used for responding to the starting operation and releasing the first video signal and the second video signal in a specified live broadcast room.
In one embodiment of the invention, the first video signal is displayed in a first window and the second video signal is displayed in a second window;
the video distribution module comprises:
a video synthesis sub-module for synthesizing the first video signal and the second video signal into a target video signal;
and the first synthesis and distribution submodule is used for distributing the target video signal in a specified live broadcast room.
In one embodiment of the present invention, the first video signal is synthesized in a first area in the target video signal, and the second video signal is synthesized in a second area in the target video signal;
the video distribution module comprises:
and the second synthesis and distribution submodule is used for distributing the target video signal in a specified live broadcast room.
In one embodiment of the present invention, further comprising:
the picture switching operation receiving module is used for receiving picture switching operation;
and the video switching module is used for responding to the picture switching operation and switching the first video signal and the second video signal.
In one embodiment of the invention, the first video signal is displayed in a first window and the second video signal is displayed in a second window;
the video switching module includes:
a first window updating submodule for updating the first window, the first window being used for displaying the second video signal;
and the second window updating submodule is used for updating the second window, and the second window is used for displaying the first video signal.
In one embodiment of the present invention, the first video signal is displayed in a first region in the target video signal, and the second video signal is displayed in a second region in the target video signal;
the video switching module includes:
and the target video signal updating submodule is used for updating the target video signal, the first area is used for displaying the second video signal, and the second area is used for displaying the first video signal.
The live broadcasting device provided by the embodiment of the invention can execute the live broadcasting method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
EXAMPLE five
Fig. 10 is a schematic structural diagram of a mobile terminal according to a fifth embodiment of the present invention, where the mobile terminal specifically includes:
a touch screen 1001 for receiving a preview operation of live broadcasting;
a display screen 1002 for displaying a first video signal derived from a first camera and a second video signal derived from a second camera in response to the preview operation;
the touch screen 1001 is further configured to receive a selection operation;
the display screen 1002 is further configured to display the first video signal or the second video signal in a selected state in response to the selection operation;
the touch screen 1001 is further configured to receive a screen adjustment operation;
the display screen 1002 is further configured to display an adjustment of the first video signal or the second video signal in the selected state according to the picture adjustment operation.
In one embodiment of the present invention, further comprising:
the camera comprises a first camera and a second camera, wherein the first camera is used for collecting a first video signal, and the second camera is used for collecting a second video signal;
the first camera is started when the second camera is started;
alternatively,
and the second camera is started when the first camera has been started.
In one embodiment of the present invention, the first resolution of the first video signal is proportional to the second resolution of the second video signal, or the first resolution is equal to the second resolution.
In one embodiment of the present invention, the display screen 1002 is further configured to:
displaying a first window for displaying a first video signal originating from a first camera;
displaying a second window for displaying a second video signal originating from a second camera.
In one embodiment of the present invention, the display screen 1002 is further configured to:
and displaying a target video signal, wherein a first area in the target video signal is used for displaying a first video signal from a first camera, and a second area in the target video signal is used for displaying a second video signal from a second camera.
In one embodiment of the invention, the first video signal is displayed in a first window and the second video signal is displayed in a second window; the display screen 1002 is further configured to:
and adjusting the first window or the second window according to the picture adjusting operation so as to adjust the first video signal or the second video signal in the selected state.
In one embodiment of the present invention, the first video signal is synthesized in a first area in the target video signal, and the second video signal is synthesized in a second area in the target video signal; the display screen 1002 is further configured to:
and adjusting the first area or the second area according to the picture adjusting operation so as to adjust the first video signal or the second video signal in the selected state.
In one embodiment of the present invention, the screen adjustment operation includes a drag gesture, a zoom gesture, and a rotation gesture; the display screen 1002 is further configured to:
dragging the first video signal or the second video signal in the selected state according to the dragging gesture;
alternatively,
zooming the first video signal or the second video signal in a selected state according to the zooming gesture;
alternatively,
and rotating the first video signal or the second video signal in the selected state according to the rotation gesture.
In one embodiment of the invention, the system further comprises a processor;
the touch screen 1001 is further configured to receive a live broadcast start operation;
the processor is configured to distribute the first video signal and the second video signal in a designated live broadcast room in response to the start operation.
In one embodiment of the invention, the first video signal is displayed in a first window and the second video signal is displayed in a second window; the processor is further configured to:
synthesizing the first video signal and the second video signal into a target video signal;
and distributing the target video signal in a specified live broadcast room.
In one embodiment of the present invention, the first video signal is synthesized in a first area in the target video signal, and the second video signal is synthesized in a second area in the target video signal; the processor is further configured to:
and distributing the target video signal in a specified live broadcast room.
In one embodiment of the present invention,
the touch screen 1001 is further configured to receive a screen switching operation;
the display screen 1002 is further configured to switch the first video signal and the second video signal in response to the picture switching operation.
In one embodiment of the invention, the first video signal is displayed in a first window and the second video signal is displayed in a second window; the display screen 1002 is further configured to:
updating the first window, wherein the first window is used for displaying the second video signal;
updating the second window, the second window being used for displaying the first video signal.
In one embodiment of the present invention, the first video signal is displayed in a first region in the target video signal, and the second video signal is displayed in a second region in the target video signal;
the display screen 1002 is further configured to:
and updating the target video signal, wherein the first area is used for displaying the second video signal, and the second area is used for displaying the first video signal.
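The picture-switching update described above amounts to the two signals trading regions inside the target layout. The following sketch, with invented region names, shows one way to model that swap:

```python
# Hypothetical sketch of the picture-switching step: the two video signals
# simply trade regions inside the target layout. Names are illustrative.

def switch_pictures(layout):
    """Swap the signals bound to the first and second regions."""
    layout = dict(layout)  # leave the caller's layout untouched
    layout["first_region"], layout["second_region"] = (
        layout["second_region"],
        layout["first_region"],
    )
    return layout

before = {"first_region": "first_video_signal",
          "second_region": "second_video_signal"}
after = switch_pictures(before)
```

The same swap applies whether the regions are two windows or two areas of a single composited target video signal.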
The mobile terminal provided by the embodiment of the present invention can execute the live broadcast method provided by any embodiment of the present invention, and has functional modules and beneficial effects corresponding to the executed method.
EXAMPLE six
Fig. 11 is a schematic structural diagram of a computer device according to a sixth embodiment of the present invention. As shown in fig. 11, the computer apparatus includes a processor 1100, a memory 1101, a communication module 1102, an input device 1103, and an output device 1104; the number of the processors 1100 in the computer device may be one or more, and one processor 1100 is taken as an example in fig. 11; the processor 1100, the memory 1101, the communication module 1102, the input device 1103 and the output device 1104 in the computer apparatus may be connected by a bus or other means, and fig. 11 illustrates an example of connection by a bus.
The memory 1101 is a computer-readable storage medium, and can be used for storing software programs, computer-executable programs, and modules, such as modules corresponding to the live broadcast method in the present embodiment (for example, a preview operation receiving module 901, a video signal display module 902, a selected operation receiving module 903, a video selected display module 904, a screen adjustment operation receiving module 905, and a video adjustment display module 906 in the live broadcast apparatus shown in fig. 9). The processor 1100 executes various functional applications of the computer device and data processing by executing software programs, instructions, and modules stored in the memory 1101, that is, implements the live broadcast method described above.
The memory 1101 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function; the storage data area may store data created according to the use of the computer device, and the like. Further, the memory 1101 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 1101 may further include memory located remotely from the processor 1100, which may be connected to the computer device through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
And the communication module 1102 is configured to establish connection with the display screen and implement data interaction with the display screen.
The input device 1103 may include a first camera 11031, a second camera 11032, and a touch screen 11033. The touch screen may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the computer apparatus; the cameras acquire images, and a sound pickup apparatus may also be included to acquire audio data.
The output device 1104 may include a display 11041, audio equipment such as speakers, etc.
The specific composition of the input device 1103 and the output device 1104 can be set according to actual conditions.
The processor 1100 runs the software programs, instructions and modules stored in the memory 1101 to perform the functional applications and data processing of the device, that is, to implement the live broadcast method described above.
The computer device provided by the embodiment of the present invention can execute the live broadcasting method provided by any embodiment of the present invention, and has corresponding functions and advantages.
EXAMPLE seven
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a live broadcast method, and the method includes:
receiving live preview operation;
displaying a first video signal derived from a first camera and a second video signal derived from a second camera in response to the preview operation;
receiving a selected operation;
displaying the first video signal or the second video signal in a selected state in response to the selection operation;
receiving picture adjusting operation;
and displaying the adjustment of the first video signal or the second video signal in the selected state according to the picture adjustment operation.
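The six steps above form a simple event-driven flow. A minimal, assumption-laden sketch models it as a tiny state machine; the operation names and the string stand-ins for video signals are invented for illustration only.

```python
# Hypothetical sketch of the preview -> select -> adjust flow; not the
# patent's implementation. Signals are represented by plain strings.

class LivePreview:
    def __init__(self):
        self.displayed = []      # signals currently shown on screen
        self.selected = None     # signal in the selected state, if any

    def handle(self, operation, payload=None):
        if operation == "preview":
            # Displaying both signals in response to the preview operation.
            self.displayed = ["first_video_signal", "second_video_signal"]
        elif operation == "select":
            if payload not in self.displayed:
                raise ValueError("can only select a displayed signal")
            self.selected = payload
        elif operation == "adjust":
            if self.selected is None:
                raise ValueError("adjustment requires a selected signal")
            # Return which signal was adjusted and how, for display.
            return (self.selected, payload)

ui = LivePreview()
ui.handle("preview")
ui.handle("select", "second_video_signal")
result = ui.handle("adjust", "zoom")
```

The point of the sketch is the ordering constraint the method implies: a signal must be displayed before it can be selected, and selected before it can be adjusted.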
Of course, the computer program of the computer-readable storage medium provided in the embodiments of the present invention is not limited to the method operations described above, and may also perform related operations in a live broadcast method provided in any embodiment of the present invention.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It should be noted that, in the embodiments of the live broadcast apparatus and the mobile terminal, each unit and each module included in the embodiments are only divided according to functional logic, but are not limited to the above division, as long as the corresponding function can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (21)

1. A live broadcast method, comprising:
receiving live preview operation;
displaying a first video signal derived from a first camera and a second video signal derived from a second camera in response to the preview operation;
receiving a selected operation;
displaying the first video signal or the second video signal in a selected state in response to the selection operation;
receiving picture adjusting operation;
and displaying the adjustment of the first video signal or the second video signal in the selected state according to the picture adjustment operation.
2. The method of claim 1, further comprising:
starting a first camera or a second camera, wherein the first camera is used for collecting a first video signal, and the second camera is used for collecting a second video signal;
and when the first camera or the second camera is started, starting the second camera or the first camera.
3. The method of claim 2, wherein the activating the second camera or the first camera comprises:
checking whether the second camera or the first camera is started;
if so, closing the second camera or the first camera, and opening the second camera or the first camera;
if not, the second camera or the first camera is started.
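The check-and-restart branches of claim 3 can be sketched as follows; the `Camera` stub is invented for illustration, and real camera APIs (for example Android's Camera2) differ substantially.

```python
# Sketch of claim 3's logic: if the camera is already started, close and
# reopen it; otherwise simply open it. The Camera class is a hypothetical stub.

class Camera:
    def __init__(self, name):
        self.name = name
        self.started = False
        self.restarts = 0

    def open(self):
        self.started = True

    def close(self):
        self.started = False

def ensure_started(camera):
    """Mirror the if/else branches of claim 3."""
    if camera.started:          # "checking whether the camera is started"
        camera.close()          # "if so, closing ... and opening"
        camera.restarts += 1
    camera.open()               # "if not, the camera is started"

cam = Camera("second_camera")
ensure_started(cam)   # was closed: simply opened
ensure_started(cam)   # was open: closed, then reopened
```

Restarting an already-open camera in this way guarantees a clean capture session regardless of its prior state.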
4. The method of claim 1, wherein the first resolution of the first video signal is proportional to the second resolution of the second video signal;
the method further comprises the following steps:
if the first resolution is not equal to the second resolution, scaling the first video signal or the second video signal to make the first resolution equal to the second resolution.
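Claim 4's scaling step can be made concrete with a small sketch, assuming uniform scaling and (as the claim requires) proportional resolutions; the example resolutions are illustrative only.

```python
# Hedged sketch of claim 4: when the two proportional resolutions differ,
# compute the uniform scale factor that makes them equal.
from fractions import Fraction

def scale_to_match(first_res, second_res):
    """Return the factor to apply to the first signal so its resolution
    equals the second's; resolutions are (width, height) tuples."""
    fw, fh = first_res
    sw, sh = second_res
    if Fraction(fw, fh) != Fraction(sw, sh):
        raise ValueError("claim 4 assumes the resolutions are proportional")
    return Fraction(sw, fw)   # same factor applies to width and height

factor = scale_to_match((1280, 720), (1920, 1080))   # both 16:9
scaled = (1280 * factor, 720 * factor)               # now equals 1920x1080
```

Using exact fractions avoids the rounding drift a floating-point factor could introduce when the scaled dimensions must match exactly.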
5. The method of claim 1, wherein displaying the first video signal from the first camera and the second video signal from the second camera comprises:
displaying a first window for displaying a first video signal originating from a first camera;
displaying a second window for displaying a second video signal originating from a second camera.
6. The method of claim 1, wherein displaying the first video signal from the first camera and the second video signal from the second camera comprises:
and displaying a target video signal, wherein a first area in the target video signal is used for displaying a first video signal from a first camera, and a second area in the target video signal is used for displaying a second video signal from a second camera.
7. The method of claim 1, wherein the first video signal is displayed in a first window and the second video signal is displayed in a second window;
the displaying the adjustment of the first video signal or the second video signal in the selected state according to the picture adjustment operation includes:
and adjusting the first window or the second window according to the picture adjusting operation so as to adjust the first video signal or the second video signal in the selected state.
8. The method of claim 1, wherein the first video signal is composited in a first region of a target video signal, and the second video signal is composited in a second region of the target video signal;
the displaying the adjustment of the first video signal or the second video signal in the selected state according to the picture adjustment operation includes:
and adjusting the first area or the second area according to the picture adjusting operation so as to adjust the first video signal or the second video signal in the selected state.
9. The method according to any one of claims 1-8, wherein the screen adjustment operation comprises a drag gesture, a zoom gesture, a rotate gesture;
the displaying the adjustment of the first video signal or the second video signal in the selected state according to the picture adjustment operation includes:
dragging the first video signal or the second video signal in the selected state according to the dragging gesture;
alternatively,
zooming the first video signal or the second video signal in a selected state according to the zooming gesture;
alternatively,
and rotating the first video signal or the second video signal in the selected state according to the rotation gesture.
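The three gesture branches of claim 9 map naturally onto transforms of the selected window. The sketch below assumes a simple `(x, y, w, h, angle)` window model, which is an illustration rather than anything the claim specifies.

```python
# Hypothetical mapping from the claim's three gestures to window adjustments.
# The window tuple (x, y, width, height, angle_degrees) is an assumption.

def adjust_window(window, gesture):
    x, y, w, h, angle = window
    kind = gesture["type"]
    if kind == "drag":      # translate the window by the gesture delta
        return (x + gesture["dx"], y + gesture["dy"], w, h, angle)
    if kind == "zoom":      # scale width and height uniformly
        s = gesture["scale"]
        return (x, y, w * s, h * s, angle)
    if kind == "rotate":    # accumulate rotation, wrapped to [0, 360)
        return (x, y, w, h, (angle + gesture["degrees"]) % 360)
    raise ValueError(f"unknown gesture: {kind}")

win = (10, 10, 100, 60, 0)
win = adjust_window(win, {"type": "drag", "dx": 5, "dy": -5})
win = adjust_window(win, {"type": "zoom", "scale": 2})
win = adjust_window(win, {"type": "rotate", "degrees": 90})
```

Because each branch returns a new tuple, successive gestures compose in the order the user performs them.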
10. The method of claim 1, wherein the first video signal is displayed in a first window and the second video signal is displayed in a second window, the method further comprising:
if the orientations of the first camera and the display screen are the same and the orientations of the second camera and the display screen are different, performing first image processing and second image processing on the first video signal, and performing second image processing on the second video signal;
and if the orientations of the second camera and the display screen are the same and the orientations of the first camera and the display screen are different, performing first image processing and second image processing on the second video signal, and performing second image processing on the first video signal.
11. The method of claim 1, wherein the first video signal is composited in a first region of a target video signal and the second video signal is composited in a second region of the target video signal, the method further comprising:
if the orientations of the first camera and the display screen are the same and the orientations of the second camera and the display screen are different, performing first image processing on the first video signal and performing second image processing on the target video signal;
and if the orientations of the second camera and the display screen are the same and the orientations of the first camera and the display screen are different, performing first image processing on the second video signal and performing second image processing on the target video signal.
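Claims 10 and 11 leave "first image processing" and "second image processing" abstract. The sketch below assumes, purely for illustration, that the first is a horizontal mirror (typical for a camera facing the same way as the display, i.e. a front camera) and the second a vertical flip; frames are plain 2D lists.

```python
# The processings below are assumptions, not the patent's definitions.

def mirror_horizontal(frame):          # assumed "first image processing"
    return [list(reversed(row)) for row in frame]

def flip_vertical(frame):              # assumed "second image processing"
    return list(reversed(frame))

def process_for_display(frame, faces_display):
    """Apply both processings when the source camera faces the same way as
    the display, and only the second otherwise, mirroring the branch
    structure of claim 10."""
    if faces_display:
        frame = mirror_horizontal(frame)
    return flip_vertical(frame)

front_frame = [[1, 2],
               [3, 4]]
shown = process_for_display(front_frame, faces_display=True)
```

Whatever the concrete processings are, the structural point of the claims is the same: the camera whose orientation matches the display receives an extra processing step before display.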
12. The method of any of claims 1-11, further comprising:
receiving a live broadcast starting operation;
and responding to the starting operation, and distributing the first video signal and the second video signal in a specified live broadcast room.
13. The method of claim 12, wherein the first video signal is displayed in a first window and the second video signal is displayed in a second window;
the distributing the first video signal and the second video signal in a specified live broadcast room comprises:
synthesizing the first video signal and the second video signal into a target video signal;
and distributing the target video signal in a specified live broadcast room.
14. The method of claim 12, wherein the first video signal is composited in a first region of a target video signal, and the second video signal is composited in a second region of the target video signal;
the distributing the first video signal and the second video signal in a specified live broadcast room comprises:
and distributing the target video signal in a specified live broadcast room.
15. The method of any one of claims 1-11, further comprising:
receiving a picture switching operation;
switching the first video signal and the second video signal in response to the picture switching operation.
16. The method of claim 15, wherein the first video signal is displayed in a first window and the second video signal is displayed in a second window;
the switching the first video signal and the second video signal includes:
updating the first window, wherein the first window is used for displaying the second video signal;
updating the second window, the second window being used for displaying the first video signal.
17. The method of claim 15, wherein the first video signal is displayed in a first region of the target video signal and the second video signal is displayed in a second region of the target video signal;
the switching the first video signal and the second video signal includes:
and updating the target video signal, wherein the first area is used for displaying the second video signal, and the second area is used for displaying the first video signal.
18. A live broadcast apparatus, comprising:
the preview operation receiving module is used for receiving the preview operation of live broadcast;
the video signal display module is used for responding to the preview operation and displaying a first video signal from the first camera and a second video signal from the second camera;
a selected operation receiving module for receiving selected operation;
the video selection display module is used for responding to the selection operation and displaying the first video signal or the second video signal in a selected state;
the picture adjusting operation receiving module is used for receiving picture adjusting operation;
and the video adjusting and displaying module is used for displaying the adjustment of the first video signal or the second video signal in the selected state according to the picture adjusting operation.
19. A mobile terminal, comprising:
the touch screen is used for receiving live preview operation;
the display screen is used for responding to the preview operation and displaying a first video signal from the first camera and a second video signal from the second camera;
the touch screen is also used for receiving a selected operation;
the display screen is also used for responding to the selection operation and displaying the first video signal or the second video signal in a selected state;
the touch screen is also used for receiving picture adjustment operation;
and the display screen is also used for displaying the adjustment of the first video signal or the second video signal in the selected state according to the picture adjustment operation.
20. A computer device, characterized in that the computer device comprises:
one or more processors;
a memory for storing one or more programs;
the first camera is used for acquiring a first video signal;
the second camera is used for acquiring a second video signal;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the live broadcast method as recited in any one of claims 1-17.
21. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a live method according to any one of claims 1-17.
CN201911100568.XA 2019-11-12 2019-11-12 Live broadcast method and device, mobile terminal, computer equipment and storage medium Pending CN110784735A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911100568.XA CN110784735A (en) 2019-11-12 2019-11-12 Live broadcast method and device, mobile terminal, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110784735A true CN110784735A (en) 2020-02-11

Family

ID=69391351

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911100568.XA Pending CN110784735A (en) 2019-11-12 2019-11-12 Live broadcast method and device, mobile terminal, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110784735A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101365117A (en) * 2008-09-18 2009-02-11 中兴通讯股份有限公司 Method for customized screen splitting mode
WO2009030133A1 (en) * 2007-08-29 2009-03-12 Huawei Technologies Co., Ltd. A method, system and entity for realizing picture-in-picture video
CN106165430A (en) * 2016-06-29 2016-11-23 北京小米移动软件有限公司 Net cast method and device
CN107018334A (en) * 2017-03-31 2017-08-04 努比亚技术有限公司 A kind of applied program processing method and device based on dual camera
WO2018042175A1 (en) * 2016-09-02 2018-03-08 Russell Holmes Systems and methods for providing real-time composite video from multiple source devices
CN108449640A (en) * 2018-03-26 2018-08-24 广州虎牙信息科技有限公司 Live video output control method, device and storage medium, terminal
CN109525880A (en) * 2018-11-08 2019-03-26 北京微播视界科技有限公司 Synthetic method, device, equipment and the storage medium of video data
US20190306561A1 (en) * 2016-12-21 2019-10-03 Huawei Technologies Co., Ltd. Video Playing Method and Terminal Device

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113727164A (en) * 2020-05-26 2021-11-30 百度在线网络技术(北京)有限公司 Live broadcast room entrance display method and device, electronic equipment and storage medium
CN113727164B (en) * 2020-05-26 2024-04-26 百度在线网络技术(北京)有限公司 Live broadcasting room entrance display method and device, electronic equipment and storage medium
CN111654715A (en) * 2020-06-08 2020-09-11 腾讯科技(深圳)有限公司 Live video processing method and device, electronic equipment and storage medium
CN111654715B (en) * 2020-06-08 2024-01-09 腾讯科技(深圳)有限公司 Live video processing method and device, electronic equipment and storage medium
CN111870956A (en) * 2020-08-21 2020-11-03 网易(杭州)网络有限公司 Method and device for split-screen display of game fighting, electronic equipment and storage medium
CN111870956B (en) * 2020-08-21 2024-01-26 网易(杭州)网络有限公司 Method and device for split screen display of game sightseeing, electronic equipment and storage medium
CN112672174B (en) * 2020-12-11 2023-07-07 咪咕文化科技有限公司 Split-screen live broadcast method, acquisition device, playing device and storage medium
CN112672174A (en) * 2020-12-11 2021-04-16 咪咕文化科技有限公司 Split-screen live broadcast method, acquisition equipment, playing equipment and storage medium
CN112637624A (en) * 2020-12-14 2021-04-09 广州繁星互娱信息科技有限公司 Live stream processing method, device, equipment and storage medium
CN113596319A (en) * 2021-06-16 2021-11-02 荣耀终端有限公司 Picture-in-picture based image processing method, apparatus, storage medium, and program product
WO2022262313A1 (en) * 2021-06-16 2022-12-22 荣耀终端有限公司 Picture-in-picture-based image processing method, device, storage medium, and program product
WO2022262549A1 (en) * 2021-06-16 2022-12-22 荣耀终端有限公司 Method for photographing video and electronic device
CN115484390A (en) * 2021-06-16 2022-12-16 荣耀终端有限公司 Video shooting method and electronic equipment
CN115484390B (en) * 2021-06-16 2023-12-19 荣耀终端有限公司 Video shooting method and electronic equipment
CN114827755A (en) * 2022-04-15 2022-07-29 咪咕文化科技有限公司 Video playing method, system, device and storage medium
CN115278278B (en) * 2022-07-01 2024-01-02 北京达佳互联信息技术有限公司 Page display method and device, electronic equipment and storage medium
CN115278278A (en) * 2022-07-01 2022-11-01 北京达佳互联信息技术有限公司 Page display method and device, electronic equipment and storage medium
CN115174983A (en) * 2022-07-01 2022-10-11 北京达佳互联信息技术有限公司 Live broadcast processing method and device, electronic equipment and storage medium
WO2024124993A1 (en) * 2022-12-13 2024-06-20 荣耀终端有限公司 Recording setting method, electronic device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200211