WO2018000227A1 - Method and device for video live broadcast (Procédé et dispositif de diffusion vidéo) - Google Patents

Method and device for video live broadcast (Procédé et dispositif de diffusion vidéo)

Info

Publication number
WO2018000227A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
video picture
picture
synthesizing
rear camera
Prior art date
Application number
PCT/CN2016/087612
Other languages
English (en)
Chinese (zh)
Inventor
李志刚
Original Assignee
北京小米移动软件有限公司 (Beijing Xiaomi Mobile Software Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京小米移动软件有限公司 (Beijing Xiaomi Mobile Software Co., Ltd.)
Priority to PCT/CN2016/087612 priority Critical patent/WO2018000227A1/fr
Priority to CN201680000673.4A priority patent/CN106165430B/zh
Publication of WO2018000227A1 publication Critical patent/WO2018000227A1/fr

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21: Server components or server architectures
    • H04N 21/218: Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187: Live feed
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47: End-user applications
    • H04N 21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47: End-user applications
    • H04N 21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47202: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting content on demand, e.g. video on demand
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/60: Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N 21/65: Transmission of management data between client and server
    • H04N 21/658: Transmission by the client directed to the server

Definitions

  • the present disclosure relates to the field of network technologies, and in particular, to a video live broadcast method and apparatus.
  • the live broadcast mode of current mainstream live broadcast software is that one anchor broadcasts while multiple viewers watch in that anchor's live broadcast room.
  • in the related art, the front camera of the mobile terminal is usually started manually for shooting, so that viewers in the live broadcast room can see the anchor's broadcast.
  • the anchor then manually starts the rear camera of the mobile terminal to shoot, so that the current live picture switches from the anchor picture to the exterior scene, and viewers in the live broadcast room watch the scene instead.
  • the present disclosure provides a video live broadcast method and apparatus.
  • a video live broadcast method for an anchor terminal, where the method includes:
  • the method before the acquiring the first video image captured by the front camera and the second video image captured by the rear camera, the method further includes:
  • if only the front camera is currently in the shooting state, a first startup option for the rear camera is displayed on the live video page; when a triggering operation on the first startup option is detected, the rear camera is started for video shooting; or,
  • the synthesizing the first video picture and the second video picture comprises:
  • the first video picture and the second video picture are combined according to the left and right split mode.
  • the synthesizing the first video picture and the second video picture comprises:
  • the first video picture and the second video picture are combined according to an up-down split mode.
  • the synthesizing the first video picture and the second video picture comprises:
  • the first video picture and the second video picture are combined according to a picture-in-picture mode.
  • the method further includes:
  • the composite mode switching option is used to indicate a manner of synthesizing the first video picture and the second video picture;
  • a video live broadcast apparatus for an anchor terminal, where the apparatus includes:
  • the acquisition module is configured to acquire a first video image captured by the front camera and a second video image captured by the rear camera;
  • a synthesis module configured to combine the first video picture and the second video picture to obtain a target video picture
  • a sending module configured to send the target video picture to a server, so that the server forwards the target video picture to the audience terminal.
  • the apparatus further includes:
  • the first display module is configured to display the first startup option of the rear camera on the live video page if only the front camera is currently in the shooting state;
  • the startup module is configured to: after detecting the triggering operation of the first startup option, start the rear camera to perform video frame shooting;
  • the first display module is further configured to display a second startup option of the front camera on the live video page if only the rear camera is currently in a shooting state;
  • the startup module is further configured to activate the front camera to perform video screen shooting after detecting a triggering operation of the second startup option.
  • the synthesizing module is configured to synthesize the first video picture and the second video picture in a left-right split mode.
  • the synthesizing module is configured to synthesize the first video picture and the second video picture according to an up-down split mode.
  • the synthesizing module is configured to synthesize the first video picture and the second video picture in a picture-in-picture mode.
  • the apparatus further includes:
  • a second display module configured to display a synthetic mode switching option on the live video page, where the synthetic mode switching option is used to indicate a manner of synthesizing the first video image and the second video image;
  • the synthesizing module is further configured to: after detecting the triggering operation corresponding to the composite mode switching instruction, synthesize the first video picture and the second video picture based on the synthesis manner indicated by the composite mode switching instruction to obtain a new target video picture;
  • the sending module is further configured to send the new target video picture to the server, so that the server forwards the new target video picture to the audience terminal.
  • a video live broadcast apparatus including:
  • a memory for storing processor executable instructions
  • the processor is configured to: acquire a first video picture captured by a front camera and a second video picture captured by a rear camera; synthesize the first video picture and the second video picture to obtain a target video picture; and send the target video picture to a server, so that the server forwards the target video picture to the viewer terminal.
  • the anchor terminal can start the front camera and the rear camera for dual-camera video capture, and synthesize the first video picture captured by the front camera with the second video picture captured by the rear camera, so as to display a target video picture covering the pictures taken by both cameras.
  • the target video picture is also sent to the audience terminal, which plays it synchronously, so that the anchor picture and the exterior picture are displayed at the same time. This overcomes the defect that the anchor cannot show the surroundings while showing himself, and cannot show himself while showing the surroundings; the live broadcast effect is better and user stickiness is improved.
  • FIG. 1 is a flowchart of a video live broadcast method according to an exemplary embodiment.
  • FIG. 2A is a flowchart of a video live broadcast method according to an exemplary embodiment.
  • FIG. 2B is a schematic diagram of a video picture, according to an exemplary embodiment.
  • FIG. 2C is a schematic diagram of a video picture, according to an exemplary embodiment.
  • FIG. 2D is a schematic diagram of a video picture, according to an exemplary embodiment.
  • FIG. 2E is a schematic diagram of a video picture according to an exemplary embodiment.
  • FIG. 3 is a block diagram of a video live broadcast apparatus according to an exemplary embodiment.
  • FIG. 4 is a block diagram of a video live broadcast apparatus according to an exemplary embodiment.
  • FIG. 5 is a block diagram of a video live broadcast apparatus according to an exemplary embodiment.
  • FIG. 6 is a block diagram of a video live broadcast apparatus according to an exemplary embodiment.
  • FIG. 1 is a flowchart of a video live broadcast method according to an exemplary embodiment.
  • as shown in FIG. 1, the method is used in an anchor terminal and includes the following steps.
  • step 101 a first video picture taken by the front camera and a second video picture taken by the rear camera are acquired.
  • step 102 the first video picture and the second video picture are combined to obtain a target video picture.
  • step 103 the target video picture is sent to the server to cause the server to forward the target video picture to the viewer terminal.
  • the anchor terminal can start the front camera and the rear camera for dual-camera video capture, and synthesize the first video picture captured by the front camera with the second video picture captured by the rear camera, thereby displaying a target video picture covering the pictures taken by both cameras; the target video picture is sent to the audience terminal, which also plays it synchronously, so that the anchor picture and the exterior picture are displayed at the same time.
  • this overcomes the defect that the anchor cannot show the surroundings when he wants to show himself, and cannot show himself when he wants to show the surroundings; the live broadcast effect is better and user stickiness is improved.
  • the method before the acquiring the first video image captured by the front camera and the second video image captured by the rear camera, the method further includes:
  • if only the front camera is currently in the shooting state, a first startup option for the rear camera is displayed on the live video page; when a triggering operation on the first startup option is detected, the rear camera is started for video shooting; or,
  • the synthesizing the first video picture and the second video picture comprises: synthesizing the first video picture and the second video picture according to a left-right split mode.
  • the synthesizing the first video picture and the second video picture comprises: synthesizing the first video picture and the second video picture according to an up-down split mode.
  • the synthesizing the first video picture and the second video picture comprises: synthesizing the first video picture and the second video picture according to a picture-in-picture mode.
  • the method further includes:
  • a composite mode switching option is displayed on the live video page, where the composite mode switching option is used to indicate a manner of synthesizing the first video picture and the second video picture;
  • FIG. 2A is a flowchart of a video live broadcast method according to an exemplary embodiment, in which the interacting parties are an anchor terminal, a server, and a viewer terminal.
  • the anchor terminal refers to a mobile terminal, while the audience terminal can be either a mobile terminal or a fixed terminal.
  • taking as an example the case where the anchor terminal first starts the front camera for video shooting and later starts the rear camera, the method includes the following steps, as shown in FIG. 2A.
  • step 201 the anchor terminal plays the first video picture captured by the front camera on the live video page.
  • for viewers, the anchor's picture is usually what they most want to see, and it is also what the anchor most wants to show, so the exterior scene is generally less important than the anchor picture.
  • therefore, when the anchor user starts a live broadcast on the anchor terminal, the preferred approach is to start the front camera for shooting.
  • the anchor terminal can be a smart phone, a tablet computer, or the like, which is not specifically limited in the embodiments of the present disclosure.
  • the live broadcast software on the anchor terminal can obtain the first video image captured by the front camera in real time and play it on the live video page.
  • the live broadcast software on the anchor terminal also uploads, in real time, the first video picture captured by the front camera together with the live broadcast identifier to the server; the server determines, according to the live broadcast identifier, the viewer user identifiers of the users watching this anchor's broadcast, and sends the first video picture to the audience terminals according to those identifiers. If live broadcast software is also installed on a viewer terminal, the first video picture can be displayed synchronously with the anchor terminal, with consistent playback progress.
  • for ease of description, the content captured by the front camera is collectively referred to as the first video picture; similarly, in the subsequent steps, the content captured by the rear camera is referred to as the second video picture.
  • the live broadcast identifier refers to the name or number of the live broadcast room, and can be generated by the server for the live broadcast room.
  • the viewer terminal can enter the live room by selecting the live room logo or entering the live room logo.
  • step 202 the anchor terminal displays the first startup option of the rear camera on the live video page; after detecting the trigger operation of the first startup option, the rear camera is activated to shoot.
  • during the live broadcast, the anchor user may need to show the exterior scene, for example to show the audience how heavy the rain is outside, or to broadcast the scene of an accident ahead.
  • a startup option of the rear camera is provided on the live video page.
  • the first startup option may specifically be a virtual button that can be clicked by the anchor user.
  • when the anchor terminal detects that the anchor user has clicked the first startup option, the live broadcast software calls the rear camera to perform video shooting.
  • the first startup option may be displayed in any area of the live video page other than the area used for playing the video picture, which is not specifically limited in the embodiment of the present disclosure.
  • the above steps 201 and 202 describe how to additionally start the rear camera for video shooting when only the front camera is in the shooting state. It should be noted that if only the rear camera is currently in the shooting state, a second startup option for the front camera can likewise be displayed on the live video page, and after the anchor terminal detects a triggering operation on the second startup option, the front camera is started for video shooting. That is, when one camera is active, the other camera can be started through the startup option displayed on the live video page.
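The toggling behavior described above (each camera's startup option appears only while the other camera alone is shooting) can be sketched as a tiny state machine. This is an illustrative sketch; the class, method, and option names are assumptions, not terms from the disclosure.

```python
class DualCameraController:
    """Sketch of the startup-option logic: when exactly one camera is
    shooting, the live page offers an option to start the other one."""

    def __init__(self):
        self.front_active = False
        self.rear_active = False

    def displayed_startup_option(self):
        # First startup option: shown while only the front camera shoots.
        if self.front_active and not self.rear_active:
            return "start_rear"
        # Second startup option: shown while only the rear camera shoots.
        if self.rear_active and not self.front_active:
            return "start_front"
        return None

    def trigger(self, option):
        # Triggering an option starts the corresponding camera.
        if option == "start_rear":
            self.rear_active = True
        elif option == "start_front":
            self.front_active = True

controller = DualCameraController()
controller.front_active = True            # anchor begins with the front camera
assert controller.displayed_startup_option() == "start_rear"
controller.trigger("start_rear")          # both cameras now shooting
assert controller.displayed_startup_option() is None
```

Once both cameras are active, no startup option remains, which matches the observation that the option only exists to bring the second camera online.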
  • step 203 the anchor terminal synthesizes the first video picture captured by the front camera and the second video picture captured by the rear camera to obtain a target video picture.
  • the anchor screen and the exterior scene screen are captured in real time.
  • after the live broadcast software obtains the first video picture taken by the front camera and the second video picture taken by the rear camera, in order to display the anchor picture and the exterior picture at the same time, it synthesizes the first video picture and the second video picture in one of the following manners, so that a target video picture such as those shown in FIG. 2B to FIG. 2D can be displayed on the anchor terminal.
  • the first video picture and the second video picture are combined according to the left and right split screen modes shown in FIG. 2B.
  • the left-right split mode refers to dividing the display area used for playing the video picture on the live video page into a left part and a right part: one part displays the first video picture taken by the front camera, and the other displays the second video picture taken by the rear camera. Constrained by the size of the anchor terminal's screen, the left-right split mode is usually used when the anchor terminal is in the landscape state.
  • the terminal is configured with a sensor, and according to the sensor information returned by the sensor, it can be determined whether the current posture of the anchor terminal is a horizontal screen state or a vertical screen state.
  • the anchor terminal may be configured with a magnetic field sensor, a gyro sensor, a six-axis orientation sensor, or a nine-axis rotation vector sensor, which is not limited in the embodiments of the present disclosure.
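As a minimal sketch of how the sensor-reported posture could select the default composition mode the text suggests (landscape favors the left-right split, portrait favors the up-down split); the function and mode names are illustrative assumptions:

```python
def default_split_mode(orientation):
    """Map the terminal posture to the split mode the disclosure associates
    with it: landscape -> left-right split, portrait -> up-down split.
    Picture-in-picture works in either posture, so it is never forced here."""
    if orientation == "landscape":
        return "left_right"
    if orientation == "portrait":
        return "up_down"
    raise ValueError(f"unknown orientation: {orientation}")

assert default_split_mode("landscape") == "left_right"
assert default_split_mode("portrait") == "up_down"
```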
  • when synthesizing, a method of combining image frames one by one is usually adopted. For example, after the rear camera is started, the image frame captured by the front camera that is currently being played is cropped to a certain extent, the first image frame captured by the rear camera is also cropped to a certain extent, and the two cropped image frames are synthesized according to the left-right split mode; the second image frame captured by the rear camera and the next frame captured by the front camera are synthesized in the same way, and so on. The embodiment of the present disclosure does not specifically limit this.
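The frame-by-frame crop-and-combine procedure just described can be illustrated with plain nested lists standing in for grayscale image frames. This is a simplified sketch under assumed data shapes (equal-sized frames, center cropping), not the disclosed implementation:

```python
def crop_center_columns(frame, keep):
    """Keep the central `keep` columns of each row; a stand-in for the
    'certain extent' of cropping mentioned in the text."""
    width = len(frame[0])
    start = (width - keep) // 2
    return [row[start:start + keep] for row in frame]

def compose_left_right(front_frame, rear_frame):
    """Synthesize one output frame in left-right split mode:
    front-camera picture on the left half, rear-camera picture on the right."""
    width = len(front_frame[0])
    left = crop_center_columns(front_frame, width // 2)
    right = crop_center_columns(rear_frame, width // 2)
    return [l + r for l, r in zip(left, right)]

# 2x4 toy frames: the front frame is all 1s, the rear frame all 2s.
front = [[1, 1, 1, 1], [1, 1, 1, 1]]
rear = [[2, 2, 2, 2], [2, 2, 2, 2]]
target = compose_left_right(front, rear)
assert target == [[1, 1, 2, 2], [1, 1, 2, 2]]
```

The up-down split mode is the same idea applied to rows instead of columns, which is why the text describes it with nearly identical wording.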
  • the first video picture and the second video picture are combined according to the up and down split mode shown in FIG. 2C.
  • the up-down split mode refers to dividing the display area used for playing the video picture on the live video page into an upper part and a lower part: one part displays the first video picture taken by the front camera, and the other displays the second video picture taken by the rear camera.
  • the up and down split mode is usually used when the anchor terminal is in the portrait state.
  • similarly, the image frame captured by the front camera that is currently being played is cropped to a certain extent, the first image frame captured by the rear camera is also cropped to a certain extent, and the two cropped image frames are synthesized according to the up-down split mode; the second image frame captured by the rear camera and the next frame captured by the front camera are synthesized in the same way, and so on. The embodiment of the present disclosure does not specifically limit this.
  • the first video picture and the second video picture are combined according to the picture-in-picture mode shown in FIGS. 2D and 2E;
  • the picture-in-picture mode refers to dividing the display area for playing a video picture on the live video page into two parts. One part is contained in the area of the other part. Similar to the above two cases, one part is used to display the first video picture taken by the front camera, and the other part is used to display the second video picture taken by the rear camera.
  • the picture-in-picture mode is applicable when the anchor terminal is in the portrait or landscape mode.
  • when synthesizing, the width and height of the image frame currently captured by the front camera are compressed to a certain extent, and the width and height of the first image frame captured by the rear camera are compressed likewise; the two compressed image frames are then synthesized according to the picture-in-picture mode, and the second image frame captured by the rear camera and the next frame captured by the front camera are synthesized in the same way.
  • the embodiment of the present disclosure does not specifically limit this.
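The compress-and-overlay procedure for picture-in-picture can be sketched in the same nested-list style; the 2x down-scaling factor and the top-left inset position are arbitrary illustrative choices, not specified by the disclosure:

```python
def downscale_2x(frame):
    """Compress width and height by keeping every second pixel; a stand-in
    for the 'certain extent' of compression mentioned in the text."""
    return [row[::2] for row in frame[::2]]

def compose_pip(main_frame, inset_frame):
    """Overlay the compressed inset frame onto the top-left corner of the
    main frame (the sixth display area sitting inside the fifth, FIG. 2D)."""
    result = [row[:] for row in main_frame]   # copy, leaving the input intact
    small = downscale_2x(inset_frame)
    for y, row in enumerate(small):
        for x, pixel in enumerate(row):
            result[y][x] = pixel
    return result

main = [[0] * 4 for _ in range(4)]            # 4x4 rear-camera frame, all 0s
inset = [[9] * 4 for _ in range(4)]           # 4x4 front-camera frame, all 9s
target = compose_pip(main, inset)
assert target[0][:2] == [9, 9]                # inset occupies the top-left 2x2
assert target[3] == [0, 0, 0, 0]              # rest of the main frame intact
```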
  • step 204 the anchor terminal plays the target video picture, and sends the target video picture and the live broadcast room identifier to the server.
  • the live broadcast software of the anchor terminal can directly play the target video picture on the live video page of the anchor terminal. If the target video picture is synthesized according to the left-right split mode, the anchor terminal plays the first video picture in the first display area of the live video page and the second video picture in the second display area, as shown in FIG. 2B.
  • the first display area is located to the left of the second display area.
  • if the target video picture is synthesized according to the up-down split mode, the anchor terminal plays the first video picture in the third display area of the live video page and the second video picture in the fourth display area, as shown in FIG. 2C.
  • the third display area is located above the fourth display area.
  • if the target video picture is synthesized according to the picture-in-picture mode, the anchor terminal plays the first video picture in the fifth display area of the live video page and the second video picture in the sixth display area, as shown in FIG. 2D.
  • the sixth display area is located in the fifth display area.
  • the anchor terminal adopts the dual camera live broadcast mode of the front camera and the rear camera.
  • the anchor terminal sends the target video picture and the live broadcast identifier to the server in real time, so that the server can determine, according to the live broadcast identifier, all the audience users watching this anchor's broadcast and deliver the target video picture to all of them. For the detailed process, see step 205 below.
  • the embodiment of the present disclosure further provides a composite mode switching option on the live video page of the anchor terminal; when the anchor user triggers the composite mode switching option, a composite mode switching instruction is generated.
  • the composite mode switching instruction indicates the manner of synthesizing the first video picture and the second video picture.
  • the composite mode switching option specifically includes two types, taken here as a first switching option and a second switching option, where the first switching option is used to instruct the anchor terminal to swap the display areas in which the first video picture and the second video picture are synthesized.
  • after the anchor terminal detects a triggering operation on the first switching option, from then on the anchor terminal synthesizes the first video picture taken by the front camera and the second video picture taken by the rear camera according to the original split mode but with the display areas swapped, obtains a new target video picture, and sends the new target video picture to the viewer terminal.
  • the anchor terminal plays the second video picture in the display area of the original playing first video picture, and plays the first video picture in the display area of the original playing second video picture.
  • for the picture-in-picture mode, the anchor terminal displays the first video picture in the sub display area and the second video picture in the main display area.
  • the area of the main display area is generally much larger than the area of the sub display area.
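Functionally, the first switching option amounts to exchanging the arguments passed to the synthesis routine while the split mode itself stays unchanged; a sketch with hypothetical names:

```python
def synthesize(first, second):
    """Placeholder synthesis: records which picture lands in which display
    area of the current split mode (real code would compose pixel frames)."""
    return {"area_1": first, "area_2": second}

def apply_first_switch(first, second):
    """After the first switching option is triggered, the two pictures trade
    display areas; the split mode itself is not changed."""
    return synthesize(second, first)

before = synthesize("front", "rear")
after = apply_first_switch("front", "rear")
assert before == {"area_1": "front", "area_2": "rear"}
assert after == {"area_1": "rear", "area_2": "front"}
```

In picture-in-picture mode the same swap moves the front-camera picture from the main display area into the sub display area and vice versa.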
  • embodiments of the present disclosure also support the process of switching video pictures between different split screen modes.
  • a second switching option of the split screen mode is provided on the live video page of the anchor terminal.
  • the second switching option may be specifically composed of a drop-down list displaying a split screen mode and an OK button.
  • after the anchor user selects a sub-list item in the drop-down list and clicks the OK button, the anchor terminal determines that a triggering operation on the second switching option has been detected, and takes the split mode indicated by the sub-list item as the target split mode; from then on, the anchor terminal synthesizes the first video picture and the second video picture according to the target split mode, obtains a new target video picture, and sends the new target video picture to the viewer terminal.
  • each of the split screen modes may also be associated with a single switching option, which is not specifically limited in the embodiment of the present disclosure.
  • the anchor terminal then synthesizes the first video picture and the second video picture according to the up-down split mode shown in FIG. 2C, so that the picture shown in FIG. 2C is displayed on the anchor terminal.
  • step 205 after receiving the target video picture and the live broadcast identifier, the server determines the viewer user identifiers associated with the live broadcast identifier, and delivers the target video picture to the viewer terminals based on those identifiers.
  • for a live broadcast room, the server generally stores a correspondence between the live broadcast identifier and the viewer user identifiers, so as to record which viewer users are watching that live broadcast. Therefore, after receiving the live broadcast identifier, the server can determine the viewer user identifiers associated with it according to the correspondence, and deliver the target video picture to all audience users in the live broadcast room.
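Step 205's lookup (live broadcast identifier to viewer user identifiers to delivery) can be sketched as a dictionary keyed by room identifier; the data shapes and names are assumptions for illustration:

```python
# Correspondence stored by the server: live broadcast id -> viewer user ids.
room_viewers = {
    "room_42": {"viewer_a", "viewer_b"},
    "room_99": {"viewer_c"},
}

def forward_target_picture(live_id, target_picture, send):
    """Deliver the target video picture to every viewer recorded for the
    room; `send` stands in for the real network transport."""
    for viewer_id in room_viewers.get(live_id, set()):
        send(viewer_id, target_picture)

delivered = []
forward_target_picture("room_42", "<frame>",
                       lambda viewer, pic: delivered.append((viewer, pic)))
assert sorted(delivered) == [("viewer_a", "<frame>"), ("viewer_b", "<frame>")]
```

An unknown identifier simply delivers to nobody, mirroring a room with no recorded viewers.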
  • step 206 after receiving the target video picture sent by the server, the audience terminal determines the split screen mode used when the target video picture is synthesized, and plays the target video picture on the live video page according to the split mode.
  • if the target video picture was synthesized according to the left-right split mode, the audience terminal plays the first video picture in the first display area of the live video page and the second video picture in the second display area, as shown in FIG. 2B.
  • the first display area is located to the left of the second display area.
  • if the target video picture was synthesized according to the up-down split mode, the viewer terminal plays the first video picture in the third display area of the live video page and the second video picture in the fourth display area, as shown in FIG. 2C.
  • the third display area is located above the fourth display area.
  • if the target video picture was synthesized according to the picture-in-picture mode, the viewer terminal plays the first video picture in the fifth display area of the live video page and the second video picture in the sixth display area, as shown in FIG. 2D.
  • the sixth display area is located in the fifth display area.
  • the audience terminal displays the target video picture in the same way as the anchor terminal, so that the live pictures shown on the anchor terminal and the audience terminal are consistent.
  • for example, the picture displayed on the left side of the anchor terminal is also the picture displayed on the left side of the viewer terminal; this avoids the situation where the anchor tells viewers to pay attention to something in the left picture but a viewer has to look for it on the right side of the screen.
  • the anchor terminal can start the front camera and the rear camera for dual-camera video capture, and synthesize the first video picture captured by the front camera with the second video picture captured by the rear camera, so that a target video picture covering both pictures is displayed and sent to the audience terminal for synchronous playback; the anchor picture and the exterior picture are displayed at the same time, the live broadcast effect is better, and user stickiness is improved.
  • FIG. 3 is a block diagram of a live video broadcast device for an anchor terminal, according to an exemplary embodiment.
  • the apparatus includes an acquisition module 301, a synthesis module 302, and a transmission module 303.
  • the obtaining module 301 is configured to acquire a first video image captured by the front camera and a second video image captured by the rear camera;
  • the synthesizing module 302 is configured to synthesize the first video picture and the second video picture to obtain a target video picture;
  • the sending module 303 is configured to send the target video picture to the server, so that the server forwards the target video picture to the audience terminal.
  • the apparatus further includes:
  • the first display module 304 is configured to display the first startup option of the rear camera on the live video page if only the front camera is currently in the shooting state;
  • the startup module 305 is configured to: after detecting the triggering operation of the first startup option, start the rear camera to perform video screen shooting;
  • the first display module 304 is further configured to display a second startup option of the front camera on the live video page if only the rear camera is currently in a shooting state;
  • the startup module 305 is further configured to, after detecting the triggering operation of the second startup option, activate the front camera to perform video screen shooting.
  • the synthesizing module 302 is configured to synthesize the first video picture and the second video picture according to a left and right split screen mode.
  • the synthesizing module 302 is configured to synthesize the first video picture and the second video picture according to a top and bottom split mode.
  • the synthesizing module 302 is configured to follow the picture-in-picture mode. The first video picture and the second video picture are combined.
  • the apparatus further includes:
  • the second display module 306 is configured to display a composite mode switching option on the live video page, where the synthetic mode switching option is used to indicate a manner of synthesizing the first video image and the second video image;
  • the synthesizing module 302 is further configured to, after detecting the triggering operation of the synthesizing mode switching instruction, the first video picture and the second video picture based on the synthesizing manner indicated by the synthesizing mode switching instruction Perform synthesis to obtain a new target video screen;
  • the sending module 303 is further configured to send the new target video picture to the server, so that the server forwards the new target video picture to the audience terminal.
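The switch flow described above (detect the trigger, re-synthesize in the newly indicated manner, resend the new target picture) can be sketched as follows; this is an assumption-laden illustration, not the patented code — the mode names, the `send` callback, and the minimal stand-in compositors are all hypothetical:

```python
def handle_mode_switch(first, second, new_mode, send):
    """Re-synthesize the two live pictures in the newly selected manner
    and hand the new target picture to the sending path, which would
    forward it via the server to audience terminals."""
    compositors = {
        # Minimal stand-ins for the real per-mode compositing routines.
        "left_right": lambda a, b: [ra[: len(ra) // 2] + rb[len(rb) // 2 :]
                                    for ra, rb in zip(a, b)],
        "top_bottom": lambda a, b: a[: len(a) // 2] + b[len(b) // 2 :],
    }
    new_target = compositors[new_mode](first, second)
    send(new_target)  # stands in for sending module 303 -> server
    return new_target
```

Because the audience terminal only ever receives the already-composited target picture, switching modes on the anchor terminal needs no change on the viewer side.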
  • the anchor terminal can activate both the front camera and the rear camera to capture video with dual cameras, and can synthesize the first video picture captured by the front camera with the second video picture captured by the rear camera, thereby displaying a target video picture that covers the pictures taken by both cameras. The target video picture can also be sent to the audience terminal, so that the audience terminal plays it synchronously and the anchor picture and the surrounding scene are displayed at the same time.
  • this overcomes the defect that the anchor cannot show the surroundings when wanting to show himself, and cannot show himself when wanting to show the surroundings, so the live broadcast effect is better and user stickiness is higher.
  • FIG. 6 is a block diagram of a video broadcast device 600, according to an exemplary embodiment.
  • device 600 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
  • the apparatus 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an I/O (Input/Output) interface 612, Sensor component 614, and communication component 616.
  • Processing component 602 typically controls the overall operation of device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • Processing component 602 can include one or more processors 620 to execute instructions to perform all or part of the steps of the above described methods.
  • processing component 602 can include one or more modules to facilitate interaction between processing component 602 and other components.
  • processing component 602 can include a multimedia module to facilitate interaction between multimedia component 608 and processing component 602.
  • Memory 604 is configured to store various types of data to support operation at device 600. Examples of such data include instructions for any application or method operating on device 600, contact data, phone book data, messages, pictures, videos, and the like.
  • the memory 604 can be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as SRAM (Static Random Access Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), PROM (Programmable Read-Only Memory), ROM (Read-Only Memory), magnetic memory, flash memory, a magnetic disk, or an optical disc.
  • Power component 606 provides power to various components of device 600.
  • Power component 606 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 600.
  • the multimedia component 608 includes a screen that provides an output interface between the device 600 and the user.
  • the screen may include an LCD (Liquid Crystal Display) and a TP (Touch Panel). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may sense not only the boundary of the touch or sliding action, but also the duration and pressure associated with the touch or slide operation.
  • the multimedia component 608 includes a front camera and/or a rear camera. The front camera and/or the rear camera can receive external multimedia data when the device 600 is in an operating mode, such as a shooting mode or a video mode. Each of the front and rear cameras can be a fixed optical lens system or have focal length and optical zoom capability.
  • the audio component 610 is configured to output and/or input an audio signal.
  • the audio component 610 includes a MIC (Microphone) that is configured to receive an external audio signal when the device 600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode.
  • the received audio signal may be further stored in memory 604 or transmitted via communication component 616.
  • audio component 610 also includes a speaker for outputting an audio signal.
  • the I/O interface 612 provides an interface between the processing component 602 and the peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
  • Sensor assembly 614 includes one or more sensors for providing device 600 with a status assessment of various aspects.
  • sensor component 614 can detect an open/closed state of device 600 and the relative positioning of components, such as the display and keypad of device 600; sensor component 614 can also detect a change in position of device 600 or of a component of device 600, the presence or absence of user contact with device 600, the orientation or acceleration/deceleration of device 600, and temperature changes of device 600.
  • Sensor assembly 614 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor assembly 614 may also include a light sensor, such as a CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge-coupled Device) image sensor for use in imaging applications.
  • the sensor component 614 can also include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 616 is configured to facilitate wired or wireless communication between device 600 and other devices.
  • the device 600 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
  • communication component 616 receives broadcast signals or broadcast associated information from an external broadcast management system via a broadcast channel.
  • the communication component 616 further includes an NFC (Near Field Communication) module to facilitate short-range communication.
  • the NFC module can be implemented based on RFID (Radio Frequency Identification) technology, IrDA (Infrared Data Association) technology, UWB (Ultra Wideband) technology, BT (Bluetooth) technology, and other technologies.
  • the device 600 may be implemented by one or more ASICs (Application Specific Integrated Circuits), DSPs (Digital Signal Processors), DSPDs (Digital Signal Processing Devices), PLDs (Programmable Logic Devices), FPGAs (Field Programmable Gate Arrays), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
  • non-transitory computer readable storage medium comprising instructions, such as a memory 604 comprising instructions executable by processor 620 of apparatus 600 to perform the above method.
  • the non-transitory computer readable storage medium may be a ROM, a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • a non-transitory computer readable storage medium that, when executed by a processor of a mobile terminal, enables the mobile terminal to perform the video live broadcast method described above.

Abstract

The present invention relates to the field of network technology, and discloses a video broadcast method and device, the method comprising: acquiring a first video frame captured by a front camera and a second video frame captured by a rear camera; synthesizing the first video frame and the second video frame to obtain a target video frame; and sending the target video frame to a server, so that the server forwards the target video frame to a viewer terminal. By means of the front camera and the rear camera, both the host picture and the surroundings can be shown at the same time, overcoming the defect that a host cannot show the surroundings while wanting to show himself, or cannot show himself while wanting to show the surroundings, thereby achieving a desirable broadcast effect and high user retention.
PCT/CN2016/087612 2016-06-29 2016-06-29 Video broadcast method and device WO2018000227A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2016/087612 WO2018000227A1 (fr) 2016-06-29 2016-06-29 Video broadcast method and device
CN201680000673.4A CN106165430B (zh) 2016-06-29 2016-06-29 Video live broadcast method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/087612 WO2018000227A1 (fr) 2016-06-29 2016-06-29 Video broadcast method and device

Publications (1)

Publication Number Publication Date
WO2018000227A1 true WO2018000227A1 (fr) 2018-01-04

Family

ID=57341488

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/087612 WO2018000227A1 (fr) 2016-06-29 2016-06-29 Video broadcast method and device

Country Status (2)

Country Link
CN (1) CN106165430B (fr)
WO (1) WO2018000227A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110719416A (zh) * 2019-09-30 2020-01-21 咪咕视讯科技有限公司 Live broadcast method, communication device and computer-readable storage medium

Families Citing this family (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106534695B (zh) * 2016-11-28 2021-10-22 宇龙计算机通信科技(深圳)有限公司 Shooting method, shooting device and terminal
CN106658215A (zh) * 2016-12-15 2017-05-10 北京小米移动软件有限公司 Method and device for pushing live broadcast files
CN106791448A (zh) * 2017-02-27 2017-05-31 努比亚技术有限公司 Mobile terminal and shooting method
CN107071329A (zh) * 2017-02-27 2017-08-18 努比亚技术有限公司 Method and device for automatically switching cameras during a video call
CN107018334A (zh) * 2017-03-31 2017-08-04 努比亚技术有限公司 Dual-camera-based application processing method and device
CN108881926A (zh) * 2017-05-15 2018-11-23 环达电脑(上海)有限公司 In-vehicle network live broadcast device and method
CN107197094A (zh) * 2017-05-23 2017-09-22 努比亚技术有限公司 Camera display method, terminal and computer-readable storage medium
CN107360400B (zh) * 2017-07-27 2021-05-28 上海传英信息技术有限公司 Video recording method and device for a smart device camera
CN109413437 (zh) 2017-08-15 2019-03-01 深圳富泰宏精密工业有限公司 Electronic device and method for transmitting a video stream
CN108111920A (zh) * 2017-12-11 2018-06-01 北京小米移动软件有限公司 Video information processing method and device
JP7135472B2 (ja) * 2018-06-11 2022-09-13 カシオ計算機株式会社 Display control device, display control method and display control program
CN111356000A (zh) * 2018-08-17 2020-06-30 北京达佳互联信息技术有限公司 Video synthesis method, device, equipment and storage medium
CN110475015A (zh) * 2018-09-03 2019-11-19 王闯 Dual-display function with front and rear mobile phone cameras working synchronously
CN110139064B (zh) * 2018-09-29 2021-10-01 广东小天才科技有限公司 Video call method for a wearable device, and wearable device
CN109688448A (zh) * 2018-11-26 2019-04-26 杨豫森 Dual-view camera live broadcast system and method
CN112019906A (zh) * 2019-05-30 2020-12-01 上海哔哩哔哩科技有限公司 Live broadcast method, computer device and readable storage medium
CN110809100A (zh) * 2019-10-30 2020-02-18 北京字节跳动网络技术有限公司 Video processing method and device, terminal and storage medium
CN110784674B (zh) * 2019-10-30 2022-03-15 北京字节跳动网络技术有限公司 Video processing method, device, terminal and storage medium
CN110784735A (zh) * 2019-11-12 2020-02-11 广州虎牙科技有限公司 Live broadcast method, device, mobile terminal, computer device and storage medium
CN111327823A (зh) * 2020-02-28 2020-06-23 深圳看到科技有限公司 Video generation method, device and corresponding storage medium
CN111416952A (zh) * 2020-03-05 2020-07-14 深圳市多亲科技有限公司 Mobile live video method, device and mobile terminal
CN111327856A (zh) * 2020-03-18 2020-06-23 北京金和网络股份有限公司 Five-fixed video capture and synthesis processing method, device and readable storage medium
CN113556482A (zh) * 2020-04-24 2021-10-26 阿里巴巴集团控股有限公司 Multi-camera-shooting-based video processing method, device and system
CN111698436A (zh) * 2020-06-22 2020-09-22 杭州晶一智能科技有限公司 Range hood usable for live video broadcasting and control method therefor
CN112291519B (zh) * 2020-10-21 2022-02-15 深圳慧源创新科技有限公司 Display picture switching method, device, electronic device and storage medium
CN114466131B (zh) * 2020-11-10 2022-12-23 荣耀终端有限公司 Cross-device shooting method and related device
CN112672174B (zh) * 2020-12-11 2023-07-07 咪咕文化科技有限公司 Split-screen live broadcast method, capture device, playback device and storage medium
CN113170225A (zh) * 2021-02-09 2021-07-23 百果园技术(新加坡)有限公司 Picture window display method, device, terminal and storage medium
CN113453022B (zh) * 2021-06-30 2023-05-16 康佳集团股份有限公司 Image display method, device, television and storage medium
CN113573117A (zh) * 2021-07-15 2021-10-29 广州方硅信息技术有限公司 Live video method, device and computer device
CN114143487A (zh) * 2021-12-15 2022-03-04 深圳市前海手绘科技文化有限公司 Video recording method and device
WO2023142959A1 (fr) * 2022-01-30 2023-08-03 华为技术有限公司 Photographing method for a multi-camera photographing system, and device, storage medium and program product
CN114979746B (zh) * 2022-05-13 2024-03-12 北京字跳网络技术有限公司 Video processing method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103856617A (zh) * 2012-12-03 2014-06-11 联想(北京)有限公司 Photographing method and user terminal
US20140240551A1 (en) * 2013-02-23 2014-08-28 Samsung Electronics Co., Ltd. Apparatus and method for synthesizing an image in a portable terminal equipped with a dual camera
CN104365088A (zh) * 2012-06-08 2015-02-18 三星电子株式会社 Multi-channel communication using multiple cameras
CN105120172A (zh) * 2015-09-07 2015-12-02 青岛海信移动通信技术股份有限公司 Photographing method using front and rear cameras of a mobile terminal, and mobile terminal
CN105357542A (zh) * 2015-11-20 2016-02-24 广州华多网络科技有限公司 Live broadcast method, device and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140133363A (ko) * 2013-05-10 2014-11-19 삼성전자주식회사 Display apparatus and control method thereof
CN103945275B (zh) * 2014-03-28 2017-05-03 小米科技有限责任公司 Image recording control method, device and mobile terminal


Also Published As

Publication number Publication date
CN106165430B (zh) 2019-11-08
CN106165430A (zh) 2016-11-23

Similar Documents

Publication Publication Date Title
WO2018000227A1 (fr) Video broadcast method and device
WO2017181556A1 (fr) Method and device for live streaming of a video game
CN111818359B (zh) Processing method and device for live interactive video, electronic device and server
WO2017219347A1 (fr) Live broadcast display method, device and system
CN106791893B (zh) Live video method and device
EP3125530B1 (fr) Method and device for video recording
KR101680714B1 (ko) Real-time video providing method, device, server, terminal device, program and recording medium
WO2017101485A1 (fr) Video display device and method
CN108419016B (zh) Shooting method, device and terminal
CN106506448B (zh) Live broadcast display method, device and terminal
WO2017181551A1 (fr) Video processing method and device
US20210281909A1 (en) Method and apparatus for sharing video, and storage medium
CN106210496B (zh) Photo shooting method and device
WO2017036038A1 (fr) Video effect processing method and apparatus, and terminal device
KR20170023885A (ko) Technique for synthesizing and transmitting context information during a voice or video call
JP2016535351A (ja) Video information sharing method, device, program, and recording medium
CN106028137A (zh) Live broadcast processing method and device
US11539888B2 (en) Method and apparatus for processing video data
JP6385429B2 (ja) Method and device for playing stream media data
CN109922252B (zh) Short video generation method and device, and electronic device
WO2016045323A1 (fr) Method and device for controlling presentation of a video picture
WO2018018508A1 (fr) Playback control method and apparatus
US20170054906A1 (en) Method and device for generating a panorama
WO2017024713A1 (fr) Video image adjustment method, apparatus and terminal
WO2018053722A1 (fr) Panoramic photo capture method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16906628

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16906628

Country of ref document: EP

Kind code of ref document: A1