CN110536075B - Video generation method and device - Google Patents

Video generation method and device

Info

Publication number: CN110536075B (application CN201910892709.XA)
Authority: CN (China)
Prior art keywords: terminal, video, communication link, user identity, synthesized
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN110536075A
Inventor: 胡晨鹏
Current Assignee: Shanghai Zhangmen Science and Technology Co Ltd
Original Assignee: Shanghai Zhangmen Science and Technology Co Ltd
Application filed by Shanghai Zhangmen Science and Technology Co Ltd
Priority to CN201910892709.XA
Publication of CN110536075A
Application granted
Publication of CN110536075B

Classifications

    • H04N21/25816 — Selective content distribution; management of client data involving client authentication
    • H04N21/43637 — Adapting the video or multiplex stream to a specific local network involving a wireless protocol, e.g. Bluetooth, RF or wireless LAN [IEEE 802.11]
    • H04N21/44016 — Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N21/4788 — Supplemental services communicating with other users, e.g. chatting
    • H04N5/2624 — Studio circuits for obtaining an image which is composed of whole input images, e.g. splitscreen
    • H04N5/265 — Mixing
    • H04N7/141 — Systems for two-way working between two video terminals, e.g. videophone

Abstract

The embodiments of the application disclose a video generation method and device. In one embodiment, the method comprises: sending a video request to at least one second terminal; in response to determining that the first terminal matches the at least one second terminal, shooting a video and receiving a video to be synthesized transmitted by the at least one second terminal, wherein the video to be synthesized conforms to the shooting parameters; and synthesizing the shot video and the at least one video to be synthesized to obtain a panoramic video. The embodiments of the application can synthesize the videos shot by the individual terminals, combining videos with several field-of-view angles into a panoramic video with a more comprehensive field of view. In addition, by matching the first terminal with the second terminal, the embodiments prevent an unknown device from joining the shooting and synthesis process, ensuring the information security of the devices and the smooth operation of video synthesis.

Description

Video generation method and device
Technical Field
The embodiments of the application relate to the field of computer technology, in particular to the field of internet technology, and more specifically to a video generation method and device.
Background
With the development of terminal devices, more and more terminal devices provide a shooting function. Because such devices are easy to carry, many users have gradually become accustomed to shooting videos with them as the devices have become widespread.
However, the camera of the mobile terminal used often covers only a small viewing angle, so the captured field of view is limited.
Disclosure of Invention
The embodiment of the application provides a video generation method and device.
In a first aspect, an embodiment of the present application provides a video generation method applied to a first terminal, the method comprising: sending a video request to at least one second terminal, wherein the video request carries shooting parameters and a user identity corresponding to the first terminal, the distance between the position of the first terminal and the position of the at least one second terminal is smaller than a target distance, and the user identity corresponding to the first terminal is different from the user identity corresponding to the second terminal; in response to determining that the first terminal matches the at least one second terminal, shooting a video and receiving a video to be synthesized transmitted by the at least one second terminal, wherein the video to be synthesized conforms to the shooting parameters; and synthesizing the shot video and the at least one video to be synthesized to obtain a panoramic video.
In a second aspect, an embodiment of the present application provides a video generation method applied to a second terminal, the method comprising: receiving a video request sent by a first terminal, wherein the video request carries shooting parameters and a user identity corresponding to the first terminal, the distance between the position of the second terminal and the position of the first terminal is smaller than a target distance, and the user identity corresponding to the first terminal is different from the user identity corresponding to the second terminal; determining whether the first terminal and the second terminal match; and if a match is determined, in response to detecting a shooting start operation, shooting a video that conforms to the shooting parameters as the video to be synthesized, and transmitting the video to be synthesized to the first terminal, so that the first terminal synthesizes the video it has shot with the video to be synthesized to obtain a panoramic video.
In a third aspect, an embodiment of the present application provides a video generating apparatus applied to a first terminal, the apparatus comprising: a sending unit configured to send a video request to at least one second terminal, wherein the video request carries shooting parameters and a user identity corresponding to the first terminal, the distance between the position of the first terminal and the position of the at least one second terminal is smaller than a target distance, and the user identity corresponding to the first terminal is different from the user identity corresponding to the second terminal; a shooting unit configured to shoot a video in response to determining that the first terminal matches the at least one second terminal, and to receive a video to be synthesized transmitted by the at least one second terminal, wherein the video to be synthesized conforms to the shooting parameters; and a synthesizing unit configured to synthesize the shot video and the at least one video to be synthesized to obtain a panoramic video.
In a fourth aspect, an embodiment of the present application provides a video generating apparatus applied to a second terminal, the apparatus comprising: a receiving unit configured to receive a video request sent by a first terminal, wherein the video request carries shooting parameters and a user identity corresponding to the first terminal, the distance between the position of the second terminal and the position of the first terminal is smaller than a target distance, and the user identity corresponding to the first terminal is different from the user identity corresponding to the second terminal; a determining unit configured to determine whether the first terminal and the second terminal match; and a transmitting unit configured to, if a match is determined and in response to detecting a shooting start operation, shoot a video that conforms to the shooting parameters as the video to be synthesized and transmit the video to be synthesized to the first terminal, so that the first terminal synthesizes the video it has shot with the video to be synthesized to obtain a panoramic video.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a storage device for storing one or more programs which, when executed by one or more processors, cause the one or more processors to implement a method as in any embodiment of the video generation method.
In a sixth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a method as in any embodiment of the video generation method.
According to the video generation scheme provided by the embodiments of the application, a video request is sent to at least one second terminal, wherein the video request carries shooting parameters and a user identity corresponding to the first terminal, the distance between the position of the first terminal and the position of the at least one second terminal is smaller than a target distance, and the user identity corresponding to the first terminal is different from the user identity corresponding to the second terminal. Then, in response to determining that the first terminal matches the at least one second terminal, a video is shot and the video to be synthesized transmitted by the at least one second terminal is received, the video to be synthesized conforming to the shooting parameters. Finally, the shot video and the at least one video to be synthesized are synthesized to obtain a panoramic video. The embodiments can therefore combine the videos shot by the individual terminals, merging several field-of-view angles into a panoramic video with a more comprehensive field of view. In addition, by matching the first terminal with the second terminal, the embodiments prevent an unknown device from joining the shooting and synthesis process, ensuring the information security of the devices and the smooth operation of video synthesis.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram to which some embodiments of the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a video generation method according to the present application;
FIG. 3 is a schematic diagram of an application scenario of a video generation method according to the present application;
FIG. 4 is a flow diagram of yet another embodiment of a video generation method according to the present application;
FIG. 5 is a schematic block diagram of one embodiment of a video generation apparatus according to the present application;
FIG. 6 is a schematic block diagram of a computer system suitable for use in implementing an electronic device of some embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that, in the present application, the embodiments and features of the embodiments may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the video generation method or video generation apparatus of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. Network 104 is the medium used to provide communication links between terminal devices 101, 102, 103 and server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
Each user may interact with a server 105 via a network 104 using terminal devices 101, 102, 103, respectively, to receive or send messages, etc. For example, user a may send a video request to terminal devices 102, 103 using terminal device 101. Various communication client applications, such as video applications, live applications, instant messaging tools, mailbox clients, social platform software, and the like, may be installed on the terminal devices 101, 102, and 103.
Here, the terminal devices 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having a display screen, including but not limited to smart phones, tablet computers, e-book readers, laptop portable computers, desktop computers, and the like. When the terminal apparatuses 101, 102, 103 are software, they can be installed in the electronic apparatuses listed above. It may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
The server 105 may be a server providing various services, such as a background server providing support for the terminal devices 101, 102, 103. The backend server can analyze and process data such as the video request and feed back a processing result (for example, information of other terminal devices receiving the video request) to the terminal device.
It should be noted that the video generation method provided in the embodiment of the present application may be executed by the terminal devices 101, 102, and 103, and accordingly, the video generation apparatus may be disposed in the terminal devices 101, 102, and 103.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a video generation method according to the present application is shown. The video generation method is used for a first terminal and comprises the following steps:
step 201, sending a video request to at least one second terminal, where the video request carries a shooting parameter and a user identity corresponding to the first terminal, a distance between a position of the first terminal and a position of the at least one second terminal is less than a target distance, and the user identity corresponding to the first terminal is different from the user identity corresponding to the second terminal.
In this embodiment, an execution subject of the video generation method (e.g., a terminal device shown in fig. 1) may transmit a video request to at least one second terminal. Here, the distance between the first terminal and the second terminal is small, so that videos shot by the first terminal and the second terminal can be well combined. In some embodiments, the shooting parameter is a parameter that can be supported by the hardware device to shoot the video corresponding to the video request, and may include, but is not limited to, a resolution of a camera, a bitrate of the video (including a bitrate of audio in the video), and/or a frame rate of the video.
The user identity corresponding to the terminal may include, but is not limited to, an identity of a login user corresponding to an account logged in at the terminal. The account here may be an account of any application or any platform. For example, the user id may be a user id registered on the target social platform. In this embodiment, the user identities corresponding to different terminals may be different.
In practice, the execution subject may send the video request to the second terminal directly through a local area network (such as a mobile hotspot or a Bluetooth connection). The video request may carry information indicating each of the at least one second terminal, such as an identifier of the at least one second terminal.
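As an illustration only, the following sketch shows one possible shape of such a video request payload in Python; the field names, the JSON encoding and the concrete parameter values are assumptions made for the example and are not prescribed by this application.

```python
# A minimal sketch of a video request payload (hypothetical field names and values).
import json

def build_video_request(user_identity: str, target_terminal_ids: list) -> bytes:
    request = {
        "user_identity": user_identity,           # identity of the user logged in at the first terminal
        "target_terminals": target_terminal_ids,  # identifiers of the second terminal(s)
        "shooting_parameters": {                  # parameters the second terminal must support
            "width": 1920,
            "height": 1080,
            "frame_rate": 30,
            "video_bitrate_kbps": 8000,
            "audio_bitrate_kbps": 128,
        },
    }
    return json.dumps(request).encode("utf-8")

# Example: send the payload to each nearby second terminal over a mobile-hotspot
# LAN socket or a Bluetooth channel.
payload = build_video_request("user_a", ["terminal_102", "terminal_103"])
```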
In some optional implementations of this embodiment, the user identity corresponding to the first terminal and the user identity corresponding to the at least one second terminal are in a friend relationship on the target social platform and are in the same social session on the target social platform; and step 201 may comprise: initiating the video request within the social session. In some embodiments, the user identity corresponding to the first terminal and the user identity corresponding to the at least one second terminal are in the same social session on the target social platform without being in a friend relationship.
In these optional implementations, the user logged in at the first terminal and each user logged in at the at least one second terminal are friends on the target social platform. Moreover, the user identity corresponding to the first terminal may be in a social session with the user identity corresponding to each individual second terminal, that is, the user logged in at the first terminal may be in a one-to-one social session with the user logged in at each second terminal. Alternatively, the user identity corresponding to the first terminal may be in a social session together with multiple (for example, all) of the user identities corresponding to the at least one second terminal, that is, the user logged in at the first terminal may be in a group-chat social session with a plurality of the users logged in at the at least one second terminal.
In practice, the video request may be initiated in the one-to-one social session or in the group-chat social session. The video request may be a segment of text, a trigger button that starts the second terminal's determination of whether the first and second terminals match, or a trigger button that generates information accepting the video request.
These implementations confine the operations related to video synthesis to the devices of users who know each other, ensuring that the video information is not leaked and that recording proceeds smoothly, and preventing unrelated persons from interrupting the shooting and synthesis process or obtaining information about the devices or the video.
In some optional implementations of this embodiment, the height difference between the first terminal and the at least one second terminal is less than or equal to a target value.
In these optional implementations, the height difference between the first terminal and the second terminal is kept small while the video is shot. By controlling both the horizontal distance and the vertical height difference between the shooting devices, obvious differences in the size, angle, lighting and so on of the objects appearing in the synthesized video, caused by large differences in shooting position, are avoided, so that the synthesized video has a stable picture effect close to that of footage shot by a single terminal.
Step 202, in response to determining that the first terminal is matched with the at least one second terminal, shooting a video, and receiving a video to be synthesized transmitted by the at least one second terminal, wherein the video to be synthesized conforms to the shooting parameters.
In this embodiment, if it is determined that the first terminal matches each of the at least one second terminal, the execution subject may capture a video. In addition, the execution body may further receive a video to be synthesized transmitted by the at least one second terminal. The video to be synthesized can be shot by the second terminal, and the shooting time of the video shot by the first terminal and the shooting time of the video shot by the second terminal can be overlapped. In practice, the video shooting of the first terminal may be triggered by a shooting trigger operation performed by the user on the first terminal, or may be performed directly after the first terminal determines that the first terminal is matched with the at least one second terminal.
In particular, the execution subject may determine that the first terminal matches the at least one second terminal in a variety of ways. For example, the execution subject may determine the match after receiving matching information sent by another electronic device, such as the at least one second terminal. In addition, the execution subject may obtain hardware data of the at least one second terminal, either locally or from the server; the hardware data may be, for example, the device model. The execution subject may then determine, either locally or by displaying the hardware data to the user for judgment, whether the video corresponding to the video request can be shot, and thereby obtain a judgment result. If the judgment result indicates that shooting is possible, it is determined that the first terminal matches the at least one second terminal.
In some optional implementations of this embodiment, step 201 may include: sending the video request to the at least one second terminal so that the at least one second terminal determines, based on the user identity corresponding to the first terminal, whether the first terminal passes verification and/or whether the hardware parameters of the second terminal support the shooting parameters; and shooting a video in response to determining that the first terminal matches the at least one second terminal in step 202 may include: shooting the video in response to receiving matching information, where the matching information indicates that the first terminal passes verification and/or that the hardware parameters of the second terminal support the shooting parameters.
In these alternative implementations, each second terminal may determine whether the first terminal and the second terminal match, and if the first terminal and the second terminal match, generate matching information indicating the matching relationship, and send the matching information to the first terminal. In this way, the first terminal may capture a video after receiving the matching information sent by the at least one second terminal.
Specifically, the second terminal may determine whether the first terminal passes authentication based on the user identity corresponding to the first terminal, and determine a match when the first terminal passes authentication. Here, if the user identity corresponding to the first terminal and the user identity corresponding to the second terminal are in a friend relationship and/or in the same social session, the first terminal may be deemed authenticated. The second terminal may also determine a match when its hardware parameters support the shooting parameters, or only when both conditions hold, that is, authentication passes and the shooting parameters are supported. The hardware parameters may include, for example, the resolution of the camera and performance parameters of the processor.
These implementations judge whether the first terminal matches the second terminal from several aspects, such as identity verification and hardware parameters, which improves the accuracy of the judgment, prevents unrelated persons from interrupting the shooting and synthesis process, and, by constraining the hardware parameters, ensures a higher video quality.
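A minimal sketch, under the assumption that each second terminal reports a boolean match result, of how the first terminal might gate shooting on the matching information described above; the data structure and function names are illustrative and not part of this application.

```python
# Sketch: start shooting only when every second terminal has reported a match
# (i.e. verification passed and/or its hardware supports the shooting parameters).

def all_terminals_matched(matching_info: dict) -> bool:
    """matching_info maps a second-terminal identifier to its reported match result."""
    return bool(matching_info) and all(matching_info.values())

matching_info = {"terminal_102": True, "terminal_103": True}

if all_terminals_matched(matching_info):
    # Begin local capture and start receiving the videos to be synthesized.
    pass
```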
Step 203, synthesizing the shot video and the at least one video to be synthesized to obtain a panoramic video.
In this embodiment, the execution subject may synthesize the shot video and the received at least one video to be synthesized into a panoramic video. The panoramic video covers a larger field-of-view angle and contains more shooting content. In practice, the execution subject may perform the video synthesis in various ways; for example, it may directly connect the videos end to end.
In some optional implementations of this embodiment, synthesizing the shot video and the at least one video to be synthesized in step 203 may include: determining, for the shot video and each video to be synthesized in the at least one video to be synthesized, the coincident video frames, where a coincident video frame is a video frame whose similarity is larger than a preset threshold; and synthesizing the shot video and the at least one video to be synthesized with the coincident video frames as a reference.
In these optional implementations, the execution subject may determine, between the video shot by the first terminal and each video in the at least one video to be synthesized, the video frames that coincide. Then, for each video to be synthesized, the execution subject may synthesize the video shot by the first terminal with that video, using the coincident video frame in the video to be synthesized as a reference.
Specifically, for each video frame of the video shot by the first terminal, the execution subject may search the video to be synthesized for the frame with the largest similarity that also exceeds the similarity threshold, and treat the pair as the corresponding coincident video frames of the shot video and the video to be synthesized. Conversely, the execution subject may also run this search in the video shot by the first terminal for each video frame of the video to be synthesized to obtain the coincident video frames.
In practice, the execution subject may perform the synthesis with the coincident video frames as references in a variety of ways. For example, the coincident video frames in the video to be synthesized may be removed, and the remaining frames of the video to be synthesized connected to the video shot by the first terminal. Alternatively, the coincident video frames in the video shot by the first terminal may be removed, and the remaining frames of that video connected to the video to be synthesized.
These implementations can accurately find the coincident video frames by similarity, thereby synthesizing an accurate panoramic video based on the coincident video frames.
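The following sketch illustrates one way the coincident-frame search and splicing could be implemented; the similarity measure (cosine similarity of raw pixel values), the threshold and the helper names are assumptions chosen for the example, since the application does not fix a particular metric.

```python
import numpy as np

def frame_similarity(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    # Cosine similarity of the flattened pixel values; one of many possible measures.
    a = frame_a.astype(np.float32).ravel()
    b = frame_b.astype(np.float32).ravel()
    denom = float(np.linalg.norm(a) * np.linalg.norm(b))
    return float(a @ b) / denom if denom else 0.0

def find_coincident_frames(shot_video, video_to_synthesize, threshold=0.95):
    """Return (index_in_shot, index_in_other) pairs whose similarity is the largest
    for that shot frame and exceeds the preset threshold."""
    pairs = []
    for i, frame_a in enumerate(shot_video):
        best_j, best_sim = None, threshold
        for j, frame_b in enumerate(video_to_synthesize):
            sim = frame_similarity(frame_a, frame_b)
            if sim > best_sim:
                best_j, best_sim = j, sim
        if best_j is not None:
            pairs.append((i, best_j))
    return pairs

def splice(shot_video, video_to_synthesize, pairs):
    # One of the strategies described above: drop the coincident frames from the
    # video to be synthesized and connect the remaining frames to the shot video.
    coincident = {j for _, j in pairs}
    remaining = [f for j, f in enumerate(video_to_synthesize) if j not in coincident]
    return list(shot_video) + remaining
```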
In some optional application scenarios of these implementations, synthesizing the shot video and the at least one video to be synthesized with the coincident video frames as a reference may include: performing image fusion on each pair of corresponding coincident video frames in the shot video and the video to be synthesized.
In these optional application scenarios, the execution subject may perform image fusion on a coincident video frame in the video shot by the first terminal and the corresponding coincident video frame in the video to be synthesized.
These application scenarios make the synthesized video more natural and smooth, avoiding an abrupt, stiff transition at the splice point.
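As a sketch of the image-fusion step (assuming OpenCV is available and that the two coincident frames share the same size and type; the equal blend weights are an arbitrary choice for the example):

```python
import cv2

def fuse_coincident_frames(frame_shot, frame_to_synthesize, alpha=0.5):
    # Weighted blend of a pair of coincident frames so that the splice point
    # transitions smoothly instead of cutting abruptly between the two sources.
    return cv2.addWeighted(frame_shot, alpha, frame_to_synthesize, 1.0 - alpha, 0.0)
```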
In some optional implementations of this embodiment, the method may further include: publishing the panoramic video to a social space of the target social platform.
In these alternative implementations, the execution subject may post the synthesized panoramic video in the social space of the target social platform. The social space may be, for example, a social conversation, or may be a space for posting personal status information or comments, etc. In particular, panoramic video may be published to social spaces as posts or live video.
These implementations allow the panoramic video to be published in the social space so that users can share the synthesized video, broadening the applicability of the scheme.
In some optional implementations of this embodiment, the method may further include: determining the bandwidth required for real-time transmission of a video shot with the shooting parameters; and selecting a communication link conforming to the bandwidth from the candidate communication links as a target communication link and sending target communication link information indicating the target communication link to the at least one second terminal, where each second terminal has a corresponding target communication link. Receiving the video to be synthesized transmitted by the at least one second terminal may then include: receiving the video to be synthesized transmitted by the at least one second terminal in real time through the target communication link.
In these alternative implementations, the executing entity may determine a bandwidth required for the second terminal to transmit the video in real time by using the shooting parameter, select a communication link conforming to the bandwidth from the candidate communication links as a target communication link, and generate target communication link information indicating the target communication link. Thereafter, the executing entity may transmit the target communication link information to the at least one second terminal. In particular, the first terminal may determine a target communication link for video transmission with each of the second terminals. In this way, each second terminal can transmit the video to be synthesized to the first terminal in real time through the corresponding target communication link.
The candidate communication links here are the communication links that the first terminal is able to detect, for example a local area network or a mobile data network (such as a 3G, 4G or 5G network).
These implementations determine, from the shooting parameters, the bandwidth required for real-time transmission of the video, so that the second terminal can transmit the video in real time and the first terminal can synthesize the panoramic video promptly, improving the efficiency of panoramic video generation.
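A minimal sketch of how the required bandwidth might be estimated from the shooting parameters; the protocol-overhead factor is an assumed illustration value, not a figure given in this application.

```python
def required_bandwidth_kbps(shooting_parameters: dict, overhead: float = 1.2) -> float:
    # Real-time transmission must at least carry the encoded video and audio
    # bitrates; a small factor is added here for transport overhead.
    video_kbps = shooting_parameters.get("video_bitrate_kbps", 0)
    audio_kbps = shooting_parameters.get("audio_bitrate_kbps", 0)
    return (video_kbps + audio_kbps) * overhead

# Example with the hypothetical parameters from the earlier sketch:
bandwidth = required_bandwidth_kbps({"video_bitrate_kbps": 8000, "audio_bitrate_kbps": 128})
```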
In some optional application scenarios of these implementations, selecting a communication link conforming to the bandwidth from the candidate communication links as the target communication link may include: sending a communication link information request to the at least one second terminal; receiving available communication link information returned by the at least one second terminal, the available communication link information indicating the available communication links of each second terminal; and, for each of the at least one second terminal, selecting from the candidate communication links a link that is available to both the first terminal and that second terminal and that conforms to the bandwidth, as the target communication link.
In these optional application scenarios, the execution subject may send a communication link information request to each second terminal to request that it return available communication link information indicating its available communication links. The execution subject may then receive the available communication link information returned by each second terminal and determine a target communication link for each second terminal, taking its own available communication links into account.
In particular, the target communication link for a second terminal is not only a communication link available to that second terminal but also one available to the first terminal. In addition, the bandwidth of the target communication link is greater than or equal to the previously determined bandwidth required for real-time transmission.
These implementations take into account both the available communication links of the first and second terminals and the bandwidth requirement, selecting a communication link that can support real-time transmission and ensuring that the video is transmitted smoothly.
In some optional cases of these application scenarios, selecting from the candidate communication links a link that is available to both the first terminal and the second terminal and that conforms to the bandwidth as the target communication link includes: determining, from the candidate communication links, the links that are available to both the first terminal and the second terminal and that conform to the bandwidth, as target candidate communication links; and selecting, from the target candidate communication links, the link with the lowest corresponding cost as the target communication link.
In these optional cases, the links available to both the first terminal and the second terminal that meet the bandwidth requirement are determined as target candidate communication links. Then, when there are two or more target candidate communication links, the one with the lowest cost among them is taken as the target communication link.
In particular, the cost may be a quantified measure of what transmission over the communication link costs. In practice, the cost of a communication link may be expressed as the price of using it, so the execution subject may select the communication link with the lowest price. For example, if using the local area network is free, the execution subject may take the local area network as the target communication link.
In these cases, the execution subject can automatically select a low-priced communication link for the user, reducing the cost incurred by video transmission.
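Putting the selection rules above together, a sketch with an assumed data layout (link name mapped to bandwidth and usage cost) might look as follows; none of these structures are mandated by this application.

```python
def select_target_link(first_links: dict, second_links: dict, required_kbps: float):
    """Pick, for one second terminal, a link available to both terminals that meets
    the required bandwidth, preferring the lowest-cost link (e.g. a free LAN over 4G/5G)."""
    candidates = []
    for name, info in first_links.items():
        if name not in second_links:
            continue  # must be available to both the first and the second terminal
        usable_kbps = min(info["bandwidth_kbps"], second_links[name]["bandwidth_kbps"])
        if usable_kbps >= required_kbps:
            candidates.append((info["cost"], name))
    return min(candidates)[1] if candidates else None

first_links = {"lan": {"bandwidth_kbps": 50000, "cost": 0},
               "5g":  {"bandwidth_kbps": 100000, "cost": 10}}
second_links = {"lan": {"bandwidth_kbps": 50000, "cost": 0},
                "5g":  {"bandwidth_kbps": 100000, "cost": 10}}

target_link = select_target_link(first_links, second_links, required_kbps=9754)  # -> "lan"
```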
According to the method provided by this embodiment of the application, the videos shot by the individual terminals are synthesized, combining videos with several field-of-view angles into a panoramic video with a more comprehensive field of view. In addition, by matching the first terminal with the second terminal, the embodiment prevents an unknown device from joining the shooting and synthesis process, ensuring the information security of the devices and the smooth operation of video synthesis.
With further reference to fig. 3, a flow 300 of yet another embodiment of a video generation method is shown. The process 300 of the video generation method is applied to the second terminal, and the method includes the following steps:
step 301, receiving a video request sent by a first terminal, wherein the video request carries shooting parameters and a user identity corresponding to the first terminal, a distance between a position of a second terminal and a position of the first terminal is smaller than a target distance, and the user identity corresponding to the first terminal is different from the user identity corresponding to the second terminal; step 302, determining whether the first terminal is matched with the second terminal; step 303, if the matching is determined, in response to the detection of the shooting start operation, shooting a video meeting the shooting parameters and serving as a video to be synthesized, and transmitting the video to be synthesized to the first terminal, so that the first terminal synthesizes the video shot by the first terminal with the video to be synthesized to obtain the panoramic video.
In this embodiment, an execution subject (e.g., a terminal device shown in fig. 1) on which the video generation method operates may receive a video request transmitted by a first terminal and determine whether the first terminal and a second terminal are matched. If a match is determined, a video that meets the shooting parameters described above may be shot as a video to be synthesized in the event that a shooting start operation by the user is detected. Therefore, the first terminal can synthesize the video shot by the first terminal and the at least one video to be synthesized to obtain the panoramic video.
In some optional implementations of this embodiment, the user identity corresponding to the first terminal and the user identity corresponding to the at least one second terminal are in a friend relationship on the target social platform and are in the same social session on the target social platform; and receiving the video request sent by the first terminal may include: receiving, within the social session, the video request initiated by the first terminal.
In these alternative implementations, the execution subject may receive and display the video request after the first terminal initiates the video request in the social session.
These implementations confine the exchange of information related to video synthesis to the devices of users who know each other, ensuring that the video information is not leaked and that recording proceeds smoothly, and preventing unrelated persons from interrupting the shooting and synthesis process or obtaining information about the video.
In some optional implementations of this embodiment, the video request further includes the user identity corresponding to the first terminal; and step 302 may include: determining whether the first terminal passes verification based on the user identity corresponding to the first terminal; and/or determining whether the hardware parameters of the second terminal support the shooting parameters.
In these alternative implementations, the execution subject may determine whether the first terminal passes the verification, and may further determine whether the hardware parameter of the second terminal supports the shooting parameter.
These implementations judge whether the first terminal matches the second terminal from several aspects, such as identity verification and hardware parameters, which improves the accuracy of the judgment, prevents unrelated persons from interrupting the shooting and synthesis process, and, by constraining the hardware parameters, ensures a higher video quality.
In some optional application scenarios of these implementations, determining whether the hardware parameters of the second terminal support the shooting parameters may include: determining, if the first terminal is confirmed to pass verification, whether the hardware parameters of the second terminal support the shooting parameters.
In these optional application scenarios, the execution subject may first determine whether the first terminal passes the verification, and if the first terminal passes the verification, then determine whether the hardware parameter of the second terminal supports the shooting parameter. And determining that the first terminal is matched with the second terminal under the condition that the hardware parameter of the second terminal supports the shooting parameter.
These application scenarios perform the hardware-parameter check only after the first terminal is confirmed to pass verification, so that pointless hardware-parameter checks on the device are avoided when the first terminal belongs to an unknown person.
In some optional application scenarios of these implementations, determining whether the first terminal passes verification based on the user identity corresponding to the first terminal may include: determining that the first terminal passes verification if the user identity corresponding to the first terminal and the user identity corresponding to the second terminal are determined to be in a friend relationship and/or in the same social session.
In these optional application scenarios, the execution subject may determine whether the user identities corresponding to the first terminal and the second terminal are in a friend relationship and/or in the same social session. Two user identities being in a friend relationship means that the registered users they correspond to are friends on the target social platform.
These application scenarios use the friend relationship and the social session to automatically judge whether the user of the first terminal and the user of the second terminal know each other, so that whether the first terminal passes verification is determined accurately.
In some optional application scenarios of these implementations, determining whether the first terminal passes verification based on the user identity corresponding to the first terminal may include: displaying the user identity corresponding to the first terminal, where the user identity corresponding to the first terminal is a registered user identity of the target social platform; and determining that the first terminal passes verification in response to detecting a verification-passing operation corresponding to the displayed user identity.
In these optional application scenarios, the execution subject may display the user identity corresponding to the first terminal to the user of the second terminal, so that this user can judge from the identity whether the first terminal should pass verification. If the user considers that the first terminal corresponding to the identity can pass verification, the user performs the verification-passing operation. The verification-passing operation corresponds to the displayed user identity, that is, it indicates that the first terminal corresponding to the displayed identity passes verification; in particular, the user identity and a button for receiving the verification-passing operation may be displayed on the same page.
These application scenarios display the user identity of the first terminal so that the user of the second terminal can judge it and indicate whether the first terminal passes verification, making the verification result better match the user's intention.
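A minimal sketch of the second terminal's matching decision, combining the verification paths and the hardware check described above; the predicate names, the data layout and the ordering of the checks are assumptions made for the example.

```python
def first_terminal_matches(request: dict, local_state: dict) -> bool:
    # Verification: the requesting identity is a friend of, or shares a social
    # session with, the user logged in on this (second) terminal, or the local
    # user explicitly confirmed the displayed identity.
    identity = request["user_identity"]
    verified = (
        identity in local_state["friend_identities"]
        or identity in local_state["session_member_identities"]
        or local_state.get("user_confirmed_identity", False)
    )
    if not verified:
        return False  # skip the hardware check for a device of an unknown person

    # Hardware check: only after verification, test whether the local camera and
    # processor can support the requested shooting parameters.
    params = request["shooting_parameters"]
    hw = local_state["hardware"]
    return (hw["max_width"] >= params["width"]
            and hw["max_height"] >= params["height"]
            and hw["max_frame_rate"] >= params["frame_rate"])
```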
In some optional implementations of this embodiment, determining a match in step 303 may include: if a match is determined, sending matching information so that the first terminal shoots a video in response to receiving the matching information.
In these alternative implementations, the execution subject may send matching information, so that the first terminal determines that the first terminal matches the second terminal in response to receiving the matching information, thereby capturing the video. Specifically, the execution body may directly send the matching information to the first terminal. In addition, the execution body may send matching information to the server, so that the server forwards the matching information to the first terminal.
These implementations may accurately indicate to the first terminal that the first terminal matches the second terminal through the matching information.
In some optional implementations of this embodiment, transmitting the video to be synthesized to the first terminal in step 303 may include: transmitting the video to be synthesized to the first terminal in real time through the target communication link indicated by the target communication link information sent by the first terminal, where the second terminal has a corresponding target communication link.
These implementations transmit the video in real time over the target communication link, ensuring timely video synthesis and improving synthesis efficiency.
In some optional application scenarios of these implementations, the method may further include: in response to receiving a communication link information request sent by the first terminal, determining the available communication links of the second terminal; and generating available communication link information indicating those links and returning it to the first terminal, so that, for each second terminal, the first terminal selects from the candidate communication links a link that is available to both the first terminal and that second terminal and that meets the bandwidth required for real-time transmission of the video to be synthesized, as the target communication link.
In these optional application scenarios, the execution subject may, after receiving the communication link information request sent by the first terminal, determine its available communication links and generate available communication link information indicating them. The execution subject may then return the available communication link information to the first terminal, so that the first terminal can select a target communication link. At this point the video to be synthesized has not yet been generated; it refers to the video that the second terminal will shoot with the shooting parameters.
These application scenarios take into account both the available communication links of the first and second terminals and the bandwidth requirement, selecting a communication link that can support real-time transmission and ensuring that the video is transmitted smoothly.
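A sketch of the second terminal's side of this exchange; the JSON message format, the `send_frame` callback and the link description are illustrative assumptions.

```python
import json

def handle_link_info_request(local_links: dict) -> bytes:
    # Report this terminal's available links (e.g. {"lan": {...}, "5g": {...}})
    # so the first terminal can pick a target link that both sides support.
    return json.dumps({"available_links": local_links}).encode("utf-8")

def stream_video_to_first_terminal(encoded_frames, send_frame):
    # Once shooting starts, push each encoded frame over the target communication
    # link as it is produced, so the first terminal can synthesize the panorama in time.
    for frame in encoded_frames:
        send_frame(frame)
```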
With further reference to fig. 4, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of a video generating apparatus, which corresponds to the embodiment of the method shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 4, the video generation apparatus 400 of the present embodiment is applied to a first terminal and includes: a sending unit 401, a shooting unit 402, and a synthesizing unit 403. The sending unit 401 is configured to send a video request to at least one second terminal, where the video request carries shooting parameters and a user identity corresponding to the first terminal, the distance between the position of the first terminal and the position of the at least one second terminal is smaller than a target distance, and the user identity corresponding to the first terminal is different from the user identity corresponding to the second terminal; the shooting unit 402 is configured to shoot a video in response to determining that the first terminal matches the at least one second terminal, and to receive a video to be synthesized transmitted by the at least one second terminal, where the video to be synthesized conforms to the shooting parameters; and the synthesizing unit 403 is configured to synthesize the shot video and the at least one video to be synthesized to obtain a panoramic video.
In some embodiments, the sending unit 401 of the video generating apparatus 400 may send a video request to at least one second terminal. Here, the distance between the first terminal and the second terminal is small so that the videos shot by the two terminals can be well combined. The shooting parameters are parameters that the hardware of the device must support in order to shoot the video corresponding to the video request, and may include the resolution of the camera, the bitrate of the video (including the bitrate of the audio in the video), and/or the frame rate of the video, and so on.
In some embodiments, the shooting unit 402 may shoot the video if it is determined that the first terminal matches each of the at least one second terminal. In addition, the shooting unit 402 may receive the video to be synthesized transmitted by the at least one second terminal. The video to be synthesized may be shot by the second terminal, and the shooting time of the video shot by the first terminal and that of the video shot by the second terminal may overlap. In practice, the video shooting of the first terminal may be triggered by a shooting trigger operation performed by the user on the first terminal, or may start directly after the first terminal determines that it matches the at least one second terminal.
In some embodiments, the synthesizing unit 403 may synthesize the shot video and the received at least one video to be synthesized into a panoramic video. The panoramic video covers a larger field-of-view angle and contains more shooting content. In practice, the synthesizing unit 403 may perform the video synthesis in various ways; for example, it may directly connect the videos end to end.
With further reference to fig. 5, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of a video generating apparatus, which corresponds to the above-described method embodiment applied to the second terminal, and which may be specifically applied to various electronic devices.
As shown in fig. 5, the video generating apparatus 500 of the present embodiment is applied to a second terminal and includes: a receiving unit 501, a determining unit 502 and a transmitting unit 503. The receiving unit 501 is configured to receive a video request sent by a first terminal, where the video request carries shooting parameters and a user identity corresponding to the first terminal, the distance between the position of the second terminal and the position of the first terminal is smaller than a target distance, and the user identity corresponding to the first terminal is different from the user identity corresponding to the second terminal; the determining unit 502 is configured to determine whether the first terminal and the second terminal match; and the transmitting unit 503 is configured to, if a match is determined and in response to detecting a shooting start operation, shoot a video that conforms to the shooting parameters as the video to be synthesized and transmit the video to be synthesized to the first terminal, so that the first terminal synthesizes the video it has shot with the video to be synthesized to obtain a panoramic video.
In some embodiments, the receiving unit 501 may receive the video request sent by the first terminal, and the determining unit 502 may determine whether the first terminal and the second terminal match. If a match is determined, the transmitting unit 503 may, when a shooting start operation performed by the user is detected, shoot a video that conforms to the shooting parameters as the video to be synthesized. The first terminal can then synthesize the video it has shot with the at least one video to be synthesized to obtain the panoramic video.
As shown in fig. 6, the electronic device 600 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 602 or a program loaded from a storage means 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the electronic device 600. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be alternatively implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.

It should be noted that the computer readable medium of the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. A computer readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.

In the embodiments of the present disclosure, a computer readable signal medium may, in contrast, include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, which may be described, for example, as: a processor comprising a transmitting unit, a photographing unit, and a synthesizing unit. The names of these units do not, in some cases, limit the units themselves; for example, the sending unit may also be described as "a unit that sends a video request to at least one second terminal".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be separate and not assembled into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: sending a video request to at least one second terminal, wherein the video request carries shooting parameters and a user identity corresponding to the first terminal, the distance between the position of the first terminal and the position of the at least one second terminal is smaller than a target distance, and the user identity corresponding to the first terminal is different from the user identity corresponding to the second terminal; in response to the fact that the first terminal is matched with the at least one second terminal, shooting videos and receiving videos to be synthesized transmitted by the at least one second terminal, wherein the videos to be synthesized conform to shooting parameters; and synthesizing the shot video and at least one video to be synthesized to obtain the panoramic video.
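As a rough, non-normative sketch of the first-terminal flow just listed (send the request, shoot on a match, receive the videos to be synthesized, then synthesize), the snippet below takes every terminal-specific operation as a caller-supplied placeholder; only the ordering mirrors the steps above.

    def first_terminal_flow(request, second_terminals, send_request, shoot, receive, synthesize):
        # send_request returns the matching information (truthy on a match).
        matches = [send_request(terminal, request) for terminal in second_terminals]
        if not all(matches):
            return None
        own_video = shoot(request["params"])                           # video shot by the first terminal
        videos_to_synthesize = [receive(t) for t in second_terminals]  # transmitted by each second terminal
        return synthesize(own_video, videos_to_synthesize)             # panoramic video

    # Toy usage with trivial stand-ins for the real operations.
    panorama = first_terminal_flow(
        {"params": {"frame_rate": 30}}, ["terminal-2"],
        send_request=lambda t, r: True,
        shoot=lambda p: ["frame-a"],
        receive=lambda t: ["frame-b"],
        synthesize=lambda own, others: own + [f for v in others for f in v])
    print(panorama)   # ['frame-a', 'frame-b']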
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be present separately and not assembled into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: receiving a video request sent by a first terminal, wherein the video request carries shooting parameters and a user identity corresponding to the first terminal, the distance between the position of a second terminal and the position of the first terminal is smaller than a target distance, and the user identity corresponding to the first terminal is different from the user identity corresponding to the second terminal; determining whether the first terminal and the second terminal are matched; and if the matching is determined, in response to the detection of the shooting starting operation, shooting the video which accords with the shooting parameters and serves as the video to be synthesized, and transmitting the video to be synthesized to the first terminal so that the first terminal synthesizes the video shot by the first terminal with the video to be synthesized to obtain the panoramic video.
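The matching determination mentioned in the flow above (and elaborated in claims 10, 12 and 13 below) could be sketched as follows; display_identity and await_confirmation stand in for the second terminal's user interface and are hypothetical placeholders, and the hardware capability fields are arbitrary examples.

    def terminals_match(first_user_id, shooting_params, hardware_caps,
                        display_identity, await_confirmation):
        # Show the registered identity of the first terminal and wait for the
        # user's verification-passing operation (cf. claim 13).
        display_identity(first_user_id)
        if not await_confirmation():
            return False
        # Only after verification, check that the hardware parameters of the
        # second terminal support the requested shooting parameters (cf. claim 12).
        return (hardware_caps["max_width"] >= shooting_params["width"]
                and hardware_caps["max_frame_rate"] >= shooting_params["frame_rate"])

    ok = terminals_match("user_a", {"width": 1920, "frame_rate": 30},
                         {"max_width": 3840, "max_frame_rate": 60},
                         display_identity=print, await_confirmation=lambda: True)
    print(ok)   # True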
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (18)

1. A video generation method for a first terminal, the method comprising:
sending a video request to at least one second terminal, so that the at least one second terminal determines, based on a user identity corresponding to the first terminal, whether the first terminal passes verification and/or determines whether hardware parameters of the second terminal support shooting parameters, wherein the video request carries the shooting parameters and the user identity corresponding to the first terminal, the distance between the position of the first terminal and the position of the at least one second terminal is smaller than a target distance, and the user identity corresponding to the first terminal is different from the user identity corresponding to the second terminal;
shooting a video in response to receiving matching information, wherein the matching information indicates that the first terminal passes the verification and/or that the hardware parameters of the second terminal support the shooting parameters, and receiving a video to be synthesized transmitted by the at least one second terminal, wherein the video to be synthesized conforms to the shooting parameters;
synthesizing the shot video and at least one video to be synthesized to obtain a panoramic video;
the user identity corresponding to the first terminal and the user identity corresponding to the at least one second terminal are friend relationship identities on a target social platform, and the user identity corresponding to the first terminal and the user identity corresponding to the at least one second terminal are in the same social session on the target social platform.
2. The method of claim 1,
wherein the sending a video request to the at least one second terminal comprises:
initiating the video request within the social session.
3. The method of claim 1, wherein the method further comprises:
and publishing the panoramic video to a social space of a target social platform.
4. The method of claim 1, wherein the method further comprises:
determining a bandwidth required for real-time transmission of a video shot by using the shooting parameters;
selecting a communication link conforming to the bandwidth from the candidate communication links as a target communication link, and sending target communication link information indicating the target communication link to the at least one second terminal, wherein each second terminal has a corresponding target communication link; and
the receiving of the video to be synthesized transmitted by the at least one second terminal includes:
and receiving the video to be synthesized transmitted by the at least one second terminal in real time through the target communication link.
5. The method of claim 4, wherein the selecting the communication link conforming to the bandwidth from the candidate communication links as the target communication link comprises:
sending a communication link information request to the at least one second terminal;
receiving available communication link information returned by the at least one second terminal, wherein the available communication link information is used for indicating available communication links of the second terminals;
and for each second terminal in the at least one second terminal, selecting an available communication link of the first terminal and the second terminal from the candidate communication links, and taking the communication link conforming to the bandwidth as the target communication link.
6. The method of claim 5, wherein the selecting an available communication link of the first terminal and the second terminal from the candidate communication links and taking the communication link conforming to the bandwidth as the target communication link comprises:
determining available communication links of the first terminal and the second terminal from candidate communication links, and taking the communication link conforming to the bandwidth as a target candidate communication link;
and selecting the communication link with the lowest corresponding value from the target candidate communication links as the target communication link.
7. The method of claim 1, wherein the synthesizing the shot video and the at least one video to be synthesized comprises: determining coincident video frames of the shot video and each video to be synthesized in the at least one video to be synthesized, wherein the coincident video frames are video frames whose similarity is greater than a preset threshold;
and synthesizing the shot video and the at least one video to be synthesized by taking the coincident video frames as a reference.
8. The method according to claim 7, wherein the synthesizing the shot video and the at least one video to be synthesized by taking the coincident video frames as a reference comprises:
carrying out image fusion on each pair of corresponding coincident video frames in the shot video and the video to be synthesized.
9. The method of claim 1, wherein a difference in height between the first terminal and the at least one second terminal is less than or equal to a target value.
10. A video generation method, for a second terminal, the method comprising:
receiving a video request sent by a first terminal, wherein the video request carries shooting parameters and a user identity corresponding to the first terminal, the distance between the position of a second terminal and the position of the first terminal is smaller than a target distance, and the user identity corresponding to the first terminal is different from the user identity corresponding to the second terminal;
determining whether the first terminal passes verification or not based on the user identity corresponding to the first terminal; and/or
determining whether the hardware parameter of the second terminal supports the shooting parameter;
if the matching is determined, in response to the detection of shooting starting operation, shooting a video which accords with the shooting parameters and serves as a video to be synthesized, and transmitting the video to be synthesized to the first terminal so that the first terminal synthesizes the video shot by the first terminal with the video to be synthesized to obtain a panoramic video;
the user identity corresponding to the first terminal and the user identity corresponding to the at least one second terminal are friend relation identities on the target social platform, and the user identity corresponding to the first terminal and the user identity corresponding to the at least one second terminal are in the same social session on the target social platform.
11. The method of claim 10, wherein the receiving a video request sent by the first terminal comprises: and receiving a video request initiated by the first terminal in the social session.
12. The method of claim 10, wherein the determining whether the hardware parameters of the second terminal support the shooting parameters comprises:
and if the first terminal is confirmed to pass the verification, determining whether the hardware parameter of the second terminal supports the shooting parameter.
13. The method of claim 10, wherein the determining whether the first terminal is authenticated based on the user identity corresponding to the first terminal comprises:
displaying a user identity corresponding to the first terminal, wherein the user identity corresponding to the first terminal is a registered user identity of a target social platform;
and determining that the first terminal passes the verification in response to detecting the verification passing operation corresponding to the displayed user identity.
14. The method of claim 10, wherein the determining a match comprises:
and if the matching is determined, sending matching information so that the first terminal shoots a video in response to receiving the matching information.
15. The method of claim 10, wherein said transmitting the video to be composed to the first terminal comprises:
and transmitting the video to be synthesized to the first terminal in real time through a target communication link indicated by the target communication link information sent by the first terminal, wherein the second terminal has a corresponding target communication link.
16. The method of claim 15, wherein the method further comprises:
in response to receiving a communication link information request sent by the first terminal, determining an available communication link of the second terminal;
and generating available communication link information indicating the available communication link and returning the available communication link information to the first terminal, so that the first terminal selects, for each second terminal, an available communication link of the first terminal and the second terminal from candidate communication links and takes the communication link meeting the bandwidth required for real-time transmission of the video to be synthesized as the target communication link.
17. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-16.
18. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method according to any one of claims 1-16.
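To illustrate the link selection recited in claims 4 to 6 above, the sketch below first estimates the bandwidth needed for real-time transmission from the shooting parameters and then, for one second terminal, keeps only the links available to both terminals that meet this bandwidth and picks the one with the lowest corresponding value; the link records, the overhead margin and the cost figures are hypothetical examples, not values prescribed by the application.

    def required_bandwidth_kbps(video_kbps, audio_kbps, overhead=0.2):
        # Rule-of-thumb estimate (assumption): payload bitrate plus a margin for
        # container and transport overhead.
        return (video_kbps + audio_kbps) * (1 + overhead)

    def select_target_link(first_links, second_links, required_kbps):
        # first_links / second_links: dicts mapping link name -> (bandwidth_kbps, value).
        candidates = [
            (value, name)
            for name, (bandwidth, value) in first_links.items()
            if name in second_links and bandwidth >= required_kbps   # available to both terminals and conforming to the bandwidth
        ]
        return min(candidates)[1] if candidates else None            # lowest corresponding value

    needed = required_bandwidth_kbps(video_kbps=4000, audio_kbps=128)   # about 4953.6 kbit/s
    first = {"wifi_direct": (100_000, 2), "bluetooth": (2_000, 1), "cellular": (50_000, 5)}
    second = {"wifi_direct": (100_000, 2), "cellular": (50_000, 5)}
    print(select_target_link(first, second, needed))   # wifi_direct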
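Claims 7 and 8 above can likewise be illustrated with a deliberately crude sketch: similarity is taken as one minus the normalized mean absolute difference between frames, and the "image fusion" is reduced to pixel-wise averaging; the threshold and the toy frames are arbitrary examples, and a real implementation would use more robust similarity measures and blending.

    import numpy as np

    def similarity(frame_a, frame_b):
        # Similarity in [0, 1]: 1 minus the normalized mean absolute difference.
        diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16)).mean()
        return 1.0 - diff / 255.0

    def find_coincident_frames(video_a, video_b, threshold=0.9):
        # Index pairs whose similarity exceeds the preset threshold.
        return [(i, j) for i, fa in enumerate(video_a)
                       for j, fb in enumerate(video_b)
                       if similarity(fa, fb) > threshold]

    def fuse(frame_a, frame_b):
        # "Image fusion" reduced to pixel-wise averaging, for illustration only.
        return ((frame_a.astype(np.uint16) + frame_b.astype(np.uint16)) // 2).astype(np.uint8)

    # Toy example: two 3-frame videos of uniform 4x4 images.
    video_a = [np.full((4, 4, 3), v, dtype=np.uint8) for v in (10, 120, 240)]
    video_b = [np.full((4, 4, 3), v, dtype=np.uint8) for v in (12, 118, 30)]
    pairs = find_coincident_frames(video_a, video_b)
    fused = [fuse(video_a[i], video_b[j]) for i, j in pairs]
    print(pairs)            # e.g. [(0, 0), (0, 2), (1, 1)]
    print(fused[0][0, 0])   # [11 11 11]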
CN201910892709.XA 2019-09-20 2019-09-20 Video generation method and device Active CN110536075B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910892709.XA CN110536075B (en) 2019-09-20 2019-09-20 Video generation method and device

Publications (2)

Publication Number Publication Date
CN110536075A CN110536075A (en) 2019-12-03
CN110536075B true CN110536075B (en) 2023-02-21

Family

ID=68669337

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910892709.XA Active CN110536075B (en) 2019-09-20 2019-09-20 Video generation method and device

Country Status (1)

Country Link
CN (1) CN110536075B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111263093B (en) * 2020-01-22 2022-04-01 维沃移动通信有限公司 Video recording method and electronic equipment
CN111726536B (en) * 2020-07-03 2024-01-05 腾讯科技(深圳)有限公司 Video generation method, device, storage medium and computer equipment
CN112528049B (en) * 2020-12-17 2023-08-08 北京达佳互联信息技术有限公司 Video synthesis method, device, electronic equipment and computer readable storage medium
WO2022226745A1 (en) * 2021-04-26 2022-11-03 深圳市大疆创新科技有限公司 Photographing method, control apparatus, photographing device, and storage medium
CN113763136B (en) * 2021-11-09 2022-03-18 武汉星巡智能科技有限公司 Intelligent order generation method for video segmentation processing based on weight change of commodity area
CN116546309B (en) * 2023-07-04 2023-10-20 广州方图科技有限公司 Multi-person photographing method and device of self-service photographing equipment, electronic equipment and storage medium

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102340891A (en) * 2011-10-12 2012-02-01 中兴通讯股份有限公司 Service switching method and device for multimode terminal
CN110096245A (en) * 2012-12-04 2019-08-06 阿巴塔科技有限公司 Distributed Synergy user interface and application projection
CN104427289A (en) * 2013-09-02 2015-03-18 联想(北京)有限公司 Information processing method and electronic device
CN103533239A (en) * 2013-09-30 2014-01-22 宇龙计算机通信科技(深圳)有限公司 Panoramic shooting method and system
CN104735348A (en) * 2015-01-30 2015-06-24 深圳市中兴移动通信有限公司 Double-camera photographing method and system
WO2016141588A1 (en) * 2015-03-12 2016-09-15 华为技术有限公司 Data transmission method and apparatus, processor and mobile terminal
CN104796610A (en) * 2015-04-20 2015-07-22 广东欧珀移动通信有限公司 Mobile terminal and camera sharing method, device and system thereof
CN105141978A (en) * 2015-08-07 2015-12-09 小米科技有限责任公司 Video access control method, video access control device and cloud server
CN105049727A (en) * 2015-08-13 2015-11-11 小米科技有限责任公司 Method, device and system for shooting panoramic image
CN105657325A (en) * 2016-02-02 2016-06-08 北京小米移动软件有限公司 Method, apparatus and system for video communication
CN105578113A (en) * 2016-02-02 2016-05-11 北京小米移动软件有限公司 Video communication method, device and system
CN106507023A (en) * 2016-10-31 2017-03-15 北京小米移动软件有限公司 The method and device processed by audio frequency and video request
CN106657620A (en) * 2016-11-30 2017-05-10 努比亚技术有限公司 Picture synthesis method and device, and mobile terminal
CN107659769A (en) * 2017-09-07 2018-02-02 维沃移动通信有限公司 A kind of image pickup method, first terminal and second terminal
CN109714607A (en) * 2017-10-26 2019-05-03 腾讯科技(深圳)有限公司 Broadcast multimedia plays the method for qualification, the method for obtaining multimedia qualification
CN108259810A (en) * 2018-03-29 2018-07-06 上海掌门科技有限公司 A kind of method of video calling, equipment and computer storage media
CN109089168A (en) * 2018-10-10 2018-12-25 腾讯科技(深圳)有限公司 Video sharing method, apparatus, system and storage medium

Also Published As

Publication number Publication date
CN110536075A (en) 2019-12-03

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant