WO2022048651A1 - Co-shooting method and apparatus, electronic device, and computer-readable storage medium - Google Patents

Co-shooting method and apparatus, electronic device, and computer-readable storage medium

Info

Publication number
WO2022048651A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
terminal
target
background image
superimposed
Prior art date
Application number
PCT/CN2021/116519
Other languages
English (en)
French (fr)
Inventor
何畔龙
施磊
周辰漫
冯凡华
张志豪
Original Assignee
北京字节跳动网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司
Priority to US18/044,062 (published as US20230336684A1)
Publication of WO2022048651A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272 Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/661 Transmitting camera control signals through networks, e.g. control via the Internet
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/24 Indexing scheme for image data processing or generation, in general, involving graphical user interfaces [GUIs]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination

Definitions

  • the present disclosure relates to the technical field of image processing, and in particular, to a co-shooting method, apparatus, electronic device, and computer-readable storage medium.
  • In a first aspect, an embodiment of the present disclosure provides a co-shooting method. The method includes: acquiring a first image collected by a first terminal and a second image collected by a second terminal; and synthesizing a third image according to the first image and the second image. In the third image, a first object in the first image and a second object in the second image are superimposed on a specified background image as foreground objects, the first object being the foreground object of the first image and the second object being the foreground object of the second image.
  • In a second aspect, an embodiment of the present disclosure provides a co-shooting apparatus. The apparatus includes: an image acquisition module, configured to acquire a first image collected by a first terminal and a second image collected by a second terminal; and an image synthesis module, configured to synthesize a third image according to the first image and the second image. In the third image, the first object in the first image and the second object in the second image are superimposed on a specified background image as foreground objects, the first object being the foreground object of the first image and the second object being the foreground object of the second image.
  • In a third aspect, embodiments of the present disclosure provide an electronic device. The electronic device includes: one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, the one or more computer programs being configured to perform the method of the first aspect above.
  • In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the computer program is invoked and executed by a processor, the method described in the first aspect above is implemented.
  • In the co-shooting method, apparatus, electronic device, and computer-readable storage medium provided by the embodiments of the present disclosure, a first image collected by a first terminal and a second image collected by a second terminal are acquired, and a third image is then synthesized according to the first image and the second image, so that in the third image the first object in the first image and the second object in the second image are superimposed on a specified background image as foreground objects, where the first object is the foreground object of the first image and the second object is the foreground object of the second image.
  • In this way, the embodiments of the present disclosure acquire the images collected by the first terminal and the second terminal and, through synthesis, superimpose the first object in the first image and the second object in the second image as foreground objects on a specified background image to obtain a third image, so that co-shooting can be achieved across multiple terminals. Even when users are in different geographical locations, they can still shoot together based on their own terminals, breaking through spatial constraints, enriching the social interaction of shooting, and improving the users' shooting experience.
  • In addition, the embodiments of the present disclosure reduce the cost of video production: images from multiple terminals can be synthesized in real time while shooting, without post-compositing, thereby improving creative efficiency.
  • FIG. 1 shows a schematic diagram of an implementation environment suitable for an embodiment of the present disclosure.
  • FIG. 2 shows a schematic flowchart of a co-shooting method provided by an embodiment of the present disclosure.
  • FIG. 3 shows a schematic flowchart of a co-shooting method provided by another embodiment of the present disclosure.
  • FIG. 4 shows a schematic diagram of a display interface provided by an exemplary embodiment of the present disclosure.
  • FIG. 5 shows another schematic diagram of a display interface provided by an exemplary embodiment of the present disclosure.
  • FIG. 6 shows a schematic flowchart of a co-shooting method provided by another embodiment of the present disclosure.
  • FIG. 7 shows another schematic diagram of a display interface provided by an exemplary embodiment of the present disclosure.
  • FIG. 8 shows a schematic flowchart of a co-shooting method provided by still another embodiment of the present disclosure.
  • FIG. 9 shows a block diagram of the modules of a co-shooting apparatus provided by an embodiment of the present disclosure.
  • FIG. 10 shows a structural block diagram of an electronic device provided by an embodiment of the present disclosure.
  • As used herein, the term “including” and variations thereof are open-ended, i.e., “including but not limited to”.
  • the term “based on” is “based at least in part on.”
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • FIG. 1 shows a schematic diagram of an implementation environment applicable to an embodiment of the present disclosure.
  • The implementation environment may include a first terminal 120 and a second terminal 140, wherein:
  • The first terminal 120 and the second terminal 140 may each be a mobile phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a wearable device, an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a laptop, an Ultra-Mobile Personal Computer (UMPC), a netbook, a Personal Digital Assistant (PDA), a dedicated camera (such as a single-lens reflex camera or a compact camera), or the like.
  • The first terminal 120 and the second terminal 140 may be two terminals of the same type or two terminals of different types, which is not limited in this embodiment of the present disclosure.
  • the first terminal 120 and the second terminal 140 respectively run a first client and a second client.
  • The first client and the second client may be client application software corresponding to a cooperative shooting platform, or other client application software with shooting functions, for example, application software corresponding to short video platforms, social platforms, and other platforms that support image capture and co-shooting functions.
  • In some embodiments, the first terminal 120 and the second terminal 140 may be directly connected through a wired or wireless network rather than through a server; in that case, the first terminal 120 and the second terminal 140 can send their respectively collected images to each other, synthesize the composite image locally, and display it.
  • Alternatively, the first terminal 120 and the second terminal 140 may communicate through a server, in which case the implementation environment involved in the embodiments of the present disclosure further includes a server 200. The server 200 may be a traditional server or a cloud server, and may be a single server, a server cluster composed of several servers, or a cloud computing service center.
  • the first terminal 120 may be connected to the second terminal 140 through the server 200, and the server 200 may be connected to the first terminal 120 and the second terminal 140 respectively through a wired network or a wireless network to realize data exchange.
  • the embodiment of the present disclosure can be applied to the above-mentioned terminal or server.
  • If the execution body is the server 200, the server 200 can synthesize the images and send the result to the first terminal 120 and the second terminal 140 for display; if the execution body is a terminal, such as the first terminal 120 and/or the second terminal 140, the terminal can perform the synthesis according to the image it collects itself and the image collected by the other party.
  • In one implementation manner, only one terminal performs the synthesis and sends the synthesized image to the other terminal; for example, the first terminal 120 synthesizes the image and sends the synthesized image to the second terminal 140. In another implementation manner, each terminal may perform the synthesis locally.
  • the first terminal 120 and/or the second terminal 140 may display the synthesized image.
  • FIG. 2 shows a schematic flowchart of a co-shooting method provided by an embodiment of the present disclosure, which can be applied to the above-mentioned terminal or server. Taking the execution subject as the first terminal as an example, the flow shown in FIG. 2 will be described in detail below.
  • the co-shooting method may include the following steps:
  • S110 Acquire the first image collected by the first terminal and the second image collected by the second terminal.
  • the first image may be an original image collected by the first terminal through the image collection device, or may be an image adjusted on the basis of the original image.
  • the image acquisition device may be a camera integrated in the first terminal, a front camera, or a rear camera; in addition, the image acquisition device may also be an external device connected to the first terminal in a wireless or wired manner, Such as an external camera, it is not limited here.
  • The second image is similar to the first image; please refer to the description of the first image, which will not be repeated here.
  • In some embodiments, the first terminal may generate a co-shooting request according to an acquired co-shooting trigger instruction input by the user, and send the co-shooting request to the second terminal, so that the second terminal sends the second image to the first terminal in response to the co-shooting request.
  • The co-shooting request is used to initiate a co-shooting invitation, asking other users to shoot an image or a video together with the user of the first terminal. If another user agrees to the invitation, that user can confirm the co-shooting request through their terminal and send the image captured by that terminal to the first terminal in response to the request.
  • In the description of the present disclosure, the first terminal refers to the initiator of the co-shooting request, the second terminal refers to the receiver of the co-shooting request, the user of the first terminal is denoted as the first user, and the user of the second terminal is denoted as the second user.
  • When the first terminal detects the co-shooting trigger instruction input by the first user, it can generate a co-shooting request according to the trigger instruction and send the co-shooting request to at least one second terminal. After confirming the co-shooting request, the second terminal sends the second image it collects to the first terminal, so that the first terminal obtains the first image collected by the first terminal and the second image collected by the second terminal.
  • The co-shooting request may be sent to the server first and then forwarded by the server to the second terminal, or it may be sent directly to the second terminal, which is not limited in this embodiment of the present disclosure.
  • The number of second terminals may be one or more; that is, the first terminal may send the co-shooting request to one or more second terminals, and may shoot together with one or more second terminals at the same time, thereby supporting multi-party co-shooting.
  • the second user may or may not be a contact of the first user.
  • For example, the second user may be a user who has a friend relationship with the first user on the same application or platform, a user who unilaterally follows the first user, a user who is unilaterally followed by the first user, or any user who has no following or friend relationship with the first user, which is not limited in this embodiment of the present disclosure.
  • In some embodiments, the screen of the first terminal may display a trigger control for the co-shooting request. If a trigger instruction for the trigger control, that is, the co-shooting trigger instruction, is detected, the co-shooting request may be generated. The first terminal may also display a sending page for the co-shooting request, and the sending page can display information about multiple candidate recipients.
  • the recipient information may include user information and/or platform information, wherein the user information may include at least one of a user's avatar and a user name, and the platform information may include at least one of a platform icon and a platform name, which is not limited herein.
  • If the first terminal detects a selection operation on the user information, the first terminal may send the co-shooting request to the terminal of the user corresponding to the selected user information; that is, the first terminal can invite designated users corresponding to designated terminals to shoot together. If the first terminal detects a selection operation on the platform information, the first terminal can send the co-shooting request to the server of the platform corresponding to the platform information, so that the platform's server forwards the co-shooting request to a plurality of unspecified second terminals, thereby enabling co-shooting with arbitrary users. An illustrative request payload is sketched below.
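  • The following is a minimal sketch of what a co-shooting request message might look like, assuming a JSON wire format; every field name here (request_id, recipients, template_id, background) is an assumption for illustration, not a protocol defined by this disclosure.

```python
import json
import uuid
from typing import Optional

def build_co_shoot_request(initiator_id: str, recipient_ids: list[str],
                           template_id: Optional[str] = None,
                           background: str = "default") -> str:
    """Illustrative co-shooting request payload (all field names assumed).

    An empty recipient list models the "unspecified second terminals"
    case, where the platform server fans the request out; template_id
    and background mirror the template identifier and background
    designation that the request may carry in later embodiments.
    """
    return json.dumps({
        "type": "co_shoot_request",
        "request_id": str(uuid.uuid4()),  # lets confirmations reference this invite
        "initiator": initiator_id,
        "recipients": recipient_ids,      # [] => forwarded to unspecified terminals
        "template_id": template_id,       # optional target co-shooting template
        "background": background,         # "default", an image id, or a terminal id
    })
```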
  • S120 Synthesize a third image according to the first image and the second image.
  • In the third image, the first object in the first image and the second object in the second image are superimposed on the specified background image as foreground objects, where the first object is the foreground object of the first image and the second object is the foreground object of the second image.
  • The specified background image may be a default image, any image selected by the user, the background image of the first image, or the background image of the second image, which is not limited in this embodiment. This provides more shooting possibilities, allows the user to easily replace the shooting background, and improves creative efficiency and interest. A compositing sketch follows.
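  • As an illustration of the synthesis in S120, the following is a minimal sketch of superimposing extracted foreground layers onto a specified background, assuming each foreground has already been segmented into a four-channel (color plus alpha) layer that fits inside the background; the function name and NumPy-based approach are illustrative, not the disclosure's actual implementation.

```python
import numpy as np

def composite(background: np.ndarray, foregrounds: list[np.ndarray],
              positions: list[tuple[int, int]]) -> np.ndarray:
    """Superimpose alpha-masked foreground layers onto a color background.

    background:  H x W x 3 uint8 image (the specified background image).
    foregrounds: H' x W' x 4 uint8 layers, e.g. the first and second objects.
    positions:   (top, left) placement of each layer on the background.
    """
    out = background.astype(np.float32)
    for layer, (top, left) in zip(foregrounds, positions):
        h, w = layer.shape[:2]
        alpha = layer[..., 3:4].astype(np.float32) / 255.0
        region = out[top:top + h, left:left + w]
        # Standard "over" blend: the foreground replaces the background
        # in proportion to its alpha mask.
        out[top:top + h, left:left + w] = (
            alpha * layer[..., :3].astype(np.float32) + (1.0 - alpha) * region
        )
    return out.astype(np.uint8)

# e.g. third = composite(bg, [first_object_layer, second_object_layer], [p1, p2])
```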
  • the background image can be specified by the second terminal.
  • the second terminal can determine the specified background image according to the input of the second user, and send the corresponding background specifying instruction to the first terminal.
  • The first terminal may determine the image indicated by the background designation instruction as the designated background image. For example, the background designation instruction may carry an image identifier according to the input of the second user, in which case the image corresponding to the image identifier is used as the designated background image; the background designation instruction may instead carry a terminal identifier, in which case the background image of the image collected by the terminal corresponding to that identifier (for example, the first image collected by the first terminal) is used as the designated background image.
  • the background image may be designated by the first terminal.
  • the first terminal may determine the background image of this co-shot according to the co-shot trigger instruction.
  • The first terminal may display one or more co-shooting controls for triggering the co-shooting trigger instruction.
  • different controls may correspond to different background images.
  • For example, control 1 may use a default image as the specified background image, control 2 may by default use the background image of the terminal that triggered the co-shooting request, and control 3 may let the user select the background image. According to the co-shooting control triggered by the user, the first terminal obtains the corresponding co-shooting trigger instruction and determines the background image of the current co-shooting accordingly.
  • In some embodiments, the specified background image may be a fourth image selected by the user, where the user may be the first user or the second user; that is, the background image may be selected by either user.
  • For example, the terminal can preset multiple images, such as images taken on rooftops, in downtown areas, in alleys, and so on; the user can then select one or more of these images as the fourth image as needed, and the fourth image is used as the designated background image for synthesis. Of course, multiple images can also be obtained through the network for the user to select, which is not limited here.
  • Specifically, the first terminal may perform image recognition processing on the first image to obtain the foreground object of the first image, that is, the first object, and perform image recognition processing on the second image to obtain the foreground object of the second image, that is, the second object.
  • In some embodiments, the first terminal and/or the second terminal may collect images against a green screen background. The terminal can perform image recognition processing on the images captured against the green screen, extract the corresponding foreground objects, and superimpose them on the specified background image to synthesize the third image. This makes background replacement convenient, enables video creation to break through the constraints of reality, lets users fully exercise their creativity, provides more creative possibilities and degrees of freedom, and helps improve the quality of users' creations.
  • For example, a group of talented teenagers may have many creative ideas (such as appearing to float on water) but lack the technical capability to realize them, hoping for a technology that breaks through the limitations of space and physical laws. With the green screen background and the co-shooting method of this embodiment, they can communicate and shoot together in real time; through real-time green-screen keying and background replacement, ideas that break through the constraints of reality are finally presented on each terminal, providing users with more creative possibilities and degrees of freedom. A keying sketch follows.
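  • For the green-screen case, foreground extraction can be done with simple chroma keying. The sketch below uses OpenCV and is a minimal illustration assuming an evenly lit green backdrop; the HSV bounds are assumed values that a real keyer would expose as tunable parameters and refine with matte cleanup.

```python
import cv2
import numpy as np

def key_green_screen(frame_bgr: np.ndarray) -> np.ndarray:
    """Return a four-channel (BGRA) layer whose alpha is 0 on green pixels."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Pixels matching the green backdrop become transparent background.
    green = cv2.inRange(hsv, np.array([40, 60, 60]), np.array([85, 255, 255]))
    layer = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2BGRA)
    layer[..., 3] = cv2.bitwise_not(green)  # foreground keeps alpha 255
    return layer
```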
  • the specified background image may also be the background image of the first image or the background image of the second image.
  • That is, the first object and the second object can be superimposed on the background image of the image collected by one of the terminals; for example, user A and user B can both take a photo together based on the image collected by user B's terminal.
  • In this case, image recognition processing may be performed only on the image that is not specified as the background image, to obtain that image's foreground object, which is then superimposed on the specified background image. For example, if the specified background image is the background image of the first image, image recognition processing can be performed only on the second image to obtain the second object, and the second object can be superimposed on the first image as a foreground object to synthesize the third image.
  • Similarly, if the specified background image is the background image of the second image, image recognition processing can be performed only on the first image to obtain the first object, and the first object can be superimposed on the second image as a foreground object to synthesize the third image.
  • Of course, image recognition processing can still be performed on both the first image and the second image to extract their respective foreground objects, that is, the first object and the second object, which are then superimposed as foreground objects on the background image of the first image or the second image.
  • In some embodiments, the first terminal and the second terminal collect the first image and the second image in real time, and the first terminal continuously acquires the images collected in real time by both terminals and synthesizes the corresponding third image from them, so that the images collected in real time by the first terminal and the second terminal are synthesized in real time.
  • After the third image is synthesized, it may be displayed, and the third image synthesized in real time may be displayed continuously, making the co-shooting process what-you-see-is-what-you-get. Therefore, when a video is shot according to the embodiment of the present disclosure, the user can watch while shooting, and a co-shot video is obtained without post-compositing, which greatly improves video production efficiency.
  • In a specific application scenario, the first terminal enters the shooting platform application, starts the camera, and selects at least one second terminal to which it sends a co-shooting request. After receiving the co-shooting request, the second terminal confirms it, establishes a connection with the first terminal, and also turns on its camera. The second terminal then starts transmitting the second image collected by its camera to the first terminal, and the first terminal likewise starts transmitting the first image collected by its camera to the second terminal, so that the images collected by the first terminal and the second terminal can be seen on each terminal and are finally recorded to form a video.
  • In this way, real-time co-shooting across multiple terminals is realized, which not only provides a new social shooting method but also enables users to watch while shooting; the co-shot video is available as soon as shooting ends, without post-compositing, which improves co-shooting efficiency.
  • In the co-shooting method provided by this embodiment, the first image collected by the first terminal and the second image collected by the second terminal are obtained, and a third image is then synthesized according to the first image and the second image, so that in the third image the first object in the first image and the second object in the second image are superimposed on the specified background image as foreground objects, where the first object is the foreground object of the first image and the second object is the foreground object of the second image.
  • In this way, the embodiment of the present disclosure acquires the images collected by the first terminal and the second terminal and, through synthesis, superimposes the first object in the first image and the second object in the second image as foreground objects on a specified background image to obtain a third image, so that co-shooting can be achieved across multiple terminals. Even when users are in different geographical locations, they can still shoot together based on their own terminals, breaking through spatial constraints, enriching the social interaction of shooting, and improving the users' shooting experience.
  • In addition, the embodiments of the present disclosure reduce the cost of video production: images from multiple terminals can be synthesized in real time while shooting, without post-compositing, thereby improving creative efficiency.
  • In addition, the co-shooting method provided by the embodiment of the present disclosure may also be executed on the server. After the first terminal and at least one second terminal confirm the co-shooting, the server can acquire the first and second images collected in real time by the first terminal and the second terminal, synthesize them into a composite image, and send the composite image to the first terminal and the at least one second terminal, so that the first terminal and the at least one second terminal display the composite image obtained by this synthesis.
  • Meanwhile, the server keeps receiving the newly collected images sent by the first and second terminals, synthesizes them into a new composite image, and sends the new composite image to the first terminal and the at least one second terminal for display, and so on, until the server receives an instruction to end the shooting and terminates the current co-shooting. In this way, the server synthesizes the images collected in real time by the first and second terminals and sends the composite images back for real-time display, realizing real-time co-shooting across multiple terminals: it not only provides a new social shooting method but also allows users to watch while shooting, and the co-shot video is available as soon as shooting ends without post-compositing, so co-shooting efficiency is improved. A sketch of this server loop follows.
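  • The following is a minimal sketch of the server-side loop just described, assuming a session object that exposes frame receipt, template lookup, and broadcast; all of these helper names, as well as the reuse of the composite and key_green_screen sketches above, are illustrative assumptions rather than the disclosure's actual interfaces.

```python
def run_co_shoot_session(session) -> None:
    """Illustrative server loop: pull the latest frame from every
    participating terminal, synthesize the composite image, and push it
    back to all participants, until an end-of-shooting instruction arrives.
    """
    while not session.end_requested():           # instruction to end the shooting
        frames = session.latest_frames()         # one color frame per terminal
        layers = [key_green_screen(f) for f in frames]
        positions = session.template_positions() # from the target co-shooting template
        composite_image = composite(session.background(), layers, positions)
        session.broadcast(composite_image)       # every terminal displays it
```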
  • In some embodiments, the specified background image may be an image selected by the user, so that co-shooting can be based on any background; this meets users' co-shooting needs and enables richer co-shooting.
  • FIG. 3 shows a schematic flowchart of a co-shooting method provided by another embodiment of the present disclosure.
  • The co-shooting method may include:
  • S210 Acquire the first image collected by the first terminal and the second image collected by the second terminal.
  • As an implementation manner, the first terminal may generate a co-shooting request according to the acquired co-shooting trigger instruction input by the user, and send the co-shooting request to the second terminal, so that the second terminal sends the second image to the first terminal in response to the co-shooting request.
  • When the first terminal needs to initiate a co-shooting, it can send a co-shooting request to at least one second terminal.
  • In some embodiments, the co-shooting request may be sent to the at least one second terminal through the server; for example, according to the first user's selection of at least one second user, the first terminal may send the server a co-shooting request carrying the terminal identifiers of the selected second users' terminals, so that the server sends the co-shooting request to the second terminal corresponding to each terminal identifier.
  • The co-shooting request can also be sent directly to the second terminal. For example, the first terminal can send the co-shooting request to the second terminal in the form of a link, a QR code, or another form, and the second terminal can obtain the co-shooting request by clicking the link, scanning the QR code, or other means.
  • In an exemplary embodiment, when the first user wants to co-shoot with at least one second user, the first user can select at least one second user in social software installed on the first terminal (application software corresponding to a social platform, such as instant messaging software) and click the corresponding control in the function menu, so that the first terminal obtains the corresponding co-shooting trigger instruction, generates the co-shooting request according to it, and sends the co-shooting request to the second user's second terminal through the server.
  • After receiving the co-shooting request, the second terminal can pop up a corresponding prompt on its display interface, such as “The first user requests to co-shoot with you; please select ‘Reject’ or ‘Join’”, and the second user can tap the “Join” button to confirm the co-shooting request.
  • In another exemplary embodiment, the first terminal may be pre-installed with application software corresponding to the shooting platform, denoted as the shooting application, and the first user may open the shooting application to start the co-shooting function.
  • The shooting interface of the shooting application may display a control for triggering the co-shooting function, and a co-shooting request can be generated according to the acquired co-shooting trigger instruction corresponding to that control.
  • As one way, the first user can select at least one second user in the shooting application, so as to send the co-shooting request to the second terminal used by that second user. As another way, after the co-shooting request is generated, the first terminal can jump to another application and send the co-shooting request to other users through that application to invite them to co-shoot. The specific confirmation method is similar to the above and will not be repeated here.
  • S220 Determine a first superimposed position corresponding to the first object and a second superimposed position corresponding to the second object according to the determined target co-shooting template.
  • The target co-shooting template is used to determine the superimposed positions of foreground objects on the background image; the first terminal can determine the first superimposed position corresponding to the first object and the second superimposed position corresponding to the second object according to the determined target co-shooting template.
  • The target co-shooting template may be determined by a template selection instruction input by the user. Specifically, before step S220, the template selection instruction may be obtained, and the target co-shooting template may be determined from a plurality of candidate co-shooting templates according to the template selection instruction.
  • the template selection instruction may be input by the first user of the first terminal, or may be sent by the second terminal.
  • The template selection instruction may be triggered when the first terminal generates the co-shooting request. For example, when the first user inputs the co-shooting trigger instruction on the first terminal and selects the target co-shooting template on which this co-shooting is based, the generated co-shooting request may carry the template identifier of the target co-shooting template; the second terminal can obtain the template identifier from the co-shooting request to determine the corresponding target co-shooting template, so that the acquired first and second images are composited based on that template.
  • In some embodiments, the first terminal may display a shooting interface, and the shooting interface may display a template selection button.
  • When a trigger operation on the template selection button is detected, a template selection page may be displayed; the template selection page may display various template types, one or more templates under each type, and one or more candidate co-shooting templates. Then, according to the co-shooting template selected by the user, that is, the target co-shooting template, a corresponding co-shooting trigger instruction can be obtained and a corresponding co-shooting request generated, where the co-shooting request carries the template identifier of the target co-shooting template.
  • In other embodiments, when the first terminal obtains the co-shooting trigger instruction, it can display a co-shooting request generation page, which may display the template selection page or provide an entry for jumping to it. The template selection page may display at least one candidate co-shooting template for the user to select, so that the first user can choose the target co-shooting template on which this co-shooting is based when generating the co-shooting request.
  • FIG. 4 shows a schematic diagram of a display interface provided by an exemplary embodiment of the present disclosure.
  • As shown in FIG. 4, in response to a trigger operation on the template selection button, the first terminal displays the template selection page 410, and the template selection page 410 displays a plurality of co-shooting templates 411.
  • When the first terminal detects a trigger operation on the template selection page 410, it determines the co-shooting template 411 corresponding to the trigger operation, takes that template as the target co-shooting template, and obtains the corresponding template selection instruction.
  • In some embodiments, the template selection instruction may also be triggered, after the co-shooting request has been confirmed, by any one of the first terminal and the at least one second terminal participating in the co-shooting.
  • That is, at least one of the first terminal and the second terminal may display the template selection page and obtain the template selection instruction based on it, so as to determine the target co-shooting template on which this co-shooting is based.
  • As one way, the first terminal may obtain the target co-shooting template selected by the user before generating the co-shooting request, so that the generated co-shooting request carries the template identifier of the selected target co-shooting template, and then send the co-shooting request to the second terminal; the second terminal sends the second image to the first terminal in response to the co-shooting request, and subsequent synthesis can be performed based on the target co-shooting template corresponding to the template identifier.
  • As another way, the first terminal may send the template selection instruction and the co-shooting request separately. For example, after sending the co-shooting request, the first terminal displays the template selection page, prompting the user to select the target co-shooting template from multiple candidate co-shooting templates.
  • Then, the first object may be superimposed at the first superimposed position of the background image, and the second object at the second superimposed position of the background image, to synthesize the third image; a sketch of one possible template representation follows.
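  • One plausible way to represent a co-shooting template is as a list of normalized anchor slots, one per participant, which S220 resolves to pixel positions; this sketch is an assumption about the template's structure, not the disclosure's actual data format.

```python
from dataclasses import dataclass

@dataclass
class CoShootTemplate:
    """Illustrative template: one normalized (x, y) anchor per participant."""
    slots: list[tuple[float, float]]

    def positions(self, bg_h: int, bg_w: int) -> list[tuple[int, int]]:
        # Resolve normalized anchors to (top, left) pixel coordinates
        # on a background of the given size.
        return [(int(y * bg_h), int(x * bg_w)) for x, y in self.slots]

# Two-person template: first object on the left, second on the right.
two_up = CoShootTemplate(slots=[(0.10, 0.25), (0.55, 0.25)])
first_pos, second_pos = two_up.positions(1080, 1920)
```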
  • FIG. 5 shows another schematic diagram of a display interface provided by an exemplary embodiment of the present disclosure.
  • As shown in FIG. 5, user A initiates a co-shooting request, and users B, C, and D confirm it. The terminal obtains the images collected in real time by the terminals of users A, B, C, and D, determines the target co-shooting template according to the template selection instruction, determines the superimposed positions corresponding to users A, B, C, and D as positions 501, 502, 503, and 504 based on the template, and superimposes the foreground objects corresponding to users A, B, C, and D at positions 501, 502, 503, and 504, respectively, to obtain the third image. When the terminal displays the third image, the foreground objects of the images collected by the terminals of users A, B, C, and D are displayed simultaneously according to the target co-shooting template.
  • The present disclosure is not limited to the above example; the embodiments of the present disclosure may support synthesis based on various co-shooting templates, which are not exhaustively listed here for reasons of space.
  • For example, two members of a boy band are on vacation in Hainan and Thailand respectively during the National Day holiday but want to co-shoot a rabbit dance to take part in a related topic challenge. The co-shooting method provided by the embodiments of the present disclosure can break through the limitation of space and complete real-time co-shooting across locations, so that users participating in the co-shooting can ignore geographical distance and enjoy the fun of shooting together.
  • In the co-shooting method provided by this embodiment, the first superimposed position corresponding to the first object and the second superimposed position corresponding to the second object are determined according to the determined target co-shooting template; the first object is superimposed at the first superimposed position of the background image and the second object at the second superimposed position, and the third image is synthesized. Because the superimposed positions of the foreground objects on the background image can differ, a variety of co-shot images or videos with different layouts can be obtained, which enriches the social interaction of shooting, improves creative efficiency, increases the variety and quality of creation, and improves the user experience of the co-shooting process.
  • In some embodiments, the specified background image may be the background image of the first image or the background image of the second image; that is, the first user of the first terminal and the second user of the second terminal can co-shoot based on either party's background, which allows users to break through space constraints and co-shoot conveniently anytime, anywhere.
  • FIG. 6 shows a schematic flowchart of a co-shooting method provided by another embodiment of the present disclosure; the method may include:
  • S310 Acquire the first image collected by the first terminal and the second image collected by the second terminal.
  • S320 Determine the target superposition position corresponding to the target object.
  • If the specified background image is the background image of the first image, the entire first image can be used as the background image of this co-shooting; only the second object is extracted from the second image and superimposed on the first image as a foreground object to compose the third image. Similarly, if the specified background image is the background image of the second image, the entire second image can be used as the background image of this co-shooting; only the first object is extracted from the first image and superimposed on the second image as a foreground object to compose the third image.
  • In some embodiments, the target superimposed position corresponding to the target object may be determined according to the determined target co-shooting template.
  • For the determination of the target co-shooting template, reference may be made to the foregoing embodiments. The configuration information of the target co-shooting template can specify whether the co-shooting background image is the first image collected by the first terminal or the second image collected by the second terminal; the foreground object of the other image, which is not used as the background image, is determined as the target object, and the target superimposed position of the target object on the background image can also be determined according to the configuration information. That is, the determined target co-shooting template can specify which terminal's image is used as the background image, which terminal's foreground object is used as the target object, and at which position the target object is superimposed on the background image; the target object is then extracted and superimposed at its corresponding target superimposed position to synthesize the third image, realizing multi-user co-shooting based on the image collected by any terminal.
  • In this way, the third image can be synthesized with different layouts according to different target co-shooting templates.
  • In other embodiments, the target superimposed position corresponding to the target object may also be determined according to the image recognition result of the background image.
  • Since the background image in this embodiment is the background image of the first image or the second image, image recognition of the background image can be performed when determining the foreground object and background of the first image and/or the second image, and the target superimposed position corresponding to the target object can then be determined according to the recognition result.
  • Specifically, the foreground object of the background image can be identified, and based on the image position of the identified foreground object, the image position at a preset distance in a preset direction from the foreground object is determined as the target superimposed position.
  • the preset direction and the preset distance can be determined according to actual needs, can be preset by a program, or can be customized by a user, which is not limited here.
  • The preset direction can be a general direction such as horizontally right, vertically up, horizontally left, or vertically down, or a direction characterized by a specific angle; for example, taking the image position of the foreground object as the reference point and the vertical upward direction as 0°, the direction may be 30° clockwise, 80° clockwise, and so on, which is not limited here.
  • the preset distance may be in units of pixels, such as 50 pixels or 20 pixels, which is not limited here.
  • In some embodiments, the preset distance may increase as the image resolution increases. This avoids the first object and the second object being too far apart in the synthesized third image because the preset distance is too large at a low resolution, or being too close together, possibly even occluding each other, because the preset distance is too small at a high resolution, so that the synthesis effect is better.
  • For example, if the background image is the background image of the first image, the target superimposed position can be determined within the image area of the first image that does not contain the first object, or a position at the preset distance from the first object in the preset direction can be determined as the target superimposed position. For instance, the position 60 pixels away from the image position of the first object in the 30° clockwise direction is determined as the target superimposed position, and the second object is superimposed at that position of the first image to synthesize the third image; in the final third image, the second object is then displayed 60 pixels away from the first object in the 30° clockwise direction, as computed in the sketch below.
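  • The geometry in this example is straightforward to compute. The sketch below resolves a preset clockwise angle (0° = vertically up) and a preset pixel distance into a target superimposed position; the function is illustrative only, and the scaling remark in the comment reflects the preset-distance discussion above.

```python
import math

def target_overlay_position(anchor_x: int, anchor_y: int,
                            angle_deg: float, distance_px: float,
                            img_w: int, img_h: int) -> tuple[int, int]:
    """Position `distance_px` pixels from the anchor (the background's own
    foreground object), measured clockwise from vertical-up.

    Image coordinates grow right (+x) and down (+y), hence the -cos term.
    distance_px could also be scaled with image resolution, as discussed
    above, so the spacing looks consistent at any resolution.
    """
    theta = math.radians(angle_deg)
    x = anchor_x + distance_px * math.sin(theta)
    y = anchor_y - distance_px * math.cos(theta)
    # Clamp so the target position stays inside the background image.
    return (min(max(int(x), 0), img_w - 1), min(max(int(y), 0), img_h - 1))

# The example from the text: 60 pixels away, 30 degrees clockwise.
pos = target_overlay_position(500, 400, angle_deg=30.0, distance_px=60.0,
                              img_w=1920, img_h=1080)
```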
  • FIG. 7 shows yet another schematic diagram of a display interface provided by an exemplary embodiment of the present disclosure.
  • As shown in FIG. 7, user A initiates a co-shooting request, users B, C, and D confirm it, and this co-shooting designates the first image as the background image. The terminal obtains the images collected in real time by the terminals of users A, B, C, and D, determines the target co-shooting template according to the template selection instruction, determines the superimposed positions corresponding to users B, C, and D as positions 702, 703, and 704 based on the template, and superimposes the foreground objects corresponding to users B, C, and D at those positions. When the terminal displays the third image, the foreground objects of the images collected by the terminals of users A, B, C, and D are displayed simultaneously according to the target co-shooting template, with the foreground objects corresponding to users B, C, and D superimposed on the background image corresponding to user A.
  • In the co-shooting method provided by this embodiment, the first image collected by the first terminal or the second image collected by the second terminal can be used as the background image, and the foreground object of the other image, which is not used as the background image, is superimposed on it, so that users can co-shoot based on the scene captured by either terminal.
  • In some embodiments, both the first user and the second user can input special effect instructions on their own terminals, so as to add special effects to the currently captured image and further enrich the shooting gameplay.
  • FIG. 8 shows a schematic flowchart of a co-shooting method provided by another embodiment of the present disclosure.
  • the method may include:
  • S410 Acquire the first image collected by the first terminal and the second image collected by the second terminal.
  • S420 Acquire a special effect input instruction.
  • S430 Synthesize the third image according to the first image and the second image.
  • S440 Render the target special effect at the to-be-displayed position in the third image to obtain the special-effect-processed third image.
  • The special effect input instruction includes a target special effect to be added and a to-be-displayed position corresponding to the target special effect. It should be noted that the special effect input instruction can be acquired before or after the third image is synthesized; that is, step S420 can be before or after step S430, which is not limited in this embodiment.
  • Specifically, a special effect addition button may be displayed on the shooting interface of the terminal during the co-shooting process. When a trigger operation on the special effect addition button is detected, a special effect addition page may be displayed, and the special effect addition page may display multiple special effects; the special effect selected by the user is determined as the target special effect to be added.
  • The special effects may include adding rabbit ears, face-swapping effects, face-slimming effects, and other effects.
  • The aforementioned special effects may be silent, may be combined with sound effects, or may be pure sound effects; this embodiment does not limit the types of special effects that can be added. In other embodiments, special effects may also be called tools, and similar features such as filters and beautification can also be treated as a type of special effect.
  • Different special effects have different addition attributes. For example, some special effects are local special effects; that is, they are added only at a local position, so after selecting the target special effect to be added, the user needs to select a position on the shooting interface as the to-be-displayed position corresponding to the target special effect. Other special effects are global special effects; the user does not need to select a position, and the entire third image is directly used as the to-be-displayed position, as illustrated in the sketch below.
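  • The local/global distinction maps naturally onto whether the special effect carries a to-be-displayed position. The following is a minimal sketch under that assumption; modeling the effect itself as an arbitrary function over pixels is an illustrative simplification, not the disclosure's rendering pipeline.

```python
from typing import Callable, Optional
import numpy as np

def render_effect(frame: np.ndarray,
                  effect: Callable[[np.ndarray], np.ndarray],
                  position: Optional[tuple[int, int]] = None,
                  size: Optional[tuple[int, int]] = None) -> np.ndarray:
    """Render a special effect on the synthesized third image.

    A global effect has no position: the whole frame is the
    to-be-displayed position. A local effect transforms only the patch
    at (top, left) with the given size.
    """
    if position is None:
        return effect(frame)  # global effect
    assert size is not None, "a local effect needs a patch size"
    top, left = position
    h, w = size
    out = frame.copy()
    out[top:top + h, left:left + w] = effect(out[top:top + h, left:left + w])
    return out

# Example: a global warm-tone effect that scales one color channel.
warm = lambda img: np.clip(img * np.array([1.1, 1.0, 1.0]), 0, 255).astype(img.dtype)
```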
  • the third image after special effect processing can be displayed, so that the user can see the effect after adding special effects in real time, which can further improve the shooting quality.
  • In the co-shooting method provided by this embodiment, the special effect input instruction can also be acquired, and the target special effect rendered at the to-be-displayed position in the third image to obtain the rendered third image. Users are thus allowed to add special effects during the co-shooting process, and because the co-shooting implemented by the embodiments of the present disclosure is synthesized and displayed in real time, users participating in the co-shooting can see the picture with the added special effects in real time. This further enhances the social interaction and fun of co-shooting, helps improve shooting quality, and, since special effects can be added easily during co-shooting without post-production, also improves creative efficiency.
  • FIG. 9 is a block diagram of a co-shooting apparatus provided by an embodiment of the present disclosure.
  • The co-shooting apparatus 900 may be applied to a terminal or a server, and may specifically include an image acquisition module 910 and an image synthesis module 920, wherein:
  • an image acquisition module 910 configured to acquire the first image collected by the first terminal and the second image collected by the second terminal;
  • the image synthesis module 920 is configured to synthesize a third image according to the first image and the second image; in the third image, the first object in the first image and the second object in the second image are superimposed as foreground objects on the specified background image, the first object being the foreground object of the first image and the second object being the foreground object of the second image.
  • the background image is a fourth image selected by the user.
  • the background image is the background image of the first image or the background image of the second image.
  • the image synthesis module 920 includes a position determination sub-module and an overlay synthesis sub-module, wherein:
  • a position determination submodule, configured to determine a first superimposed position corresponding to the first object and a second superimposed position corresponding to the second object according to the determined target co-shooting template;
  • an overlay synthesis submodule, configured to superimpose the first object at the first superimposed position of the background image and the second object at the second superimposed position of the background image, to synthesize the third image.
  • the image synthesis module 920 includes: a target position determination submodule and a target overlay synthesis submodule, wherein:
  • the target position determination submodule is configured to determine the target superimposed position corresponding to the target object; if the background image is the background image of the first image, the target object is the second object, and if the background image is the background image of the second image, the target object is the first object;
  • the target overlay synthesis sub-module is used for overlaying the target object on the target overlay position of the background image to synthesize the third image.
  • In some embodiments, the target position determination submodule includes a template determination unit or a recognition determination unit, wherein:
  • the template determination unit is configured to determine the target superimposed position corresponding to the target object according to the determined target co-shooting template;
  • the recognition determination unit is configured to determine the target superimposed position corresponding to the target object according to the image recognition result of the background image.
  • In some embodiments, before the third image is synthesized according to the first image and the second image, the co-shooting apparatus 900 further includes a template instruction acquisition module and a template determination module, wherein:
  • the template instruction acquisition module is configured to acquire a template selection instruction;
  • the template determination module is configured to determine a target co-shooting template from a plurality of candidate co-shooting templates according to the template selection instruction, the target co-shooting template being used to determine the superimposed positions of foreground objects on the background image.
  • In some embodiments, the co-shooting apparatus 900 further includes a special effect instruction acquisition module and a special effect processing module, wherein:
  • the special effect instruction acquisition module is configured to acquire the special effect input instruction, the special effect input instruction including the target special effect to be added and the to-be-displayed position corresponding to the target special effect;
  • the special effect processing module is configured to render the target special effect at the to-be-displayed position in the third image to obtain the special-effect-processed third image.
  • in one embodiment, if the co-shooting apparatus 900 is applied to the first terminal, the co-shooting apparatus 900 further includes a co-shooting request module configured to, before the first image captured by the first terminal and the second image captured by the second terminal are acquired, generate a co-shooting request according to an acquired co-shooting trigger instruction input by the user and send the co-shooting request to the second terminal, so that the second terminal sends the second image to the first terminal in response to the co-shooting request.
  • the co-shooting apparatus in the embodiments of the present disclosure can execute the co-shooting method provided by the embodiments of the present disclosure, and its implementation principle is similar: the actions executed by the modules of the co-shooting apparatus correspond to the steps of the co-shooting method. For a detailed functional description of each module of the co-shooting apparatus, refer to the description of the corresponding co-shooting method shown above; details are not repeated here. A minimal illustrative sketch of the compositing flow these modules implement is given below.
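For illustration only, the following is a minimal sketch, in Python with NumPy, of the overlay synthesis described by the position determination and overlay synthesis submodules. The array conventions (H x W x 3 images, binary H x W masks) and all function names are assumptions for this sketch; how the foreground masks are produced (the image recognition step) and how the template chooses positions are left to the surrounding text.

```python
import numpy as np

def composite(background: np.ndarray,
              foregrounds: list[np.ndarray],
              masks: list[np.ndarray],
              positions: list[tuple[int, int]]) -> np.ndarray:
    """Paste each masked foreground onto the background at its (x, y)
    superimposition position, clipping at the image borders."""
    out = background.copy()
    for fg, mask, (x, y) in zip(foregrounds, masks, positions):
        h = min(mask.shape[0], out.shape[0] - y)
        w = min(mask.shape[1], out.shape[1] - x)
        if h <= 0 or w <= 0:
            continue  # this slot falls entirely outside the frame
        keep = mask[:h, :w] > 0          # foreground pixels of this object
        region = out[y:y + h, x:x + w]   # view into the output image
        region[keep] = fg[:h, :w][keep]  # hard (non-blended) overlay
    return out
```

Calling composite(fourth_image, [first_object, second_object], [mask1, mask2], [pos1, pos2]) would correspond to the two-position case described above; a soft alpha blend at the mask edges would be a natural refinement.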
  • FIG. 10 shows a structural block diagram of an electronic device 1000 suitable for implementing embodiments of the present disclosure.
  • the electronic devices in the embodiments of the present disclosure may include, but are not limited to, terminals such as computers and mobile phones, servers, and other such devices.
  • the electronic device shown in FIG. 10 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
  • the electronic device 1000 includes a memory and a processor, where the processor may be referred to below as the processing device 1001, and the memory may include at least one of a read-only memory (ROM) 1002, a random access memory (RAM) 1003, and a storage device 1008 described below, specifically as follows:
  • the electronic device 1000 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 1001, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage device 1008 into a random access memory (RAM) 1003. The RAM 1003 also stores various programs and data required for the operation of the electronic device 1000.
  • the processing device 1001, the ROM 1002, and the RAM 1003 are connected to each other through a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
  • generally, the following devices may be connected to the I/O interface 1005: input devices 1006 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output devices 1007 including, for example, a liquid crystal display (LCD), speakers, and vibrators; storage devices 1008 including, for example, a magnetic tape and a hard disk; and a communication device 1009. The communication device 1009 may allow the electronic device 1000 to communicate wirelessly or by wire with other devices to exchange data. While FIG. 10 shows an electronic device 1000 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
  • in particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer-readable storage medium, the computer program containing program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 1009, or installed from the storage device 1008, or installed from the ROM 1002. When the computer program is executed by the processing device 1001, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
  • the above-mentioned computer-readable storage medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above.
  • more specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • in the present disclosure, a computer-readable storage medium can be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • in the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • program code embodied on a computer-readable storage medium may be transmitted using any suitable medium, including but not limited to electrical wire, optical fiber cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
  • in some implementations, the client and the server may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (for example, the Internet), and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
  • the above-mentioned computer-readable storage medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable storage medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to perform the following steps: acquiring a first image captured by the first terminal and a second image captured by the second terminal; synthesizing a third image according to the first image and the second image; in the third image, the first object in the first image and the second object in the second image are superimposed as foreground objects on the specified background image, the first object being the foreground object of the first image and the second object being the foreground object of the second image.
  • computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • the flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • modules or units involved in the embodiments of the present disclosure may be implemented in software or hardware.
  • in some cases, the name of a module or unit does not constitute a limitation on the unit itself; for example, the display module may also be described as "a module for displaying a resource uploading interface".
  • the functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chips (SOCs), complex programmable logic devices (CPLDs), and so on.
  • in the context of the present disclosure, a computer-readable storage medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The computer-readable storage medium may be a machine-readable signal medium or a machine-readable storage medium. The computer-readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • according to one or more embodiments of the present disclosure, a co-shooting method is provided, applied to a first terminal. The method includes: acquiring a first image captured by the first terminal and a second image captured by a second terminal; and synthesizing a third image according to the first image and the second image; in the third image, the first object in the first image and the second object in the second image are superimposed as foreground objects on a specified background image, the first object being the foreground object of the first image and the second object being the foreground object of the second image.
  • in one embodiment, the background image is a fourth image selected by the user.
  • in one embodiment, the background image is a background image of the first image or a background image of the second image.
  • in one embodiment, synthesizing the third image according to the first image and the second image includes: determining, according to the determined target co-shot template, a first superimposition position corresponding to the first object and a second superimposition position corresponding to the second object; and superimposing the first object on the first superimposition position of the background image and the second object on the second superimposition position of the background image to synthesize the third image.
  • in one embodiment, synthesizing the third image according to the first image and the second image includes: determining a target superimposition position corresponding to a target object, where if the background image is the background image of the first image, the target object is the second object, and if the background image is the background image of the second image, the target object is the first object; and superimposing the target object on the target superimposition position of the background image to synthesize the third image.
  • in one embodiment, determining the target superimposition position at which the target object is superimposed on the background image includes: determining the target superimposition position corresponding to the target object according to the determined target co-shot template; or determining the target superimposition position corresponding to the target object according to an image recognition result of the background image.
  • in one embodiment, before synthesizing the third image according to the first image and the second image, the method further includes: acquiring a template selection instruction; and determining the target co-shot template from a plurality of candidate co-shot templates according to the template selection instruction, the target co-shot template being used to determine the superimposition position at which the foreground object is superimposed on the background image.
  • in one embodiment, before synthesizing the third image according to the first image and the second image, the method further includes: acquiring a special effect input instruction, the special effect input instruction including a target special effect to be added and a to-be-displayed position corresponding to the target special effect; and after synthesizing the third image according to the first image and the second image, the method further includes: rendering the target special effect to the to-be-displayed position in the third image to obtain a special-effect-processed third image.
  • in one embodiment, if the co-shooting method is applied to the first terminal, before acquiring the first image captured by the first terminal and the second image captured by the second terminal, the method further includes: generating a co-shooting request according to an acquired co-shooting trigger instruction input by the user, and sending the co-shooting request to the second terminal, so that the second terminal sends the second image to the first terminal in response to the co-shooting request.
  • according to one or more embodiments of the present disclosure, a co-shooting apparatus is provided, including: an image acquisition module configured to acquire a first image captured by a first terminal and a second image captured by a second terminal; and an image synthesis module configured to synthesize a third image according to the first image and the second image; in the third image, the first object in the first image and the second object in the second image are superimposed as foreground objects on a specified background image, the first object being the foreground object of the first image and the second object being the foreground object of the second image.
  • in one embodiment, the background image is a fourth image selected by the user.
  • in one embodiment, the background image is a background image of the first image or a background image of the second image.
  • in one embodiment, the image synthesis module includes a position determination submodule and an overlay synthesis submodule, wherein: the position determination submodule is configured to determine, according to the determined target co-shot template, a first superimposition position corresponding to the first object and a second superimposition position corresponding to the second object; and the overlay synthesis submodule is configured to superimpose the first object on the first superimposition position of the background image and the second object on the second superimposition position of the background image to synthesize the third image.
  • in one embodiment, the image synthesis module includes a target position determination submodule and a target overlay synthesis submodule, wherein: the target position determination submodule is configured to determine a target superimposition position corresponding to a target object, the target object being the second object if the background image is the background image of the first image, and the first object if the background image is the background image of the second image; and the target overlay synthesis submodule is configured to superimpose the target object on the target superimposition position of the background image to synthesize the third image.
  • in one embodiment, the target position determination submodule includes a template determination unit or a recognition determination unit, wherein: the template determination unit is configured to determine the target superimposition position corresponding to the target object according to the determined target co-shot template; and the recognition determination unit is configured to determine the target superimposition position corresponding to the target object according to an image recognition result of the background image.
  • in one embodiment, the co-shooting apparatus further includes a template instruction acquisition module and a template determination module, wherein: the template instruction acquisition module is configured to acquire a template selection instruction before the third image is synthesized according to the first image and the second image; and the template determination module is configured to determine the target co-shot template from a plurality of candidate co-shot templates according to the template selection instruction, the target co-shot template being used to determine the superimposition position at which the foreground object is superimposed on the background image.
  • in one embodiment, the co-shooting apparatus further includes a special effect instruction acquisition module and a special effect processing module, wherein: the special effect instruction acquisition module is configured to acquire, before the third image is synthesized according to the first image and the second image, a special effect input instruction, the special effect input instruction including a target special effect to be added and a to-be-displayed position corresponding to the target special effect; and the special effect processing module is configured to render, after the third image is synthesized, the target special effect to the to-be-displayed position in the third image to obtain a special-effect-processed third image.
  • in one embodiment, if the co-shooting apparatus is applied to the first terminal, the co-shooting apparatus further includes a co-shooting request module configured to, before the first image captured by the first terminal and the second image captured by the second terminal are acquired, generate a co-shooting request according to an acquired co-shooting trigger instruction input by the user and send the co-shooting request to the second terminal, so that the second terminal sends the second image to the first terminal in response to the co-shooting request.


Abstract

The present disclosure provides a co-shooting method and apparatus, an electronic device, and a computer-readable storage medium, relating to the technical field of image processing. The method includes: acquiring a first image captured by a first terminal and a second image captured by a second terminal; and synthesizing a third image according to the first image and the second image; in the third image, a first object in the first image and a second object in the second image are superimposed as foreground objects on a specified background image, the first object being the foreground object of the first image and the second object being the foreground object of the second image. Implementations of the present disclosure enable two or even more parties to break through spatial limitations and shoot together from different locations.

Description

Co-shooting method and apparatus, electronic device, and computer-readable storage medium
Cross-reference to related applications
This application claims priority to the Chinese patent application No. 202010924413.4, filed on September 4, 2020 and entitled "Co-shooting method and apparatus, electronic device, and computer-readable storage medium", the entirety of which is incorporated herein by reference.
Technical field
The present disclosure relates to the technical field of image processing, and in particular to a co-shooting method, a co-shooting apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of the mobile Internet and the popularization of mobile terminals, more and more users create content on their own initiative and upload it to social platforms to share with others. Typically, content creators use the camera of a mobile terminal to shoot the images and videos they like and upload them to social platforms to share with other users. However, current co-shooting approaches are rather limited and lack interactivity.
Summary
This summary is provided to introduce concepts in a brief form; these concepts are described in detail in the detailed description that follows. This summary is not intended to identify key or essential features of the claimed technical solution, nor is it intended to be used to limit the scope of the claimed technical solution.
In a first aspect, an embodiment of the present disclosure provides a co-shooting method, including: acquiring a first image captured by a first terminal and a second image captured by a second terminal; and synthesizing a third image according to the first image and the second image; in the third image, a first object in the first image and a second object in the second image are superimposed as foreground objects on a specified background image, the first object being the foreground object of the first image and the second object being the foreground object of the second image.
In a second aspect, an embodiment of the present disclosure provides a co-shooting apparatus, including: an image acquisition module configured to acquire a first image captured by a first terminal and a second image captured by a second terminal; and an image synthesis module configured to synthesize a third image according to the first image and the second image; in the third image, a first object in the first image and a second object in the second image are superimposed as foreground objects on a specified background image, the first object being the foreground object of the first image and the second object being the foreground object of the second image.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including one or more processors, a memory, and one or more computer programs, where the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, and the one or more computer programs are configured to perform the method described in the first aspect above.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored, where the computer program, when invoked and executed by a processor, implements the method described in the first aspect above.
In the co-shooting method and apparatus, electronic device, and computer-readable storage medium provided by the embodiments of the present disclosure, a first image captured by a first terminal and a second image captured by a second terminal are acquired, and a third image is then synthesized according to the first image and the second image, so that in the third image the first object in the first image and the second object in the second image are superimposed as foreground objects on a specified background image, where the first object is the foreground object of the first image and the second object is the foreground object of the second image. The embodiments of the present disclosure thus acquire the images respectively captured by the first terminal and the second terminal and, through synthesis, superimpose the first object in the first image and the second object in the second image as foreground objects on a specified background image to obtain the third image. Co-shooting can therefore be realized across multiple terminals: even when users are in different geographic locations, each can still take part in a joint shot from their own terminal, which breaks through spatial limitations, enriches the social and interactive gameplay of shooting, and improves the users' shooting experience. In addition, the embodiments of the present disclosure reduce the cost of video production: the images of multiple terminals can be composited in real time while shooting, without post-production synthesis, which improves creation efficiency.
Brief description of the drawings
The above and other features, advantages, and aspects of the embodiments of the present disclosure will become more apparent with reference to the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that parts and elements are not necessarily drawn to scale.
FIG. 1 shows a schematic diagram of an implementation environment applicable to embodiments of the present disclosure.
FIG. 2 shows a schematic flowchart of a co-shooting method provided by an embodiment of the present disclosure.
FIG. 3 shows a schematic flowchart of a co-shooting method provided by another embodiment of the present disclosure.
FIG. 4 shows a schematic diagram of a display interface provided by an exemplary embodiment of the present disclosure.
FIG. 5 shows a schematic diagram of another display interface provided by an exemplary embodiment of the present disclosure.
FIG. 6 shows a schematic flowchart of a co-shooting method provided by yet another embodiment of the present disclosure.
FIG. 7 shows a schematic diagram of yet another display interface provided by an exemplary embodiment of the present disclosure.
FIG. 8 shows a schematic flowchart of a co-shooting method provided by still another embodiment of the present disclosure.
FIG. 9 shows a block diagram of the modules of a co-shooting apparatus provided by an embodiment of the present disclosure.
FIG. 10 shows a structural block diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed description
Embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure can be implemented in various forms and should not be construed as being limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the steps described in the method embodiments of the present disclosure may be performed in a different order and/or in parallel. Furthermore, the method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
As used herein, the term "including" and variations thereof are open-ended, i.e., "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms are given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish apparatuses, modules, or units, and are not used to limit these apparatuses, modules, or units to necessarily being different apparatuses, modules, or units, nor to limit the order or interdependence of the functions performed by these apparatuses, modules, or units.
It should be noted that the modifiers "a/an" and "a plurality of" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of the messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not used to limit the scope of these messages or information.
The technical solution of the present disclosure, and how it solves the above technical problems, are described in detail below with specific embodiments. The following specific embodiments may be combined with one another, and the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present disclosure are described below with reference to the accompanying drawings.
Referring to FIG. 1, a schematic diagram of an implementation environment applicable to the embodiments of the present disclosure is shown. The implementation environment may include a first terminal 120 and a second terminal 140, where:
The first terminal 120 and the second terminal 140 may be mobile phones, tablet computers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, wearable devices, in-vehicle devices, augmented reality (AR)/virtual reality (VR) devices, laptop computers, ultra-mobile personal computers (UMPC), netbooks, personal digital assistants (PDA), dedicated cameras (for example, single-lens reflex cameras or compact cameras), and the like. The embodiments of the present disclosure do not limit the specific type of the terminal.
In addition, the first terminal 120 and the second terminal 140 may be two terminals of the same type or of different types, which is not limited in the embodiments of the present disclosure.
A first client and a second client run on the first terminal 120 and the second terminal 140, respectively. The first client and the second client may be client application software corresponding to a cooperative shooting platform, or other client application software with a shooting function, for example, application software corresponding to platforms, such as short-video platforms and social platforms, that support image and co-shooting functions.
In some embodiments, the first terminal 120 and the second terminal 140 may also be connected directly through a wired or wireless network without going through a server; the first terminal 120 and the second terminal 140 can then send the images they each capture to one another, synthesize the composite image locally, and display it.
In other embodiments, the first terminal 120 and the second terminal 140 may also be communicatively connected through a server, in which case the implementation environment involved in the embodiments of the present disclosure may further include a server 200. The server 200 may be a traditional server or a cloud server, and may be a single server, a server cluster composed of several servers, or a cloud computing service center. The first terminal 120 may be connected to the second terminal 140 through the server 200, and the server 200 may be connected to the first terminal 120 and the second terminal 140 respectively through a wired or wireless network to exchange data.
The embodiments of the present disclosure may be applied to the above terminals or server. If the execution subject is the server, the server 200 may synthesize the image and send it to the first terminal 120 and the second terminal 140 for display; if the execution subject is a terminal, such as the first terminal 120 and/or the second terminal 140, the terminal may synthesize the image according to the image captured by itself and the image captured by the other party. As one implementation, only one of the terminals may perform the synthesis and send the resulting image to the other terminal; for example, the first terminal 120 synthesizes the image and sends the resulting image to the second terminal 140. As another implementation, each terminal may perform the synthesis locally.
In some embodiments, the first terminal 120 and/or the second terminal 140 may display the synthesized image.
The co-shooting method and apparatus, electronic device, and computer-readable storage medium provided by the embodiments of the present disclosure are described in detail below through specific embodiments.
Referring to FIG. 2, FIG. 2 shows a schematic flowchart of a co-shooting method provided by an embodiment of the present disclosure, which can be applied to the above terminal or server. Taking the first terminal as the execution subject as an example, the flow shown in FIG. 2 is described in detail below. The co-shooting method may include the following steps:
S110: Acquire a first image captured by the first terminal and a second image captured by the second terminal.
The first image may be a raw image captured by the first terminal through an image capture apparatus, or an image adjusted on the basis of the raw image. The image capture apparatus may be a camera integrated in the first terminal, which may be a front camera or a rear camera; alternatively, the image capture apparatus may be an external device connected to the first terminal wirelessly or by wire, such as an external camera, which is not limited here. The second image is similar to the first image; refer to the description of the first image, which is not repeated here.
In some embodiments, before step S110, the first terminal may generate a co-shooting request according to an acquired co-shooting trigger instruction input by the user, and send the co-shooting request to the second terminal, so that the second terminal sends the second image to the first terminal in response to the co-shooting request. The co-shooting request is used to initiate a co-shooting invitation, requesting other users to cooperate with the user of the first terminal to shoot an image or a video together. If another user agrees to the co-shooting invitation corresponding to the co-shooting request, that user can confirm the co-shooting request through their terminal and, in response to the co-shooting request, send the image they capture to the first terminal.
For ease of understanding, in the description of the present disclosure, the first terminal denotes the initiator of the co-shooting request and the second terminal denotes the receiver of the co-shooting request; the user of the first terminal is referred to as the first user, and the user of the second terminal as the second user.
If the first terminal detects a co-shooting trigger instruction input by the first user, it may generate a co-shooting request according to the co-shooting trigger instruction and send the co-shooting request to at least one second terminal. The second terminal may confirm the co-shooting request and send the second image captured by the second terminal to the first terminal, so that the first terminal can acquire the first image captured by the first terminal and the second image captured by the second terminal.
When the first terminal sends the co-shooting request to at least one second terminal, it may first send the co-shooting request to the server, which then forwards it to the second terminal, or it may send the request directly to the second terminal; this is not limited in the embodiments of the present disclosure.
There may be one or more second terminals, i.e., the first terminal may send the co-shooting request to one or more second terminals, and may also cooperatively shoot, i.e., co-shoot, with one or more second terminals at the same time. In addition, the second user may or may not be a contact of the first user; the second user may be a user who has a friend relationship with the first user on the same application or platform, may merely follow the first user unilaterally or be followed by the first user unilaterally, or may be any user with no follow or friend relationship with the first user; none of this is limited in the embodiments of the present disclosure.
In some implementations, the screen of the first terminal may display a trigger control for the co-shooting request. If a trigger instruction for the trigger control, i.e., a co-shooting trigger instruction, is detected, a co-shooting request may be generated, and the first terminal may further display a sending page for the co-shooting request, which may display multiple candidate receiver information items. The receiver information may include user information and/or platform information, where the user information may include at least one of the user's avatar and username, and the platform information may include at least one of a platform icon and a platform name, which is not limited here.
In some exemplary implementations, if the first terminal detects a selection operation on user information, the first terminal may send the co-shooting request to the terminal of the user corresponding to the selected user information, i.e., the first terminal may invite a specified terminal to shoot together, so as to invite the specified user corresponding to the specified terminal to co-shoot; if the first terminal detects a selection operation on platform information, the first terminal may send the co-shooting request to the server of the platform corresponding to the platform information, so that the platform's server sends the co-shooting request issued by the first terminal to an unspecified plurality of second terminals, thereby enabling co-shooting with anyone.
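As a sketch of the request flow just described: the message schema below (the field names, the JSON encoding, and the idea of a session identifier) is invented for illustration only; the embodiments only require that a request generated from the trigger instruction reach the second terminal, directly or via a server, and that confirmation start the exchange of images.

```python
import json
import uuid

def make_coshoot_request(initiator_id: str, template_id: str | None = None) -> str:
    """First terminal: build the co-shooting invitation to send out."""
    return json.dumps({
        "type": "coshoot_request",
        "session": str(uuid.uuid4()),   # identifies this co-shooting session
        "from": initiator_id,
        "template": template_id,        # optional target co-shot template
    })

def confirm_coshoot_request(request_msg: str, receiver_id: str) -> str:
    """Second terminal: confirm ("Join"); afterwards it starts sending
    its captured second images back to the first terminal."""
    session = json.loads(request_msg)["session"]
    return json.dumps({"type": "coshoot_confirm",
                       "session": session,
                       "from": receiver_id})
```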
S120: Synthesize a third image according to the first image and the second image. In the third image, the first object in the first image and the second object in the second image are superimposed as foreground objects on a specified background image, the first object being the foreground object of the first image and the second object being the foreground object of the second image.
The specified background image may be a default image, any image selected by the user, or the background image of the first image or of the second image, which is not limited in this embodiment. This offers more shooting possibilities, allowing users to conveniently replace the shooting background, and improves creation efficiency and fun.
In some implementations, the background image may be specified by the second terminal. As one implementation, the second terminal may determine the specified background image according to the second user's input and send a corresponding background specification instruction to the first terminal, and the first terminal may determine the image indicated by the background specification instruction as the specified background image.
The background specification instruction may carry an image identifier according to the second user's input, in which case the image corresponding to the image identifier is used as the specified background image; the background specification instruction may also carry a terminal identifier, in which case the background image of the image captured by the terminal corresponding to the terminal identifier is used as the specified background image. For example, if the terminal identifier is "device1", the background image of the first image captured by the corresponding first terminal may be used as the specified background image.
In other implementations, the background image may be specified by the first terminal. As one implementation, the first terminal may determine the background image of the current co-shooting according to the co-shooting trigger instruction. For example, the first terminal may display one or more co-shooting controls for triggering the co-shooting trigger instruction, where different controls may correspond to different background images. For instance, control 1 may use a default image as the specified background image, control 2 may by default use the background image of the terminal that triggered the co-shooting request as the specified background image, and control 3 may specify the background image according to the user's selection. Then, according to the co-shooting control triggered by the user, the first terminal can acquire the corresponding co-shooting trigger instruction, from which the background image of the current co-shooting can be determined.
In some embodiments, the specified background image may be a fourth image selected by the user, where the user may be the first user or the second user, i.e., the background image may be selected by any user. In addition, the terminal may be preset with multiple images, for example, various images shot on rooftops, in busy streets, in alleys, and elsewhere, from which the user may select one or more images as the fourth image as needed and use the fourth image as the specified background image for synthesis; of course, multiple images may also be obtained over the network for the user to choose from, which is not limited here.
In some implementations, the first terminal may perform image recognition processing on the first image to obtain the foreground object of the first image, i.e., the first object, perform image recognition processing on the second image to obtain the foreground object of the second image, i.e., the second object, and superimpose the first object and the second object as foreground objects on the specified background image, so that the third image can simultaneously contain the images captured by the first terminal and the second terminal. In this way, the limitation of space can be broken through and multi-terminal co-shooting can be realized; moreover, since the background image of the co-shooting can be selected by the users themselves, the first terminal and the second terminal can co-shoot against various background images, which broadens the possibilities of creation, satisfies users' diverse needs, and enables convenient background replacement.
In addition, in some implementations, the first terminal and/or the second terminal may capture images against a green-screen background; the terminal may then perform image recognition processing on the image shot against the green screen, extract the corresponding foreground object, superimpose it on the specified background image, and synthesize the third image, thereby achieving convenient background replacement. This allows video creation to break through the constraints of reality and users to give full play to their creativity, offers more creative possibilities and freedom, and helps improve the quality of users' creations.
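A minimal sketch of the green-screen keying mentioned above, using OpenCV; the HSV thresholds are illustrative defaults rather than values from this disclosure, and real capture conditions would require tuning.

```python
import cv2
import numpy as np

def green_screen_mask(frame_bgr: np.ndarray) -> np.ndarray:
    """Binary foreground mask for a frame shot against a green screen:
    classify green pixels in HSV space, then invert."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([35, 60, 60])     # assumed lower bound of "green"
    upper = np.array([85, 255, 255])   # assumed upper bound of "green"
    green = cv2.inRange(hsv, lower, upper)
    return cv2.bitwise_not(green)      # foreground = everything not green
```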
In an exemplary scenario, a group of talented young people have many creative ideas (for example, the "walking on water" lightness skill) that they cannot realize because of technical limitations, and they hope for a technology that can break through the constraints of space and the laws of physics to realize all kinds of creative ideas. Based on a green-screen background, they can communicate and co-shoot in real time through the co-shooting method provided by this embodiment; with real-time green-screen keying and background replacement, a creative co-shot that breaks through the constraints of reality is finally presented on each terminal, offering users more creative possibilities and freedom.
In other embodiments, the specified background image may also be the background image of the first image or the background image of the second image. In this way, the first object and the second object can be superimposed on the background image of an image captured by one terminal; for example, user A and user B can both co-shoot on the image captured by user B's terminal.
In some implementations, when the specified background image is the background image of the first image or of the second image, image recognition processing may be performed only on the target image that is not designated as the background image, to obtain the foreground object of the target image, which is then superimposed on the specified background image. For example, if the specified background image is the background image of the first image, image recognition processing may be performed only on the second image to obtain the second object of the second image, and the second object is superimposed as a foreground object on the first image to synthesize the third image. Likewise, if the specified background image is the background image of the second image, image recognition processing may be performed only on the first image to obtain the first object of the first image, and the first object is superimposed as a foreground object on the second image to synthesize the third image.
In other implementations, when the specified background image is the background image of the first image or of the second image, image recognition processing may still be performed on both the first image and the second image to extract their respective foreground objects, obtaining the first object and the second object, which are then superimposed as foreground objects on the background image of the first image or of the second image.
In some embodiments, as the first terminal and the second terminal capture the first and second images in real time, the first terminal continuously acquires the images captured in real time by the first and second terminals, so as to synthesize the corresponding third image from the first and second images; the images captured in real time by the first terminal and the second terminal can thus be composited in real time. In some implementations, after the synthesized third image is obtained, the third image may also be displayed, and the third image synthesized in real time may further be displayed, so that during co-shooting what the users see is what they get. When shooting a video through the embodiments of the present disclosure, users can thus watch while shooting, and the co-shot video is obtained without post-production synthesis, which greatly improves video production efficiency.
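The real-time watch-while-shooting loop could look like the toy sketch below, which reuses green_screen_mask and composite from the earlier sketches. Here both "terminals" are cameras on one machine purely for illustration; in the embodiments the second frame would arrive over the network from the second terminal, and the overlay position would come from a template or recognition result rather than the fixed offset assumed here.

```python
import cv2

cam1 = cv2.VideoCapture(0)   # stands in for the first terminal's camera
cam2 = cv2.VideoCapture(1)   # stands in for the remote second terminal
while True:
    ok1, first = cam1.read()
    ok2, second = cam2.read()
    if not (ok1 and ok2):
        break
    mask2 = green_screen_mask(second)                        # extract second object
    third = composite(first, [second], [mask2], [(40, 80)])  # overlay on first image
    cv2.imshow("co-shot preview", third)                     # what you see is what you get
    if cv2.waitKey(1) == 27:                                 # Esc ends the shoot
        break
cam1.release()
cam2.release()
```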
In one example, the first terminal may enter the shooting platform application, start the camera, and select at least one second terminal to which to send a co-shooting request. After receiving the co-shooting request, the second terminal confirms the establishment of a connection with the first terminal and also turns on its camera; the second terminal starts to transmit the second image captured by its own camera to the first terminal, and the first terminal also starts to transmit the first image captured by its own camera to the second terminal. The images captured by the first terminal and the second terminal can then be seen on each terminal, and a video is finally shot and recorded. Real-time multi-terminal co-shooting is thus realized, which not only provides co-shooting as a new social and interactive way of shooting, but also lets users watch while shooting; the co-shot video is available as soon as shooting ends, without post-production synthesis, which improves co-shooting efficiency.
In the co-shooting method provided by this embodiment, a first image captured by the first terminal and a second image captured by the second terminal are acquired, and a third image is then synthesized according to the first image and the second image, so that in the third image the first object in the first image and the second object in the second image are superimposed as foreground objects on a specified background image, where the first object is the foreground object of the first image and the second object is the foreground object of the second image. The embodiments of the present disclosure thus acquire the images respectively captured by the first terminal and the second terminal and, through synthesis, superimpose the first object in the first image and the second object in the second image as foreground objects on the specified background image to obtain the third image, so that co-shooting can be realized across multiple terminals: even when users are in different geographic locations, each can still take part in a joint shot from their own terminal, which breaks through spatial limitations, enriches the social and interactive gameplay of shooting, and improves the users' shooting experience. In addition, the embodiments of the present disclosure reduce the cost of video production: the images of multiple terminals can be composited in real time while shooting, without post-production synthesis, which improves creation efficiency.
In some possible implementations, the co-shooting method provided by the embodiments of the present disclosure may also be executed on a server. When the first terminal and at least one second terminal confirm co-shooting, the server may acquire the images captured in real time by the first terminal and the second terminal, perform synthesis processing on the obtained first and second images to obtain a composite image, and then send the composite image to the first terminal and the at least one second terminal, so that the first terminal and the at least one second terminal display the composite image obtained by this synthesis. Meanwhile, the server continues to receive the newly captured images sent by the first and second terminals, performs synthesis processing on the newly captured images to obtain a new composite image, and sends the new composite image to the first terminal and the at least one second terminal for display, and so on, until the server receives a stop-shooting instruction and ends the current shooting. The server can thus composite the images captured in real time by the first and second terminals in real time and send the resulting composite images to the first and second terminals for real-time display, thereby realizing real-time multi-terminal co-shooting. This not only provides co-shooting as a new social and interactive way of shooting, but also lets users watch while shooting and obtain the co-shot video as soon as shooting ends, without post-production synthesis, which improves co-shooting efficiency.
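A compact sketch of that server-side loop, with the transport elided: the message shapes, the stop condition, and the send() interface are assumptions for illustration, and composite is the overlay helper sketched earlier.

```python
import queue

def server_relay_loop(inbox: queue.Queue, terminals: list) -> None:
    """Receive frame data, composite, and broadcast, until shooting stops.
    Each element of `terminals` is assumed to expose a send() method."""
    while True:
        msg = inbox.get()
        if msg["type"] == "stop_shooting":
            break  # the server ends the current shooting session
        third = composite(msg["background"], [msg["foreground"]],
                          [msg["mask"]], [msg["position"]])
        for terminal in terminals:
            terminal.send(third)   # every participant displays the composite
```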
In some embodiments, the specified background image may be an image selected by the user; users can then co-shoot against any background, which satisfies users' co-shooting needs and enables richer co-shooting. Specifically, referring to FIG. 3, FIG. 3 shows a schematic flowchart of a co-shooting method provided by another embodiment of the present disclosure. The co-shooting method may include:
S210: Acquire a first image captured by the first terminal and a second image captured by the second terminal.
In some embodiments, before step S210, the first terminal may generate a co-shooting request according to an acquired co-shooting trigger instruction input by the user and send the co-shooting request to the second terminal, so that the second terminal sends the second image to the first terminal in response to the co-shooting request.
When co-shooting is needed, the first terminal may send a co-shooting request to at least one second terminal. In some implementations, the co-shooting request may be sent to the at least one second terminal through the server; for example, according to the first user's selection of at least one second user, the first terminal may send to the server a co-shooting request carrying the terminal identifiers of the selected second users' second terminals, so that the server sends the co-shooting request to the second terminal corresponding to each terminal identifier. In other implementations, the co-shooting request may also be sent directly to the second terminal; for example, the first terminal may send the co-shooting request to the second terminal in the form of a link, a QR code, or otherwise, and the second user can make the second terminal obtain the co-shooting request by clicking the link, scanning the QR code, or in some other way.
In one example, if the first user wants to co-shoot with at least one second user, the first user may, in social software installed on the first terminal (application software corresponding to a social platform), such as instant messaging software, select at least one second user and click the control corresponding to co-shooting in the function menu, so that the first terminal acquires the corresponding co-shooting trigger instruction, generates a co-shooting request according to the co-shooting trigger instruction, and sends the co-shooting request to the second user's second terminal through the server. Alternatively, the first terminal may first acquire the corresponding co-shooting trigger instruction through a click on the co-shooting control in the function menu, then generate a co-shooting request page, configure the co-shooting request to generate it, and send it to the second terminal of at least one second user selected by the user. After receiving the co-shooting request, the second terminal may pop up corresponding prompt information on its display interface, such as "The first user requests to co-shoot with you, please choose 'Decline' or 'Join'", and the second user can confirm the co-shooting request by clicking the "Join" button.
In another example, the first terminal may be pre-installed with application software corresponding to a shooting platform, referred to as the shooting application, and the first user may also open the shooting application to start the co-shooting function. For example, the shooting interface of the shooting application may display a control for triggering the co-shooting function, and a co-shooting request can be generated according to the acquired co-shooting trigger instruction corresponding to the control. Then, in one way, the first user may select at least one second user in the shooting application, so as to send the co-shooting request to the second terminal used by the second user; in another way, after generating the co-shooting request, the first user may forward the co-shooting request to other application programs, in which case the first terminal may jump to the other application program and send the co-shooting request through it to other users to invite them to co-shoot. The specific confirmation process is similar to the above and is not repeated here.
It can be understood that the above are only two example ways of co-shooting among multiple terminals, and the present disclosure is not limited to the above examples.
S220: Determine, according to the determined target co-shot template, a first superimposition position corresponding to the first object and a second superimposition position corresponding to the second object.
The target co-shot template can be used to determine the superimposition positions at which the foreground objects are superimposed on the background image, so the first terminal can determine the first superimposition position corresponding to the first object and the second superimposition position corresponding to the second object according to the determined target co-shot template.
In some embodiments, the target co-shot template may be determined by a template selection instruction input by the user. Specifically, before step S220, a template selection instruction may be acquired, and the target co-shot template may be determined from a plurality of candidate co-shot templates according to the template selection instruction. The template selection instruction may be input by the first user of the first terminal or sent by the second terminal.
In some embodiments, the template selection instruction may be triggered when the first terminal generates the co-shooting request. For example, when inputting the co-shooting trigger instruction on the first terminal, the first user may select the target co-shot template on which this co-shooting is based; the generated co-shooting request may then carry the template identifier of the target co-shot template, and the second terminal may obtain the template identifier in the co-shooting request to determine the corresponding target co-shot template, so as to synthesize the acquired first image and second image based on the target co-shot template.
As one implementation, the first terminal may display a shooting interface, which may display a template selection button. When a trigger operation on the template selection button is detected, a template selection page may be displayed; the template selection page may display multiple types of templates and may also display one or more templates under each type. For example, the template selection page may display one or more candidate co-shot templates for co-shooting. Then, according to the co-shot template selected by the user, i.e., the target co-shot template, the corresponding co-shooting trigger instruction can be acquired and a corresponding co-shooting request generated, which may carry the template identifier of the target co-shot template.
As another implementation, when the first terminal acquires the co-shooting trigger instruction, it may display a co-shooting request generation page, which may display a template selection page or provide an entry through which the first terminal can jump to the template selection page. The template selection page may display at least one candidate co-shot template for the user to choose from, so that the first user can select the target co-shot template on which this co-shooting is based when generating the co-shooting request.
In one example, referring to FIG. 4, which shows a schematic diagram of a display interface provided by an exemplary embodiment of the present disclosure: in the display interface shown in FIG. 4, the first terminal displays a template selection page 410 in response to a trigger operation on the template selection button, and the template selection page 410 displays a plurality of co-shot templates 411. The first terminal detects a trigger operation based on the template selection page 410 and determines the co-shot template 411 corresponding to the trigger operation; the corresponding co-shot template can then be determined as the target co-shot template and the corresponding template selection instruction acquired.
In other embodiments, the template selection instruction may also be triggered, after any one of the at least one second terminal confirms the co-shooting request issued by the first terminal, by any one of the first terminal and the at least one second terminal participating in the co-shooting. For example, in response to the confirmation operation of at least one terminal on the co-shooting request, at least one of the first terminal and the second terminal may display a template selection page and acquire a template selection instruction based on the template selection page, so as to determine the target co-shot template on which this co-shooting is based.
In some implementations, before generating the co-shooting request, the first terminal may acquire the target co-shot template selected by the user, so that the generated co-shooting request carries the template identifier of the selected target co-shot template, and then send the co-shooting request to the second terminal, so that the second terminal sends the second image to the first terminal in response to the co-shooting request and subsequent synthesis can be performed based on the target co-shot template corresponding to the template identifier.
In other implementations, the first terminal may also send the template selection instruction and the co-shooting request separately; for example, after sending the co-shooting request, the first terminal displays the template selection page and prompts the user to select the target co-shot template from multiple candidate co-shot templates based on the template selection page.
S230: Superimpose the first object on the first superimposition position of the background image and the second object on the second superimposition position of the background image, to synthesize the third image.
After the first superimposition position and the second superimposition position are determined according to the determined target co-shot template, the first object can be superimposed on the first superimposition position of the background image and the second object on the second superimposition position of the background image, to synthesize the third image.
In one example, taking the specified background image being a fourth image selected by the user as an example, referring to FIG. 5, which shows a schematic diagram of another display interface provided by an exemplary embodiment of the present disclosure: as shown in FIG. 5, user A initiates a co-shooting request, and users B, C, and D confirm the co-shooting request. The terminal acquires the images captured in real time by the terminals of users A, B, C, and D, determines the target co-shot template according to the template selection instruction, determines, based on the target co-shot template, that the superimposition positions corresponding to users A, B, C, and D are positions 501, 502, 503, and 504, respectively, and superimposes the foreground objects corresponding to users A, B, C, and D onto positions 501, 502, 503, and 504, respectively, finally obtaining the third image. When the terminal displays the third image, the foreground objects of the images captured by the terminals of users A, B, C, and D can be displayed simultaneously based on the target co-shot template; if each user's terminal obtains the third image, users A, B, C, and D can simultaneously see themselves and the others in the joint shot on their own terminals.
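One plausible data shape for such a template, matching the four-slot layout of FIG. 5; the field names and the normalized-coordinate convention are invented for this sketch and are not specified by the disclosure.

```python
# Anchors are (x, y) fractions of the background size, one slot per user.
TEMPLATE_FOUR_UP = {
    "template_id": "grid-4",
    "background": "user_selected",           # e.g. a fourth image
    "slots": [
        {"user": "A", "anchor": (0.05, 0.55)},   # position 501
        {"user": "B", "anchor": (0.30, 0.55)},   # position 502
        {"user": "C", "anchor": (0.55, 0.55)},   # position 503
        {"user": "D", "anchor": (0.80, 0.55)},   # position 504
    ],
}

def slot_to_pixels(anchor: tuple[float, float],
                   bg_shape: tuple[int, ...]) -> tuple[int, int]:
    """Convert a normalized anchor into pixel coordinates on the background."""
    return int(anchor[0] * bg_shape[1]), int(anchor[1] * bg_shape[0])
```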
It should be noted that the above is only one example and the present disclosure is not limited to it; embodiments of the present disclosure can also support synthesis with various other co-shot templates, which are not exhaustively listed here for reasons of space.
In an exemplary application scenario, two members of a boy band are on holiday in Hainan and Thailand, respectively, during the National Day holiday, but they want to co-shoot a "bunny dance" segment to take part in a challenge on a related topic. They can then co-shoot through the co-shooting method provided by the embodiments of the present disclosure, thereby breaking through spatial limitations and completing a real-time joint shot across locations, so that the users participating in the co-shooting can ignore geographic separation and enjoy the fun of shooting together.
It should be noted that, for parts not described in detail in this embodiment, refer to the foregoing embodiments; details are not repeated here.
Thus, with the co-shooting method provided by this embodiment, on the basis of the foregoing embodiments, the server can determine the first superimposition position corresponding to the first object and the second superimposition position corresponding to the second object according to the determined target co-shot template, superimpose the first object on the first superimposition position of the background image and the second object on the second superimposition position of the background image, and synthesize the third image. Depending on the determined target co-shot template, the superimposition positions of the foreground objects on the background image can differ, so diverse co-shot images or videos with different layouts can be obtained. This enriches the social and interactive gameplay of shooting, improves creation efficiency as well as the diversity and quality of creation, and at the same time improves the user's experience of the co-shooting process.
In addition, in some embodiments, the specified background image may be the background image of the first image or the background image of the second image, so that the first user of the first terminal and the second user of the second terminal can co-shoot against the background of either party, allowing users to break through spatial limitations and conveniently co-shoot anytime and anywhere. Specifically, referring to FIG. 6, which shows a schematic flowchart of a co-shooting method provided by yet another embodiment of the present disclosure, the method may include:
S310: Acquire a first image captured by the first terminal and a second image captured by the second terminal.
S320: Determine a target superimposition position corresponding to the target object.
S330: Superimpose the target object on the target superimposition position of the background image to synthesize the third image.
If the background image is the background image of the first image, the target object is the second object; the first image as a whole can then be used as the background image of this co-shooting, only the second object is extracted from the second image, and it is superimposed as a foreground object on the first image to synthesize the third image.
Similarly, if the background image is the background image of the second image, the target object is the first object; the second image as a whole can then be used as the background image of this co-shooting, only the first object is extracted from the first image, and it is superimposed as a foreground object on the second image to synthesize the third image.
There may be multiple ways of determining the target superimposition position corresponding to the target object, which is not limited in this embodiment.
In some implementations, the target superimposition position corresponding to the target object may be determined according to the determined target co-shot template. For the determination of the target co-shot template, refer to the foregoing embodiments. After the user selects the target co-shot template on which the co-shooting is based, it can be determined, according to the configuration information of the determined target co-shot template, whether the background image of the co-shooting is the first image captured by the first terminal or the second image captured by the second terminal, and the foreground object of the other image, which does not serve as the background image, is determined as the target object; the target superimposition position of the target object on the background image can also be determined according to the configuration information. Thus, according to the determined target co-shot template, it can be determined which terminal's captured image serves as the background image of this co-shooting, which terminal's captured image provides the foreground object used as the target object, and at which position on the background image the target object is superimposed; the target object is then extracted and superimposed at its corresponding target superimposition position to synthesize the third image. This realizes multi-user co-shooting based on the image captured by any terminal.
It can be understood that different target co-shot templates indicate different target superimposition positions, so the third image can be synthesized with different layouts according to different target co-shot templates.
In other implementations, the target superimposition position corresponding to the target object may also be determined according to an image recognition result of the background image. Since the background image in this embodiment is the background image of the first image or of the second image, the image recognition of the background image may be performed when determining the foreground object and background object of the first image and/or the second image, and the target superimposition position corresponding to the target object can then be determined according to the image recognition result.
As one implementation, if the background image is the entire first image or the entire second image, the foreground object of the background image can be recognized, and based on the image position of the recognized foreground object, an image position at a preset distance from the foreground object in a preset direction of the foreground object can be determined as the target superimposition position.
The preset direction and the preset distance can both be determined according to actual needs; they may be preset by the program or customized by the user, which is not limited here. In some examples, the preset direction may be a general direction such as horizontally rightward, vertically upward, horizontally leftward, or vertically downward, or a direction characterized by a specific deviation angle; for example, with the image position of the foreground object as the reference point and vertically upward as 0 degrees, the direction may be 30 degrees clockwise, 80 degrees clockwise, and so on, which is not limited here. In some examples, the preset distance may be measured in pixels, for example 50 pixels or 20 pixels, which is not limited here. Within a certain range, the preset distance may increase as the image resolution increases, so as to avoid the first object and the second object being far apart in the synthesized third image because the preset distance is too large at a low image resolution, or being too close, possibly even occluding each other, because the preset distance is too small at a high image resolution, thereby achieving a better synthesis result.
For example, if the background image is the background image of the first image, the target object is the second object. In this case, the target superimposition position may be determined within an image region of the first image that does not contain the first object; alternatively, in the first image, a position at a preset distance from the first object in a preset direction of the first object may be determined as the target superimposition position. For instance, a position 60 pixels away from the image position of the first object in the 30-degrees-clockwise direction is determined as the target superimposition position, and the second object is superimposed on the target superimposition position of the first image to synthesize the third image; in the final third image, the second object is displayed at the position 60 pixels away from the first object in the 30-degrees-clockwise direction.
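The preset direction/distance rule reduces to a small trigonometric offset. The sketch below follows the convention used in the example above (angles measured clockwise from vertically upward, distances in pixels), which is one reading of the text.

```python
import math

def target_overlay_position(fg_xy: tuple[int, int],
                            angle_deg: float,
                            distance_px: float) -> tuple[int, int]:
    """Offset from the recognized foreground object's image position.
    0 degrees is vertically upward; angles grow clockwise; image y grows
    downward, hence the minus sign on the cosine term."""
    rad = math.radians(angle_deg)
    x, y = fg_xy
    return (round(x + distance_px * math.sin(rad)),
            round(y - distance_px * math.cos(rad)))

# e.g. 30 degrees clockwise, 60 pixels away from an object at (200, 300):
# target_overlay_position((200, 300), 30, 60) -> (230, 248)
```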
In still other implementations, an operation performed by the user on the background image may be detected, the position corresponding to the operation may be determined as the target superimposition position corresponding to the target object, and the target object is then superimposed on the target superimposition position of the background image to synthesize the third image.
In one example, taking the specified background image being the background image of the first image as an example, referring to FIG. 7, which shows a schematic diagram of yet another display interface provided by an exemplary embodiment of the present disclosure: as shown in FIG. 7, user A initiates a co-shooting request, and users B, C, and D confirm the co-shooting request; this co-shooting specifies the first image as the background image. The terminal acquires the images captured in real time by the terminals of users A, B, C, and D, determines the target co-shot template according to the template selection instruction, determines, based on the target co-shot template, that the superimposition positions corresponding to users B, C, and D are positions 702, 703, and 704, respectively, and superimposes the foreground objects corresponding to users B, C, and D onto positions 702, 703, and 704, respectively, finally obtaining the third image. When the terminal displays the third image, the foreground objects of the images captured by the terminals of users A, B, C, and D can be displayed simultaneously based on the target co-shot template, with the foreground objects corresponding to users B, C, and D superimposed on the background image corresponding to user A.
It should be noted that the above is only one example and the present disclosure is not limited to it; embodiments of the present disclosure can also support synthesis with various other co-shot templates, which are not exhaustively listed here for reasons of space.
It should be noted that, for parts not described in detail in this embodiment, refer to the foregoing embodiments; details are not repeated here.
Thus, with the co-shooting method provided by this embodiment, the first image captured by the first terminal or the second image captured by the second terminal can serve as the background image, and foreground-background separation is performed on the other image, which does not serve as the background image, to obtain the corresponding foreground object as the target object, which is superimposed on the target superimposition position corresponding to the target object to synthesize the third image. Co-shooting is thus realized based on the background image of any terminal participating in the co-shooting, so that co-shooting gameplay such as group photos is no longer restricted by geographic location.
In addition, in some embodiments, during the co-shooting, both the first user and the second user may input a special effect input instruction on their own terminal to add special effects to the currently shot image, further enriching the shooting gameplay. While interacting through cooperative shooting, users can also add special effects to their own or the other party's captured image, making the video more interesting; moreover, the operation is simple and real-time, which improves the creation efficiency and quality of video creators, i.e., of each user participating in the cooperative shooting, and improves the flexibility and fun of cooperative shooting. Specifically, referring to FIG. 8, which shows a schematic flowchart of a co-shooting method provided by still another embodiment of the present disclosure, in this embodiment the method may include:
S410: Acquire a first image captured by the first terminal and a second image captured by the second terminal.
S420: Acquire a special effect input instruction.
S430: Synthesize a third image according to the first image and the second image.
S440: Render the target special effect to the to-be-displayed position in the third image to obtain a special-effect-processed third image.
The special effect input instruction includes the target special effect to be added and the to-be-displayed position corresponding to the target special effect. It should be noted that the special effect input instruction may be acquired before or after the third image is synthesized, i.e., step S420 may come before or after step S430, which is not limited in this embodiment.
In one example, the shooting interface of the terminal during co-shooting may also display a special effect adding button. When a trigger operation on the special effect adding button is detected, a special effect adding page may be displayed, which may display multiple special effects, and the selected special effect is determined as the target special effect to be added. Special effects may include effects such as adding bunny ears, face-swapping effects, and face-slimming effects; the foregoing effects may contain no sound, may be combined with sound effects, or may be pure sound effects, and so on. This embodiment does not limit the types of special effects that can be added; in other embodiments, special effects may also be called tools, and the likes of filters and beautification may also be regarded as a category of special effects.
In some implementations, different special effects have different adding attributes. For example, some special effects are local effects, i.e., added only at a local position; after selecting the target special effect to be added, the user also needs to select a position on the shooting interface as the to-be-displayed position corresponding to the target special effect. As another example, some special effects are global effects, in which case the user does not need to select a position and the entire third image is directly used as the to-be-displayed position.
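For a local effect, the rendering step amounts to blending the effect's pixels at its to-be-displayed position. The sketch below assumes an RGBA sticker (for example, bunny ears) and a top-left anchor, neither of which is specified by the text; border clipping is omitted for brevity.

```python
import numpy as np

def render_effect(third_image: np.ndarray, effect_rgba: np.ndarray,
                  top_left: tuple[int, int]) -> np.ndarray:
    """Alpha-blend an RGBA effect onto the synthesized third image at the
    to-be-displayed position. A global effect (e.g. a filter) would instead
    treat the whole frame as its target."""
    x, y = top_left
    h, w = effect_rgba.shape[:2]
    roi = third_image[y:y + h, x:x + w].astype(np.float32)
    alpha = effect_rgba[..., 3:4].astype(np.float32) / 255.0
    blended = alpha * effect_rgba[..., :3] + (1.0 - alpha) * roi
    third_image[y:y + h, x:x + w] = blended.astype(np.uint8)
    return third_image
```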
Further, in some embodiments, after the special-effect-processed third image is obtained, it may be displayed, so that users can see the effect of the added special effect in real time. This further improves the social interactivity of shooting and the fun of co-shooting, helps improve shooting quality, and allows special effects to be added conveniently during the co-shooting without post-production, which also improves creation efficiency.
It should be noted that, for parts not described in detail in this embodiment, refer to the foregoing embodiments; details are not repeated here.
Thus, with the co-shooting method provided by this embodiment, a special effect input instruction can also be acquired during the co-shooting, and the target special effect is rendered to the to-be-displayed position in the third image to obtain the rendered third image. Users are thereby allowed to add special effects during the co-shooting, making co-shooting convenient. Moreover, since the co-shooting realized by the embodiments of the present disclosure is shot in real time and displayed in real time, the users participating in the co-shooting can see the picture with the added special effects in real time, which further improves the social interactivity of shooting and the fun of co-shooting, helps improve shooting quality, allows special effects to be added conveniently during the co-shooting without post-production, and also improves creation efficiency.
Referring to FIG. 9, a block diagram of the modules of a co-shooting apparatus provided by an embodiment of the present disclosure is shown. The co-shooting apparatus 900 can be applied to a terminal or a server and may specifically include an image acquisition module 910 and an image synthesis module 920, where:
The image acquisition module 910 is configured to acquire a first image captured by a first terminal and a second image captured by a second terminal.
The image synthesis module 920 is configured to synthesize a third image according to the first image and the second image; in the third image, the first object in the first image and the second object in the second image are superimposed as foreground objects on a specified background image, the first object being the foreground object of the first image and the second object being the foreground object of the second image.
In one embodiment, the background image is a fourth image selected by the user.
In one embodiment, the background image is the background image of the first image or the background image of the second image.
In one embodiment, the image synthesis module 920 includes a position determination submodule and an overlay synthesis submodule, where:
the position determination submodule is configured to determine, according to the determined target co-shot template, a first superimposition position corresponding to the first object and a second superimposition position corresponding to the second object;
the overlay synthesis submodule is configured to superimpose the first object on the first superimposition position of the background image and the second object on the second superimposition position of the background image, to synthesize the third image.
In one embodiment, the image synthesis module 920 includes a target position determination submodule and a target overlay synthesis submodule, where:
the target position determination submodule is configured to determine a target superimposition position corresponding to a target object; if the background image is the background image of the first image, the target object is the second object, and if the background image is the background image of the second image, the target object is the first object;
the target overlay synthesis submodule is configured to superimpose the target object on the target superimposition position of the background image, to synthesize the third image.
In one embodiment, the target position determination submodule includes a template determination unit or a recognition determination unit, where:
the template determination unit is configured to determine the target superimposition position corresponding to the target object according to the determined target co-shot template;
the recognition determination unit is configured to determine the target superimposition position corresponding to the target object according to an image recognition result of the background image.
In one embodiment, before the third image is synthesized according to the first image and the second image, the co-shooting apparatus 900 further includes a template instruction acquisition module and a template determination module, where:
the template instruction acquisition module is configured to acquire a template selection instruction;
the template determination module is configured to determine the target co-shot template from a plurality of candidate co-shot templates according to the template selection instruction, the target co-shot template being used to determine the superimposition position at which the foreground object is superimposed on the background image.
In one embodiment, before the third image is synthesized according to the first image and the second image, the co-shooting apparatus 900 further includes a special effect instruction acquisition module and a special effect processing module, where:
the special effect instruction acquisition module is configured to acquire a special effect input instruction, the special effect input instruction including a target special effect to be added and a to-be-displayed position corresponding to the target special effect;
after the third image is synthesized according to the first image and the second image, the special effect processing module is configured to render the target special effect to the to-be-displayed position in the third image, to obtain a special-effect-processed third image.
In one embodiment, if the co-shooting apparatus 900 is applied to the first terminal, before the first image captured by the first terminal and the second image captured by the second terminal are acquired, the co-shooting apparatus 900 further includes a co-shooting request module configured to generate a co-shooting request according to an acquired co-shooting trigger instruction input by the user and send the co-shooting request to the second terminal, so that the second terminal sends the second image to the first terminal in response to the co-shooting request.
The co-shooting apparatus of the embodiments of the present disclosure can execute the co-shooting method provided by the embodiments of the present disclosure, and its implementation principle is similar. The actions executed by the modules of the co-shooting apparatus in the embodiments of the present disclosure correspond to the steps of the co-shooting method in the embodiments of the present disclosure; for a detailed functional description of each module of the co-shooting apparatus, refer to the description of the corresponding co-shooting method shown above, which is not repeated here.
Referring now to FIG. 10, a structural block diagram of an electronic device 1000 suitable for implementing embodiments of the present disclosure is shown. The electronic device in the embodiments of the present disclosure may include, but is not limited to, terminals such as computers and mobile phones, servers, and the like. The electronic device shown in FIG. 10 is only an example and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
The electronic device 1000 includes a memory and a processor, where the processor may be referred to below as the processing device 1001, and the memory may include at least one of a read-only memory (ROM) 1002, a random access memory (RAM) 1003, and a storage device 1008 described below, specifically as follows:
As shown in FIG. 10, the electronic device 1000 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 1001, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage device 1008 into a random access memory (RAM) 1003. The RAM 1003 also stores various programs and data required for the operation of the electronic device 1000. The processing device 1001, the ROM 1002, and the RAM 1003 are connected to each other through a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
Generally, the following devices may be connected to the I/O interface 1005: input devices 1006 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output devices 1007 including, for example, a liquid crystal display (LCD), speakers, and vibrators; storage devices 1008 including, for example, a magnetic tape and a hard disk; and a communication device 1009. The communication device 1009 may allow the electronic device 1000 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 10 shows an electronic device 1000 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer-readable storage medium, the computer program containing program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 1009, or installed from the storage device 1008, or installed from the ROM 1002. When the computer program is executed by the processing device 1001, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
It should be noted that the above computer-readable storage medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a computer-readable storage medium may be transmitted using any suitable medium, including but not limited to electrical wire, optical cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some implementations, the client and the server may communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (for example, the Internet), and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable storage medium may be included in the above electronic device, or it may exist separately without being assembled into the electronic device.
The above computer-readable storage medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to perform the following steps: acquiring a first image captured by the first terminal and a second image captured by the second terminal; synthesizing a third image according to the first image and the second image; in the third image, the first object in the first image and the second object in the second image are superimposed as foreground objects on a specified background image, the first object being the foreground object of the first image and the second object being the foreground object of the second image.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The modules or units described in the embodiments of the present disclosure may be implemented in software or in hardware. In some cases, the name of a module or unit does not constitute a limitation on the unit itself; for example, the display module may also be described as "a module for displaying a resource uploading interface".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chips (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a computer-readable storage medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The computer-readable storage medium may be a machine-readable signal medium or a machine-readable storage medium. The computer-readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, a co-shooting method is provided, applied to a first terminal. The method includes: acquiring a first image captured by the first terminal and a second image captured by a second terminal; and synthesizing a third image according to the first image and the second image; in the third image, the first object in the first image and the second object in the second image are superimposed as foreground objects on a specified background image, the first object being the foreground object of the first image and the second object being the foreground object of the second image.
In one embodiment, the background image is a fourth image selected by the user.
In one embodiment, the background image is the background image of the first image or the background image of the second image.
In one embodiment, synthesizing the third image according to the first image and the second image includes: determining, according to the determined target co-shot template, a first superimposition position corresponding to the first object and a second superimposition position corresponding to the second object; and superimposing the first object on the first superimposition position of the background image and the second object on the second superimposition position of the background image to synthesize the third image.
In one embodiment, synthesizing the third image according to the first image and the second image includes: determining a target superimposition position corresponding to a target object, where if the background image is the background image of the first image, the target object is the second object, and if the background image is the background image of the second image, the target object is the first object; and superimposing the target object on the target superimposition position of the background image to synthesize the third image.
In one embodiment, determining the target superimposition position at which the target object is superimposed on the background image includes: determining the target superimposition position corresponding to the target object according to the determined target co-shot template; or determining the target superimposition position corresponding to the target object according to an image recognition result of the background image.
In one embodiment, before synthesizing the third image according to the first image and the second image, the method further includes: acquiring a template selection instruction; and determining the target co-shot template from a plurality of candidate co-shot templates according to the template selection instruction, the target co-shot template being used to determine the superimposition position at which the foreground object is superimposed on the background image.
In one embodiment, before synthesizing the third image according to the first image and the second image, the method further includes: acquiring a special effect input instruction, the special effect input instruction including a target special effect to be added and a to-be-displayed position corresponding to the target special effect; and after synthesizing the third image according to the first image and the second image, the method further includes: rendering the target special effect to the to-be-displayed position in the third image to obtain a special-effect-processed third image.
In one embodiment, if the co-shooting method is applied to the first terminal, before acquiring the first image captured by the first terminal and the second image captured by the second terminal, the method further includes: generating a co-shooting request according to an acquired co-shooting trigger instruction input by the user, and sending the co-shooting request to the second terminal, so that the second terminal sends the second image to the first terminal in response to the co-shooting request.
According to one or more embodiments of the present disclosure, a co-shooting apparatus is provided, including: an image acquisition module configured to acquire a first image captured by a first terminal and a second image captured by a second terminal; and an image synthesis module configured to synthesize a third image according to the first image and the second image; in the third image, the first object in the first image and the second object in the second image are superimposed as foreground objects on a specified background image, the first object being the foreground object of the first image and the second object being the foreground object of the second image.
In one embodiment, the background image is a fourth image selected by the user.
In one embodiment, the background image is the background image of the first image or the background image of the second image.
In one embodiment, the image synthesis module includes a position determination submodule and an overlay synthesis submodule, where the position determination submodule is configured to determine, according to the determined target co-shot template, a first superimposition position corresponding to the first object and a second superimposition position corresponding to the second object, and the overlay synthesis submodule is configured to superimpose the first object on the first superimposition position of the background image and the second object on the second superimposition position of the background image to synthesize the third image.
In one embodiment, the image synthesis module includes a target position determination submodule and a target overlay synthesis submodule, where the target position determination submodule is configured to determine a target superimposition position corresponding to a target object, the target object being the second object if the background image is the background image of the first image and the first object if the background image is the background image of the second image, and the target overlay synthesis submodule is configured to superimpose the target object on the target superimposition position of the background image to synthesize the third image.
In one embodiment, the target position determination submodule includes a template determination unit or a recognition determination unit, where the template determination unit is configured to determine the target superimposition position corresponding to the target object according to the determined target co-shot template, and the recognition determination unit is configured to determine the target superimposition position corresponding to the target object according to an image recognition result of the background image.
In one embodiment, the co-shooting apparatus further includes a template instruction acquisition module and a template determination module, where the template instruction acquisition module is configured to acquire a template selection instruction before the third image is synthesized according to the first image and the second image, and the template determination module is configured to determine the target co-shot template from a plurality of candidate co-shot templates according to the template selection instruction, the target co-shot template being used to determine the superimposition position at which the foreground object is superimposed on the background image.
In one embodiment, the co-shooting apparatus further includes a special effect instruction acquisition module and a special effect processing module, where the special effect instruction acquisition module is configured to acquire, before the third image is synthesized according to the first image and the second image, a special effect input instruction, the special effect input instruction including a target special effect to be added and a to-be-displayed position corresponding to the target special effect, and the special effect processing module is configured to render, after the third image is synthesized, the target special effect to the to-be-displayed position in the third image to obtain a special-effect-processed third image.
In one embodiment, if the co-shooting apparatus is applied to the first terminal, the co-shooting apparatus further includes a co-shooting request module configured to, before the first image captured by the first terminal and the second image captured by the second terminal are acquired, generate a co-shooting request according to an acquired co-shooting trigger instruction input by the user and send the co-shooting request to the second terminal, so that the second terminal sends the second image to the first terminal in response to the co-shooting request.
The above description is only a description of the preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art should understand that the scope of the disclosure involved in the present disclosure is not limited to technical solutions formed by specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
In addition, although the operations are depicted in a specific order, this should not be understood as requiring that these operations be performed in the specific order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Although the subject matter has been described in language specific to structural features and/or method logical actions, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. Rather, the specific features and actions described above are merely example forms of implementing the claims.

Claims (12)

  1. A co-shooting method, characterized in that the method comprises:
    acquiring a first image captured by a first terminal and a second image captured by a second terminal;
    synthesizing a third image according to the first image and the second image, wherein in the third image, a first object in the first image and a second object in the second image are superimposed as foreground objects on a specified background image, the first object being a foreground object of the first image and the second object being a foreground object of the second image.
  2. The method according to claim 1, characterized in that the background image is a fourth image selected by a user.
  3. The method according to claim 1, characterized in that the background image is a background image of the first image or a background image of the second image.
  4. The method according to claim 2, characterized in that synthesizing a third image according to the first image and the second image comprises:
    determining, according to a determined target co-shot template, a first superimposition position corresponding to the first object and a second superimposition position corresponding to the second object;
    superimposing the first object on the first superimposition position of the background image and the second object on the second superimposition position of the background image to synthesize the third image.
  5. The method according to claim 3, characterized in that synthesizing a third image according to the first image and the second image comprises:
    determining a target superimposition position corresponding to a target object, wherein if the background image is the background image of the first image, the target object is the second object, and if the background image is the background image of the second image, the target object is the first object;
    superimposing the target object on the target superimposition position of the background image to synthesize the third image.
  6. The method according to claim 5, characterized in that determining the target superimposition position at which the target object is superimposed on the background image comprises:
    determining the target superimposition position corresponding to the target object according to the determined target co-shot template; or
    determining the target superimposition position corresponding to the target object according to an image recognition result of the background image.
  7. The method according to any one of claims 4 to 6, characterized in that before synthesizing a third image according to the first image and the second image, the method further comprises:
    acquiring a template selection instruction;
    determining the target co-shot template from a plurality of candidate co-shot templates according to the template selection instruction, the target co-shot template being used to determine a superimposition position at which the foreground object is superimposed on the background image.
  8. The method according to any one of claims 1 to 3, characterized in that the method further comprises:
    acquiring a special effect input instruction, the special effect input instruction comprising a target special effect to be added and a to-be-displayed position corresponding to the target special effect;
    after synthesizing a third image according to the first image and the second image, the method further comprises:
    rendering the target special effect to the to-be-displayed position in the third image to obtain a special-effect-processed third image.
  9. The method according to any one of claims 1 to 3, characterized in that the method is applied to the first terminal, and before acquiring a first image captured by a first terminal and a second image captured by a second terminal, the method further comprises:
    generating a co-shooting request according to an acquired co-shooting trigger instruction input by a user, and sending the co-shooting request to the second terminal, so that the second terminal sends the second image to the first terminal in response to the co-shooting request.
  10. A co-shooting apparatus, characterized in that the apparatus comprises:
    an image acquisition module configured to acquire a first image captured by a first terminal and a second image captured by a second terminal;
    an image synthesis module configured to synthesize a third image according to the first image and the second image, wherein in the third image, a first object in the first image and a second object in the second image are superimposed as foreground objects on a specified background image, the first object being a foreground object of the first image and the second object being a foreground object of the second image.
  11. An electronic device, characterized by comprising:
    one or more processors;
    a memory;
    one or more computer programs, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, and the one or more computer programs are configured to perform the co-shooting method according to any one of claims 1 to 9.
  12. A computer-readable storage medium, characterized in that the computer-readable storage medium is configured to store a computer program which, when invoked and executed by a processor, performs the co-shooting method according to any one of claims 1 to 9.
PCT/CN2021/116519 2020-09-04 2021-09-03 Co-shooting method and apparatus, electronic device, and computer-readable storage medium WO2022048651A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/044,062 US20230336684A1 (en) 2020-09-04 2021-09-03 Cooperative photographing method and apparatus, electronic device, and computer-readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010924413.4 2020-09-04
CN202010924413.4A CN112004034A (zh) 2020-09-04 2020-09-04 Co-shooting method and apparatus, electronic device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2022048651A1 true WO2022048651A1 (zh) 2022-03-10

Family

ID=73468774

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/116519 WO2022048651A1 (zh) 2020-09-04 2021-09-03 合拍方法、装置、电子设备及计算机可读存储介质

Country Status (3)

US (1) US20230336684A1 (zh)
CN (1) CN112004034A (zh)
WO (1) WO2022048651A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116546309A (zh) * 2023-07-04 2023-08-04 广州方图科技有限公司 Multi-person photographing method and apparatus for self-service photo equipment, electronic device, and storage medium

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112004034A (zh) * 2020-09-04 2020-11-27 北京字节跳动网络技术有限公司 Co-shooting method and apparatus, electronic device, and computer-readable storage medium
EP4270300A4 (en) * 2021-02-09 2024-07-17 Huawei Tech Co Ltd MULTI-PERSON CAPTURE METHOD AND ELECTRONIC DEVICE
CN113806306B (zh) 2021-08-04 2024-01-16 北京字跳网络技术有限公司 Media file processing method, apparatus, device, readable storage medium, and product
CN113727024B (zh) * 2021-08-30 2023-07-25 北京达佳互联信息技术有限公司 Multimedia information generation method and apparatus, electronic device, and storage medium
CN113946254B (zh) * 2021-11-01 2023-10-20 北京字跳网络技术有限公司 Content display method, apparatus, device, and medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10164435A (ja) * 1996-10-04 1998-06-19 Nippon Telegr & Teleph Corp <Ntt> Method and apparatus for spatio-temporal integration and management of a plurality of videos, and recording medium recording the program
JP2015012535A (ja) * 2013-07-01 2015-01-19 オリンパス株式会社 Imaging device and imaging method
CN107404617A (zh) * 2017-07-21 2017-11-28 努比亚技术有限公司 Shooting method, terminal, and computer storage medium
CN110111238A (zh) * 2019-04-24 2019-08-09 薄涛 Image processing method, apparatus, device, and storage medium
CN110602396A (zh) * 2019-09-11 2019-12-20 腾讯科技(深圳)有限公司 Intelligent group-photo method and apparatus, electronic device, and storage medium
CN110992256A (zh) * 2019-12-17 2020-04-10 腾讯科技(深圳)有限公司 Image processing method, apparatus, device, and storage medium
CN111050072A (zh) * 2019-12-24 2020-04-21 Oppo广东移动通信有限公司 Remote co-shooting method, device, and storage medium
CN112004034A (zh) * 2020-09-04 2020-11-27 北京字节跳动网络技术有限公司 Co-shooting method and apparatus, electronic device, and computer-readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111210397B (zh) * 2020-01-10 2021-09-10 口碑(上海)信息技术有限公司 Image processing method, image display method, apparatus, and electronic device


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116546309A (zh) * 2023-07-04 2023-08-04 广州方图科技有限公司 Multi-person photographing method and apparatus for self-service photo equipment, electronic device, and storage medium
CN116546309B (zh) * 2023-07-04 2023-10-20 广州方图科技有限公司 Multi-person photographing method and apparatus for self-service photo equipment, electronic device, and storage medium

Also Published As

Publication number Publication date
CN112004034A (zh) 2020-11-27
US20230336684A1 (en) 2023-10-19


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21863713

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21863713

Country of ref document: EP

Kind code of ref document: A1