WO2023030107A1 - Co-shooting method and apparatus, electronic device, and readable medium - Google Patents

Co-shooting method and apparatus, electronic device, and readable medium

Info

Publication number: WO2023030107A1
Application number: PCT/CN2022/114379
Authority: WO (WIPO/PCT)
Prior art keywords: instance, shooting, template material, user, background
Other languages: English (en), French (fr)
Inventor: 彭威
Original assignee: 北京字跳网络技术有限公司
Application filed by 北京字跳网络技术有限公司
Publication of WO2023030107A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951: Computational photography systems using two or more images to influence resolution, frame rate or aspect ratio
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/272: Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Definitions

  • The present disclosure relates to the technical field of image processing, and for example to a co-shooting method and apparatus, an electronic device, and a readable medium.
  • However, users' shooting environments are complex and diverse, and it is difficult for a user to interpret the original character precisely; combined with differences in camera distance and position and variations in the actions of each imitation, this makes it difficult to composite the original video with the content shot by the user.
  • In existing co-shooting applications, the characters or scenes available for users to imitate are relatively simple, basically involving only facial expressions or small head movements. If the range of motion is large, or body movements change, high-quality co-shooting cannot be completed: the user's footage becomes disconnected from the original material, transitions are unnatural, and the poor compositing quality degrades the user experience.
  • The present disclosure provides a co-shooting method and apparatus, an electronic device, and a readable medium, so as to improve the consistency between the co-shooting material and the template material and improve the accuracy of co-shooting.
  • The present disclosure provides a co-shooting method, including: extracting contour information of a first instance in a template material, the template material including the first instance and an instance background; acquiring a co-shooting material imported by a user based on the contour information, the co-shooting material including a second instance corresponding to the first instance; and adding the second instance to a region corresponding to the first instance in the template material to obtain a co-shooting result, wherein the co-shooting result includes the second instance and the instance background.
  • The present disclosure also provides a co-shooting apparatus, including:
  • a contour extraction module, configured to extract contour information of a first instance in a template material, the template material including the first instance and an instance background;
  • a material acquisition module, configured to acquire a co-shooting material imported by a user based on the contour information, the co-shooting material including a second instance corresponding to the first instance; and
  • a co-shooting module, configured to add the second instance to a region corresponding to the first instance in the template material to obtain a co-shooting result, wherein the co-shooting result includes the second instance and the instance background.
  • The present disclosure also provides an electronic device, including:
  • one or more processors; and
  • a storage device configured to store one or more programs;
  • wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the above co-shooting method.
  • The present disclosure also provides a computer-readable medium on which a computer program is stored, wherein the above co-shooting method is implemented when the program is executed by a processor.
  • FIG. 1 is a flowchart of a co-shooting method provided in Embodiment 1 of the present disclosure;
  • FIG. 2 is a flowchart of a co-shooting method provided in Embodiment 2 of the present disclosure;
  • FIG. 3 is a schematic diagram of a first instance in a template material provided in Embodiment 2 of the present disclosure;
  • FIG. 4 is a schematic diagram of contour information of a first instance provided in Embodiment 2 of the present disclosure;
  • FIG. 5 is a schematic diagram of performing background completion on the region from which the first instance is removed, provided in Embodiment 2 of the present disclosure;
  • FIG. 6 is a schematic diagram of a user shooting interface provided in Embodiment 2 of the present disclosure;
  • FIG. 7 is a schematic diagram of determining a template material provided in Embodiment 2 of the present disclosure;
  • FIG. 8 is a flowchart of a co-shooting method provided in Embodiment 3 of the present disclosure;
  • FIG. 9 is a schematic structural diagram of a co-shooting apparatus provided in Embodiment 4 of the present disclosure;
  • FIG. 10 is a schematic diagram of a hardware structure of an electronic device provided in Embodiment 5 of the present disclosure.
  • The term “comprise” and its variants are open-ended, i.e., “including but not limited to”.
  • The term “based on” means “based at least in part on”.
  • The term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one further embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • FIG. 1 is a flowchart of a co-shooting method provided in Embodiment 1 of the present disclosure. The method is applicable to the situation where a user co-shoots according to a template material: instances in the co-shooting material are added to the template material and composited with the instance background in the template material, so as to imitate or interpret a variety of scenes or plots.
  • The method can be executed by a co-shooting apparatus, which can be implemented by software and/or hardware and integrated on an electronic device.
  • The electronic device in this embodiment may be a device with image-processing functions, such as a computer, a laptop, a server, a tablet, or a smartphone.
  • a co-shooting method provided by Embodiment 1 of the present disclosure includes:
  • In this embodiment, the template material may be an image or video that the user refers to for imitation or interpretation; for example, it may be a famous painting, a classic movie clip, or an animation with special effects.
  • The template material used for co-shooting can be specified by the user and downloaded locally from a material library by the electronic device.
  • The template material includes a first instance and an instance background. The first instance includes the object imitated or performed by the user; this object is not displayed in the co-shooting result, but is replaced or covered by the content imitated or performed by the user. For example, the first instance can be a character in a movie clip and can also include props held by the character.
  • The instance background includes objects that the user is not required to imitate or perform and that can be displayed in the co-shooting result, such as the environment around the character in the movie clip: walls, roads, rivers, and so on.
  • There may be multiple instances in the template material, and all of them can be identified and segmented by a semantic segmentation or instance segmentation algorithm. The principle can be: detect and locate a bounding box for each instance in the template material and, inside each instance's bounding box, perform pixel-level foreground-background segmentation, where the foreground is the instance and the background can be regarded as the instance background.
  • The first instance may be one or more of the multiple instances. The first instance may be determined by the electronic device according to the default configuration of the template material, or may be specified by the user.
  • The semantic segmentation algorithm is mainly applicable to the case where there is one first instance in the template material, while the instance segmentation algorithm is mainly applicable to the case where there are at least two first instances in the template material.
  • The contour information of the first instance can be extracted by a semantic segmentation or instance segmentation algorithm.
  • Contour information describes the position and shape of the first instance. For example, if the first instance in the template material is a dancing little girl, the contour information needs to indicate the girl's position in the template material and her dancing posture, so as to help the user adjust the shooting angle when co-shooting with the electronic device, position himself or herself correctly in the captured frame, and complete actions identical or similar to the girl's.
  • Contour information can be embodied in the form of text, lines, symbols, stick figures, or auxiliary lines.
  • In one embodiment, after the first instance in the template material is determined, its contour information is stored locally; when the user shoots the co-shooting material, the contour information is read and displayed in visual form in the user shooting interface, to guide the user to take position and complete the corresponding actions.
  • Since the template material has been downloaded to the electronic device, the process of performing semantic segmentation or instance segmentation on the template material can be performed offline.
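  • For illustration, below is a minimal sketch of this offline step, assuming torchvision's pretrained Mask R-CNN as the segmentation model and OpenCV for contour extraction; the disclosure does not prescribe a particular algorithm, and the function name is hypothetical.

```python
import cv2
import numpy as np
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

def extract_contours(template_bgr, score_thresh=0.7):
    """Segment instances in a template frame and return (mask, contour) pairs.

    Stands in for the patent's semantic/instance segmentation step; any
    segmentation model producing per-instance masks would do.
    """
    model = maskrcnn_resnet50_fpn(pretrained=True).eval()
    rgb = cv2.cvtColor(template_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = model([tensor])[0]

    results = []
    for mask, score in zip(out["masks"], out["scores"]):
        if score < score_thresh:
            continue
        # Binarize the soft mask: foreground = instance, rest = instance background.
        binary = (mask[0].numpy() > 0.5).astype(np.uint8) * 255
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            results.append((binary, max(contours, key=cv2.contourArea)))
    return results
```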
  • The co-shooting material can be an image or video shot by the user while imitating or performing the first instance. The content imitated or performed by the user is the second instance, which corresponds to the first instance; for example, the second instance may have the same or a similar contour as the first instance.
  • The co-shooting material can be material shot by the user in real time according to the contour information of the first instance, or finished material imported from a gallery that includes an instance whose contour is the same as or similar to that of the first instance. On this basis, the consistency between the contours of the second instance and the first instance can be guaranteed, the usability of the co-shooting material improved, and the second instance accurately composited with the instance background.
  • In one embodiment, the co-shooting material may also include a shooting background, that is, the environment the user is in when shooting the co-shooting material.
  • For example, if the user imitates the dancing girl in the template material in a bedroom, the captured frame can be used as the co-shooting material: the dancing user in the frame is the second instance, and the bedroom environment is the shooting background.
  • In this embodiment, semantic segmentation or instance segmentation needs to be performed on the co-shooting material imported by the user, to obtain the second instance and the shooting background. The second instance is used to replace or cover the first instance in the template material, so as to be composited with the instance background to achieve co-shooting. If there is only one second instance in the co-shooting material (usually the captured user himself or herself), its contour can be extracted with the semantic segmentation algorithm, which saves computation; if there are multiple second instances, the instance segmentation algorithm can identify all of them.
  • In the latter case, the segmentation result of each second instance can be associated with an instance identifier. According to the relative positional relationships between the multiple second instances and/or the contour information of each second instance, the first instance associated with each second instance in the template material can be determined; on this basis, multi-user co-shooting can be realized.
  • Since the style of user-shot co-shooting material is variable and uncertain, the semantic segmentation or instance segmentation of the co-shooting material can be performed online, which makes it convenient to flexibly invoke the related algorithms and use computing resources.
  • The second instance can be used to replace or cover the first instance in the template material, so as to be composited with the instance background to obtain the co-shooting result.
  • Adding the second instance to the region corresponding to the first instance can mean removing the first instance from the template material (the region left after removal can be vacant or blank, or can be filled according to the texture features of the instance background) and then displaying the second instance over the region corresponding to the first instance; in this case the second instance blends with the instance background to a higher degree.
  • It can also mean covering the first instance with the second instance; in this case the requirement on the consistency of the contours of the first and second instances is relatively high, and it must be ensured that the second instance completely covers the first instance.
  • Adding the second instance to the region corresponding to the first instance can also be regarded as the process of compositing the second instance with the instance background.
  • the process of adding the second instance to the template material can be performed online.
  • In one embodiment, the region corresponding to the first instance contains the contour of the first instance and is slightly larger than the first instance. When the second instance replaces or covers the first instance, it is added to this larger region, which avoids gaps between the second instance and the instance background, lets the second instance sit more naturally on the instance background, makes the edge transition after replacement smoother, and improves the visual effect of the co-shooting result.
  • In one embodiment, if there are multiple second instances in the co-shooting material, the relative positional relationships between them and/or the contour information of each second instance can determine which first instance in the template material each second instance corresponds to; each second instance is then added to the region corresponding to its first instance, realizing multi-user co-shooting.
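  • As a sketch of the compositing step itself: assuming the first instance has already been removed and the background completed, the second instance can be pasted into the slightly enlarged region with a feathered mask. The helper below is illustrative, not the patent's prescribed implementation.

```python
import cv2
import numpy as np

def composite_instance(completed_background, co_shot_frame, second_mask,
                       dilate_px=5):
    """Paste the second instance onto the completed template background.

    second_mask is a uint8 {0, 255} mask of the second instance in the
    co-shot frame; dilating it slightly and feathering the edge mirrors
    the "region larger than the first instance" idea above, avoiding
    gaps against the instance background.
    """
    kernel = np.ones((dilate_px, dilate_px), np.uint8)
    mask = cv2.dilate(second_mask, kernel)
    alpha = cv2.GaussianBlur(mask, (7, 7), 0).astype(np.float32) / 255.0
    alpha = alpha[..., None]  # (H, W, 1) so it broadcasts over BGR channels
    blended = alpha * co_shot_frame + (1.0 - alpha) * completed_background
    return blended.astype(np.uint8)
```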
  • The co-shooting method provided in this embodiment uses the contour information of the first instance to guide the user in importing the co-shooting material, improving the consistency between the contours in the co-shooting material and in the template material, thereby enabling the second instance to be composited with the instance background of the template material and improving the accuracy of co-shooting. Moreover, thanks to the guidance of the contour information, the method is applicable to a wider variety of co-shooting scenes: even when the first instance's actions are complex, its range of motion is large, or its body movements change, the usability of the co-shooting material is improved and the compositing effect and the quality of co-shooting are guaranteed.
  • FIG. 2 is a flowchart of a co-shooting method provided in Embodiment 2 of the present disclosure.
  • On the basis of the above embodiment, this embodiment describes the process of obtaining the co-shooting material and adding the second instance to the template material.
  • As shown in FIG. 2, the co-shooting method provided in Embodiment 2 of the present disclosure includes:
  • FIG. 3 is a schematic diagram of a first instance in a template material provided in Embodiment 2 of the present disclosure.
  • The template material can be an image or a video; if it is a video, the contour information of the first instance needs to be extracted frame by frame. As shown in FIG. 3, taking one frame of the template material as an example, the area enclosed by the white box contains the first instance, a character consisting of a head and upper body; the instance background mainly includes the sea and a railing.
  • FIG. 4 is a schematic diagram of contour information of a first instance provided in Embodiment 2 of the present disclosure.
  • As shown in FIG. 4, the first instance and the instance background are obtained by performing instance segmentation on the template material: the black area corresponds to the instance background, the white area corresponds to the first instance, and the boundary line between the black and white areas is the contour of the first instance. The contour information can be recorded or stored in the form of text, lines, symbols, stick figures, or auxiliary lines.
  • The contour auxiliary line identifies the position and shape of the first instance in the template material.
  • The contour auxiliary line is a line drawn around the outer edge of the first instance, and can be rendered as a dashed or solid line.
  • For example, for the first instance in FIG. 4, the contour auxiliary line can be generated as follows: according to the instance segmentation result, sample points on the boundary line between the black and white areas in FIG. 4, and starting from one sampling point, connect all sampling points in clockwise or counterclockwise order to obtain the contour auxiliary line.
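  • A minimal sketch of this generation step, assuming the instance mask from the segmentation result is available as a uint8 image (function names are hypothetical):

```python
import cv2

def auxiliary_line_points(instance_mask, step=15):
    """Sample the instance boundary and keep every `step`-th point.

    cv2.findContours already returns the boundary points in order, so
    connecting consecutive samples traverses the contour in one
    consistent (counter)clockwise direction, as described above.
    """
    contours, _ = cv2.findContours(instance_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    boundary = max(contours, key=cv2.contourArea)[:, 0, :]  # (N, 2)
    return boundary[::step]

def draw_auxiliary_line(frame, points, dashed=True):
    """Overlay the contour auxiliary line on the shooting-interface frame."""
    n = len(points)
    for i in range(n):
        if dashed and i % 2:  # skip alternate segments for a dashed look
            continue
        p1 = (int(points[i][0]), int(points[i][1]))
        p2 = (int(points[(i + 1) % n][0]), int(points[(i + 1) % n][1]))
        cv2.line(frame, p1, p2, color=(255, 255, 255), thickness=2)
    return frame
```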
  • In this embodiment, the co-shooting material is shot by the user following the contour auxiliary line.
  • The contour auxiliary line is displayed at a specific position of the shooting frame in the user shooting interface; this position is in principle consistent with the position of the first instance in the template material, with an error within a set range allowed.
  • For example, the first instance in FIG. 3 is located in the middle-right area of the template material, so in the user shooting interface the contour auxiliary line is also located in the middle-right area of the shooting frame. On this basis, the contour auxiliary line can guide the user to adjust the shooting angle so that the captured second instance (for example, the user himself or herself) lies within the contour auxiliary line, allowing the electronic device to quickly extract the second instance from the middle-right area of the frame for co-shooting.
  • Prompt information such as text, lines, symbols, and/or stick figures can also be displayed in the user shooting interface; the user starts interpreting and shooting according to this prompt information and the contour auxiliary line.
  • In one embodiment, if, in the user shooting interface, the error between the contour of the second instance and the contour auxiliary line is within the set range, the captured frame is used as the co-shooting material.
  • In the user shooting interface, the error between the contour of the second instance and the contour auxiliary line being within the set range means that the position and shape of the second instance are consistent with, or close to, the contour of the first instance (or the contour auxiliary line), so the user's shooting frame corresponds to the template material and satisfies the compositing conditions; in this case the captured frame can be used as the co-shooting material. If the position and shape of the second instance are inconsistent with, or too far from, the contour of the first instance (or the contour auxiliary line), the shooting frame cannot be accurately matched to the template material and the second instance cannot be accurately joined or composited with the instance background; prompt information can then be used to guide the user to adjust position and posture.
  • The error between the contour of the second instance and the contour auxiliary line being within the set range may mean that the number of pixels of the second instance lying outside the contour auxiliary line is lower than a first threshold, that the coincidence degree between the contour of the second instance and the auxiliary line is higher than a second threshold, that the farthest distance between corresponding pixels of the two contours is smaller than a third threshold, and so on; this embodiment does not limit it.
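  • Two of these criteria can be checked cheaply on binary masks; the thresholds below are illustrative placeholders, not values from the disclosure:

```python
import numpy as np

def within_set_range(second_mask, guide_mask,
                     max_outside_ratio=0.05, min_overlap=0.8):
    """Check whether the second instance sits inside the auxiliary line.

    - outside: fraction of second-instance pixels falling outside the
      region enclosed by the auxiliary line (first-threshold criterion);
    - overlap: IoU of the two regions (coincidence-degree criterion).
    """
    second = second_mask > 0
    guide = guide_mask > 0
    outside = np.logical_and(second, ~guide).sum() / max(second.sum(), 1)
    overlap = (np.logical_and(second, guide).sum()
               / max(np.logical_or(second, guide).sum(), 1))
    return outside <= max_outside_ratio and overlap >= min_overlap
```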
  • In one embodiment, after the co-shooting material imported by the user based on the contour information is obtained, the method further includes: performing semantic segmentation or instance segmentation on the co-shooting material to obtain the second instance.
  • This embodiment takes the case where the co-shooting material is the shooting frame in the user shooting interface as an example.
  • The second instance can be extracted from the shooting frame by a semantic segmentation or instance segmentation algorithm, and there may be one or more second instances. If there is only one second instance in the co-shooting material, the semantic segmentation algorithm suffices to extract its contour, after which it is added to the region corresponding to the first instance in the template material; if there are multiple second instances, the instance segmentation algorithm can identify all of them, and each second instance can be used to replace or cover the associated first instance in the template material, realizing multi-user co-shooting.
  • In this embodiment, the first instance is removed from the template material, for example by cutting it out with a matting algorithm.
  • The region left after removing the first instance can be vacant or blank, or can be filled according to the texture features of the instance background.
  • In one embodiment, after the first instance is removed, the method further includes: performing background completion on the vacated region according to the image features of the instance background.
  • FIG. 5 is a schematic diagram of performing background completion on the region from which the first instance has been removed, provided in Embodiment 2 of the present disclosure.
  • Background completion can be performed by an image inpainting or completion algorithm: the image features of the instance background are used to predict the features of the pixels in the vacated region, which is filled accordingly.
  • As shown in FIG. 5, the instance background mainly contains the features of the sea and the railing; accordingly, the vacated region left by removing the first instance is filled with the textures of the sea and the railing, and the filled content is essentially aligned with the instance background. This improves the compositing quality and ensures the visual coherence and consistency of the background.
  • On this basis, after the second instance is added to the region corresponding to the first instance, the transition between the second instance and the instance background is more natural and the compositing effect is better.
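  • As one concrete possibility (the disclosure only requires some inpainting or completion algorithm), OpenCV's inpainting can fill the vacated region from the surrounding instance background:

```python
import cv2

def complete_background(template_frame, first_instance_mask, dilate_px=3):
    """Fill the region vacated by the first instance from its surroundings.

    The mask is dilated a little so the instance's edge pixels are also
    regenerated; cv2.inpaint stands in for the completion algorithm.
    """
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (dilate_px, dilate_px))
    mask = cv2.dilate(first_instance_mask, kernel)
    return cv2.inpaint(template_frame, mask, 7, cv2.INPAINT_TELEA)
```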
  • The co-shooting material is shot by the user, whereas the template material is usually shot by professionals or people familiar with video production. The shooting conditions, colors, and styles of the two materials therefore usually differ, making the second instance look abrupt against the instance background with an unnatural transition.
  • In this embodiment, to improve the compositing effect of the second instance and the instance background, the color of the second instance is adjusted according to the image features of the instance background, so that the composited material looks more harmonious and natural.
  • For the co-shooting material, the second instance in each frame is compared with the instance background of the corresponding frame of the template material, and the color of the second instance is adjusted; for example, the color value of every pixel of the second instance may be adjusted.
  • Adjusting the color of the second instance according to the image features of the instance background can also be done by transferring the color of the instance background in the template material to the second instance.
  • During color transfer, the tone, filters, and/or special effects of the second instance may also be adjusted according to the image features of the instance background, so that the second instance and the instance background blend more naturally.
  • In one embodiment, the color transfer process can be performed online.
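  • A common way to realize such a transfer, offered here as an assumption rather than the patent's method, is Reinhard-style statistics matching in Lab space:

```python
import cv2
import numpy as np

def transfer_color(co_shot_frame, instance_background, second_mask):
    """Match the second instance's color statistics to the instance background.

    Shifts the mean and scales the standard deviation of each Lab channel
    of the second instance's pixels (inside second_mask) toward those of
    the instance background.
    """
    src = cv2.cvtColor(co_shot_frame, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(instance_background, cv2.COLOR_BGR2LAB).astype(np.float32)
    m = second_mask > 0
    for c in range(3):
        s_mean, s_std = src[..., c][m].mean(), src[..., c][m].std() + 1e-6
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        src[..., c][m] = (src[..., c][m] - s_mean) * (r_std / s_std) + r_mean
    out = np.clip(src, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)
```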
  • In this embodiment, to make the compositing result closer to the template material and more lifelike, a spherical harmonic lighting model is also used to perform lighting rendering on the second instance.
  • This process can also be regarded as transferring the ambient lighting of the template material to the second instance, to enhance the realism and three-dimensionality of the second instance in the co-shooting material. The ambient light around the first instance in the template material is sampled into spherical harmonic coefficients for multiple directions, which are used to restore the surrounding ambient light when rendering the second instance, simplifying the computation of modeling the environment lighting.
  • In one embodiment, the spherical harmonic modeling process can be performed offline, while the lighting rendering of the second instance may be performed online.
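  • For illustration, a sketch of order-2 spherical harmonic relighting: nine coefficients per color channel, estimated offline from the template, are evaluated against the second instance's normals online. The cosine-lobe convolution constants are assumed folded into the coefficients, and the names are hypothetical:

```python
import numpy as np

def sh_basis(normals):
    """First nine real spherical-harmonic basis functions at unit normals (N, 3)."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z ** 2 - 1.0),
        1.092548 * x * z, 0.546274 * (x ** 2 - y ** 2),
    ], axis=1)  # (N, 9)

def relight(albedo, normals, sh_coeffs):
    """Shade the second instance with the template's estimated lighting.

    albedo: (N, 3) linear RGB; normals: (N, 3) unit vectors;
    sh_coeffs: (9, 3) coefficients estimated from the template material.
    """
    irradiance = sh_basis(normals) @ sh_coeffs  # (N, 3)
    return np.clip(albedo * irradiance, 0.0, 1.0)
```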
  • This embodiment does not limit the execution sequence of S250-S270.
  • In one embodiment, the content displayed in the user shooting interface also includes the second instance and the shooting background; that is, the interface displays the frame captured while the user imitates or performs in the real shooting environment, with the second instance displayed against the real shooting background, and the user can adjust position according to the contour auxiliary line and complete the corresponding actions.
  • In this case, the compositing of the second instance with the instance background can be carried out after the shooting of the co-shooting material is finished, which reduces the computation load and central processing unit (CPU) occupancy of the electronic device and places relatively low demands on its performance.
  • Alternatively, the content displayed in the user shooting interface includes the second instance and the instance background; that is, while the user is shooting the co-shooting material, the interface displays in real time the frame composited from the second instance and the instance background of the template material.
  • In this case, the user can preview the composited effect in real time, which makes it convenient to flexibly adjust shooting position and actions.
  • Since the compositing of the second instance with the instance background proceeds synchronously with the shooting, the demands on the performance of the electronic device are relatively high.
  • In one embodiment, the template material can also be displayed in the user shooting interface; that is, besides the above shooting frame or composited frame, the template material can be displayed synchronously, which is convenient for the user to compare against.
  • FIG. 6 is a schematic diagram of a user shooting interface provided in Embodiment 2 of the present disclosure. As shown in FIG. 6, the upper part of the interface displays the template material, and the lower part displays the contour auxiliary line together with the real-time composited frame of the second instance and the instance background.
  • Following the auxiliary line, the user can adjust his or her position so as to be located within the contour auxiliary line (the area filled with white diagonal lines in FIG. 6) and complete the corresponding actions.
  • FIG. 7 is a schematic diagram of determining a template material provided in Embodiment 2 of the present disclosure. As shown in FIG. 7, the material library provides multiple template materials, which are different movie clips. The user can select one through the template selection interface and enter the user shooting interface to shoot the co-shooting material.
  • In one embodiment, before the contour information of the first instance in the template material is extracted, the method further includes: identifying instances in the template material that support co-shooting; and determining at least one first instance from the instances that support co-shooting according to user selection information.
  • There may be multiple instances in the template material that support co-shooting, and the first instance may be one or more of them.
  • The user may select the first instance through an instance selection interface.
  • For example, the template material is a movie clip with two characters, of which the user can perform only one. An identifier for each character can be displayed in the instance selection interface, for example by framing each character with a flashing box. If the user taps one of the characters, the flashing boxes of the other characters disappear, the box of the selected character turns steady, and the selected character is the first instance.
  • In one embodiment, the user may also select two characters as first instances; in this case two users are required to perform together, each acting as one of the characters.
  • The co-shooting method provided in this embodiment achieves high-quality compositing of the second instance and the instance background by removing the first instance, using the contour auxiliary line to guide the user to take position and complete the corresponding actions, and performing color transfer and lighting transfer on the second instance.
  • The user shoots the co-shooting material under the guidance of the contour auxiliary line and can flexibly adjust the shooting angle and actions, ensuring high consistency between the co-shooting material and the template material and thereby improving the accuracy and efficiency of compositing. Background completion of the region vacated by removing the first instance ensures the visual coherence and consistency of the background and improves the compositing quality. Adjusting the color of the second instance according to the image features of the instance background makes the transition between the second instance and the instance background more natural. Using the spherical harmonic lighting model to render the second instance enhances its realism and three-dimensionality in the co-shooting material. By flexibly displaying the shooting frame or the composited frame in the user shooting interface, the user can preview the compositing effect in real time and adjust shooting position and actions, while the performance requirements on the electronic device can also be met.
  • FIG. 8 is a flowchart of a co-shooting method provided in Embodiment 3 of the present disclosure.
  • On the basis of the foregoing embodiments, this Embodiment 3 describes the case where there are multiple first instances and multiple second instances.
  • In this embodiment, the number of first instances is the same as the number of second instances, and there are at least two of each.
  • The segmentation result of each first instance can be associated with an instance identifier, and so can the segmentation result of each second instance; a first instance and a second instance with the same identifier are associated with each other. According to instance identifiers, relative positional relationships between instances, and/or instance contour information, the first instance associated with each second instance in the template material can be determined, and multi-user co-shooting can be realized on this basis.
  • As shown in FIG. 8, the co-shooting method provided in Embodiment 3 of the present disclosure includes:
  • At least two second instances may be obtained by performing instance segmentation on the co-shooting material.
  • If the co-shooting material is performed jointly by multiple people, there are at least two second instances, and the at least two second instances correspond one-to-one with the at least two first instances.
  • The contour information of each first instance in the template material can be obtained by the instance segmentation algorithm, and so can the contour information of each second instance in the co-shooting material.
  • The multiple second instances correspond one-to-one with the multiple first instances; by comparing the contour information of each first instance with that of each second instance, it can be determined which first instance each second instance interprets, thereby determining the association relationship between first and second instances. For example, if there are two characters in the template material, character A standing and character B sitting on a chair, the association relationship means that the standing user captured in the co-shooting material is acting as character A and the sitting user as character B.
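  • One way to compute this association, sketched under the assumption that both sets of masks live in comparable frame coordinates, is optimal assignment on mask overlap:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_instances(first_masks, second_masks):
    """Pair each second instance with a first instance by maximizing IoU.

    Returns a dict mapping second-instance index -> first-instance index;
    centroid distance could be mixed into the cost to also use relative
    positions, as the embodiment suggests.
    """
    cost = np.zeros((len(second_masks), len(first_masks)))
    for i, s in enumerate(second_masks):
        for j, f in enumerate(first_masks):
            inter = np.logical_and(s > 0, f > 0).sum()
            union = np.logical_or(s > 0, f > 0).sum()
            cost[i, j] = -inter / max(union, 1)  # negative IoU: minimize cost
    rows, cols = linear_sum_assignment(cost)
    return dict(zip(rows.tolist(), cols.tolist()))
```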
  • In one embodiment, before each second instance is added to the region corresponding to its associated first instance, the method further includes: removing each first instance from the template material, and performing background completion on the vacated regions.
  • In one embodiment, the method further includes: performing color transfer and/or lighting transfer on each second instance in the co-shooting result.
  • In one embodiment, the co-shooting material can be shot by multiple people at the same time, or by one or more users in stages. For example, two people perform the template material simultaneously, one as character A and the other as character B, so only one co-shooting material needs to be shot and its two second instances are added to the template material separately. Alternatively, one person first performs character A to obtain a first co-shooting material and, after it is shot, a second person performs character B to obtain a second co-shooting material; in this case each co-shooting material includes one second instance, and that second instance corresponds to one first instance in the template material.
  • In the case of shooting in stages, one user can also shoot in separate takes and play multiple roles, enhancing the flexibility and fun of co-shooting.
  • According to the co-shooting method provided in this embodiment, multiple second instances can be added to the template material according to the association relationships between second and first instances, realizing multi-user co-shooting, improving the flexibility and fun of co-shooting, and meeting diverse co-shooting needs.
  • On this basis, users can experience a real movie atmosphere, perform on the same stage as other objects, and hold conversations across time and space, increasing the diversity and playability of co-shooting applications.
  • FIG. 9 is a schematic structural diagram of a co-shooting apparatus provided in Embodiment 4 of the present disclosure. For content not exhaustively described in this embodiment, please refer to the above embodiments.
  • As shown in FIG. 9, the apparatus includes: a contour extraction module 310, configured to extract contour information of a first instance in a template material, the template material including the first instance and an instance background; a material acquisition module 320, configured to acquire a co-shooting material imported by the user based on the contour information, the co-shooting material including a second instance corresponding to the first instance; and a co-shooting module 330, configured to add the second instance to a region corresponding to the first instance in the template material to obtain a co-shooting result, wherein the co-shooting result includes the second instance and the instance background.
  • The co-shooting apparatus of this embodiment uses the contour information of the first instance to guide the user in importing the co-shooting material, improving the consistency between the contours in the co-shooting material and in the template material, thereby compositing the second instance with the instance background of the template material and improving the accuracy of co-shooting.
  • the material acquisition module 320 includes:
  • an auxiliary line generation unit, configured to generate the contour auxiliary line of the first instance according to the contour information; and
  • an auxiliary line display unit, configured to display the contour auxiliary line in the user shooting interface, so as to guide the user to shoot according to the contour auxiliary line to obtain the co-shooting material.
  • the material acquisition module 320 also includes:
  • a material determination unit, configured to use the captured frame as the co-shooting material if, in the user shooting interface, the error between the contour of the second instance and the contour auxiliary line is within a set range.
  • In one embodiment, the co-shooting module 330 is configured to:
  • the device also includes:
  • a background completion module, configured to perform background completion, according to the image features of the instance background, on the region vacated by removing the first instance.
  • the device also includes:
  • a segmentation module, configured to perform semantic segmentation or instance segmentation on the co-shooting material, after the co-shooting material imported by the user based on the contour information is acquired, to obtain the second instance.
  • the device also includes:
  • a color adjustment unit configured to adjust the color of the second instance according to the image characteristics of the instance background.
  • In one embodiment, the apparatus also includes: a lighting rendering module, configured to perform lighting rendering on the second instance by using the spherical harmonic lighting model.
  • the content displayed on the user shooting interface further includes the second instance and the shooting background; or, the content displayed on the user shooting interface further includes the second instance and the instance background.
  • the device also includes:
  • an instance identification module, configured to identify instances in the template material that support co-shooting before the contour information of the first instance in the template material is extracted; and an instance determination module, configured to determine at least one first instance from the instances that support co-shooting according to user selection information.
  • In one embodiment, the co-shooting module 330 includes: a relationship determination unit, configured to determine the association relationships between the at least two first instances and the at least two second instances according to the contour information of each first instance in the template material and the contour information of each second instance in the co-shooting material; and an instance adding unit, configured to add each second instance to the region corresponding to the associated first instance in the template material.
  • the above co-shooting device can execute the co-shooting method provided by any embodiment of the present disclosure, and has corresponding functional modules and effects for executing the method.
  • FIG. 10 is a schematic diagram of a hardware structure of an electronic device provided by Embodiment 5 of the present disclosure.
  • FIG. 10 shows a schematic structural diagram of an electronic device 500 suitable for implementing the embodiments of the present disclosure.
  • The electronic device 500 in the embodiments of the present disclosure includes, but is not limited to, devices with image-processing functions such as a computer, a laptop, a server, a tablet, or a smartphone.
  • the electronic device 500 shown in FIG. 10 is only an example, and should not limit the functions and scope of use of the embodiments of the present disclosure.
  • The electronic device 500 may include one or more processing devices 501 (such as a central processing unit or a graphics processing unit), which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503.
  • The one or more processing devices 501 implement the co-shooting method provided in the present disclosure.
  • In the RAM 503, various programs and data necessary for the operation of the electronic device 500 are also stored.
  • the processing device 501, ROM 502, and RAM 503 are connected to each other through a bus 505.
  • An input/output (I/O) interface 504 is also connected to the bus 505.
  • The following can be connected to the I/O interface 504: an input device 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, or gyroscope; an output device 507 including, for example, a liquid crystal display (LCD), speaker, or vibrator; a storage device 508 including, for example, a magnetic tape or hard disk, configured to store one or more programs; and a communication device 509.
  • the communication means 509 may allow the electronic device 500 to perform wireless or wired communication with other devices to exchange data.
  • Although FIG. 10 shows the electronic device 500 with various components, it is not required to implement or possess all of the components shown; more or fewer components may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, where the computer program includes program codes for executing the methods shown in the flowcharts.
  • the computer program may be downloaded and installed from a network via communication means 509, or from storage means 508, or from ROM 502.
  • When the computer program is executed by the processing device 501, the above functions defined in the methods of the embodiments of the present disclosure are performed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • A computer-readable storage medium is, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof.
  • Examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, RAM, ROM, an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing.
  • A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • the program code contained on the computer readable medium can be transmitted by any appropriate medium, including but not limited to: electric wire, optical cable, radio frequency (Radio Frequency, RF), etc., or any suitable combination of the above.
  • In some embodiments, the client and the server can communicate using any currently known or future-developed network protocol, such as the Hypertext Transfer Protocol (HTTP), and can be interconnected with digital data communication (e.g., a communication network) in any form or medium.
  • Examples of communication networks include local area networks (LANs), wide area networks (WANs), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any networks currently known or developed in the future.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist independently without being incorporated into the electronic device.
  • The above computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device: extracts contour information of a first instance in a template material, the template material including the first instance and an instance background; acquires a co-shooting material imported by the user based on the contour information, the co-shooting material including a second instance corresponding to the first instance; and adds the second instance to a region corresponding to the first instance in the template material to obtain a co-shooting result, wherein the co-shooting result includes the second instance and the instance background.
  • Computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages, such as Java, Smalltalk, and C++, as well as conventional procedural programming languages, such as the “C” language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • The remote computer can be connected to the user's computer through any kind of network, including a LAN or WAN, or it can be connected to an external computer (e.g., via the Internet using an Internet service provider).
  • Each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented by software or by hardware.
  • The name of a unit does not, in some cases, constitute a limitation on the unit itself; for example, the first acquisition unit may also be described as “a unit for acquiring at least two Internet Protocol addresses”.
  • Exemplary types of hardware logic components include: field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Examples of machine-readable storage media would include one or more wire-based electrical connections, a portable computer disk, a hard drive, RAM, ROM, EPROM or flash memory, an optical fiber, a CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • Example 1 provides a co-shooting method, including: extracting contour information of a first instance in a template material, the template material including the first instance and an instance background; acquiring a co-shooting material imported by a user based on the contour information, the co-shooting material including a second instance corresponding to the first instance; and adding the second instance to a region corresponding to the first instance in the template material to obtain a co-shooting result, wherein the co-shooting result includes the second instance and the instance background.
  • Example 2: according to the method of Example 1, acquiring the co-shooting material imported by the user based on the contour information includes: generating a contour auxiliary line of the first instance according to the contour information; and displaying the contour auxiliary line in the user shooting interface, so as to guide the user to shoot according to the contour auxiliary line to obtain the co-shooting material.
  • Example 3: according to the method of Example 1, guiding the user to shoot according to the contour auxiliary line to obtain the co-shooting material includes: using the captured frame as the co-shooting material if, in the user shooting interface, the error between the contour of the second instance and the contour auxiliary line is within a set range.
  • Example 4: according to the method of Example 1, adding the second instance to the region corresponding to the first instance in the template material includes: removing the first instance from the template material, and displaying the second instance over the region corresponding to the first instance.
  • Example 5: the method according to Example 1, further including: performing background completion, according to the image features of the instance background, on the region vacated by removing the first instance.
  • Example 6: the method according to Example 1, further including: after acquiring the co-shooting material imported by the user based on the contour information, performing semantic segmentation or instance segmentation on the co-shooting material to obtain the second instance.
  • Example 7: the method according to Example 1, further including: adjusting the color of the second instance according to the image features of the instance background.
  • Example 8: the method according to Example 1, further including: performing lighting rendering on the second instance by using a spherical harmonic lighting model.
  • Example 9: according to the method of Example 1, the content displayed in the user shooting interface further includes the second instance and the shooting background; or, the content displayed in the user shooting interface further includes the second instance and the instance background.
  • Example 10: according to the method of Example 1, before the contour information of the first instance in the template material is extracted, the method further includes: identifying instances in the template material that support co-shooting; and determining at least one first instance from the identified instances that support co-shooting according to user selection information.
  • Example 11: according to the method of Example 1, the number of first instances is the same as the number of second instances, and there are at least two of each; adding the second instance to the region corresponding to the first instance in the template material includes: determining the association relationships between the at least two first instances and the at least two second instances according to the contour information of each first instance in the template material and the contour information of each second instance in the co-shooting material; and adding each second instance to the region corresponding to the associated first instance in the template material.
  • Example 12 provides a co-shooting apparatus, including:
  • a contour extraction module, configured to extract contour information of a first instance in a template material, the template material including the first instance and an instance background;
  • a material acquisition module, configured to acquire a co-shooting material imported by a user based on the contour information, the co-shooting material including a second instance corresponding to the first instance; and
  • a co-shooting module, configured to add the second instance to a region corresponding to the first instance in the template material to obtain a co-shooting result, wherein the co-shooting result includes the second instance and the instance background.
  • Example 13 provides an electronic device, including:
  • one or more processors; and
  • a storage device configured to store one or more programs;
  • wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the co-shooting method according to any one of Examples 1-11.
  • Example 14 provides a computer-readable medium, on which a computer program is stored, wherein the co-shooting method according to any one of Examples 1-11 is implemented when the program is executed by a processor.

Abstract

Disclosed herein are a co-shooting method and apparatus, an electronic device, and a readable medium. The co-shooting method includes: extracting contour information of a first instance in a template material, the template material including the first instance and an instance background; acquiring a co-shooting material imported by a user based on the contour information, the co-shooting material including a second instance corresponding to the first instance; and adding the second instance to a region corresponding to the first instance in the template material to obtain a co-shooting result, wherein the co-shooting result includes the second instance and the instance background.

Description

Co-shooting method and apparatus, electronic device, and readable medium
This application claims priority to Chinese Patent Application No. 202111027906.9, filed with the Chinese Patent Office on September 2, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of image processing, and for example to a co-shooting method and apparatus, an electronic device, and a readable medium.
Background
With the development of social, photography, and special-effects software, a large number of entertaining and amusing applications have emerged. By shooting photos or video clips, users can imitate characters in classic photos or film clips, reproduce specific scenes and plots, and experience the fun of performing and interpreting. Such applications require the user's imitation to be composited with the original photo or clip, which is called co-shooting. For example, for movie-character replacement, a user can shoot a video of himself or herself, performing the movie character's body movements, expressions, and lines during shooting, and then replace the original character in the movie clip with the user, so that the user seems to be part of the scene and the shooting result is more vivid and closer to the movie clip.
However, users' shooting environments are complex and diverse, and it is difficult for a user to interpret the original character precisely; combined with differences in camera distance and position and variations in the actions of each imitation, all of this makes it difficult to composite the original video with the user's footage. In co-shooting applications, the characters or scenes available for imitation are relatively simple, basically involving only facial expressions or small head movements. If the range of motion is large, or body movements change, high-quality co-shooting cannot be achieved: the user's footage becomes disconnected from the original material, transitions are unnatural, and the poor compositing quality degrades the user experience.
Summary
The present disclosure provides a co-shooting method and apparatus, an electronic device, and a readable medium, so as to improve the consistency between the co-shooting material and the template material and improve the accuracy of co-shooting.
The present disclosure provides a co-shooting method, including:
extracting contour information of a first instance in a template material, the template material including the first instance and an instance background;
acquiring a co-shooting material imported by a user based on the contour information, the co-shooting material including a second instance corresponding to the first instance; and
adding the second instance to a region corresponding to the first instance in the template material to obtain a co-shooting result, wherein the co-shooting result includes the second instance and the instance background.
The present disclosure also provides a co-shooting apparatus, including:
a contour extraction module, configured to extract contour information of a first instance in a template material, the template material including the first instance and an instance background;
a material acquisition module, configured to acquire a co-shooting material imported by a user based on the contour information, the co-shooting material including a second instance corresponding to the first instance; and
a co-shooting module, configured to add the second instance to a region corresponding to the first instance in the template material to obtain a co-shooting result, wherein the co-shooting result includes the second instance and the instance background.
The present disclosure also provides an electronic device, including:
one or more processors; and
a storage device configured to store one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the co-shooting method described above.
The present disclosure also provides a computer-readable medium on which a computer program is stored, wherein the co-shooting method described above is implemented when the program is executed by a processor.
Brief Description of the Drawings
FIG. 1 is a flowchart of a co-shooting method provided in Embodiment 1 of the present disclosure;
FIG. 2 is a flowchart of a co-shooting method provided in Embodiment 2 of the present disclosure;
FIG. 3 is a schematic diagram of a first instance in a template material provided in Embodiment 2 of the present disclosure;
FIG. 4 is a schematic diagram of contour information of a first instance provided in Embodiment 2 of the present disclosure;
FIG. 5 is a schematic diagram of performing background completion on the region from which the first instance is removed, provided in Embodiment 2 of the present disclosure;
FIG. 6 is a schematic diagram of a user shooting interface provided in Embodiment 2 of the present disclosure;
FIG. 7 is a schematic diagram of determining a template material provided in Embodiment 2 of the present disclosure;
FIG. 8 is a flowchart of a co-shooting method provided in Embodiment 3 of the present disclosure;
FIG. 9 is a schematic structural diagram of a co-shooting apparatus provided in Embodiment 4 of the present disclosure;
FIG. 10 is a schematic diagram of a hardware structure of an electronic device provided in Embodiment 5 of the present disclosure.
Detailed Description
Embodiments of the present disclosure are described below with reference to the drawings. Although some embodiments of the present disclosure are shown in the drawings, the present disclosure can be implemented in many forms; these embodiments are provided for understanding the present disclosure. The drawings and embodiments of the present disclosure are for exemplary purposes only.
The multiple steps described in the method embodiments of the present disclosure can be executed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit some of the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term “comprise” and its variants as used herein are open-ended, i.e., “including but not limited to”. The term “based on” means “based at least in part on”. The term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one further embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
Concepts such as “first” and “second” mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not used to limit the order of the functions they perform or their interdependence.
The names of the messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of these messages or information.
In the following embodiments, optional features and examples are provided in each embodiment; multiple features described in an embodiment can be combined to form multiple alternative solutions, and each numbered embodiment should not be regarded as only one technical solution. In addition, where no conflict arises, the embodiments of the present disclosure and the features therein can be combined with one another.
Embodiment 1
FIG. 1 is a flowchart of a co-shooting method provided in Embodiment 1 of the present disclosure. The method is applicable to the situation where a user co-shoots according to a template material: instances in the co-shooting material are added to the template material and composited with the instance background in the template material, so as to imitate or interpret a variety of scenes or plots. The method can be executed by a co-shooting apparatus, which can be implemented by software and/or hardware and integrated on an electronic device. The electronic device in this embodiment may be a device with image-processing functions, such as a computer, a laptop, a server, a tablet, or a smartphone.
As shown in FIG. 1, the co-shooting method provided in Embodiment 1 of the present disclosure includes:
S110: extracting contour information of a first instance in a template material, the template material including the first instance and an instance background.
In this embodiment, the template material may be an image or video that the user refers to for imitation or interpretation, for example a famous painting, a classic movie clip, or an animation with special effects. The template material used for co-shooting can be specified by the user and downloaded locally from a material library by the electronic device. The template material includes a first instance and an instance background. The first instance includes the object imitated or performed by the user; this object is not displayed in the co-shooting result but is replaced or covered by the content imitated or performed by the user. For example, the first instance can be a character in a movie clip and can also include props held by the character. The instance background includes objects that the user is not required to imitate or perform and that can be displayed in the co-shooting result, such as the environment around the character in the movie clip: walls, roads, rivers, and so on.
There may be multiple instances in the template material, and all of them can be identified and segmented by a semantic segmentation or instance segmentation algorithm. The principle can be: detect and locate a bounding box for each instance in the template material and, inside each instance's bounding box, perform pixel-level foreground-background segmentation, where the foreground is the instance and the background can be regarded as the instance background. The first instance may be one or more of the multiple instances, and may be determined by the electronic device according to the default configuration of the template material or specified by the user. The semantic segmentation algorithm is mainly applicable to the case where there is one first instance in the template material, while the instance segmentation algorithm is mainly applicable to the case where there are at least two first instances.
The contour information of the first instance can be extracted by a semantic segmentation or instance segmentation algorithm. Contour information describes the position and shape of the first instance. For example, if the first instance in the template material is a dancing little girl, the contour information needs to indicate the girl's position in the template material and her dancing posture, so as to help the user adjust the shooting angle when co-shooting with the electronic device, position himself or herself correctly in the captured frame, and complete actions identical or similar to the girl's. Contour information can be embodied in the form of text, lines, symbols, stick figures, or auxiliary lines.
In one embodiment, after the first instance in the template material is determined, its contour information is stored locally; when the user shoots the co-shooting material, the contour information is read and displayed in visual form in the user shooting interface, to guide the user to take position and complete the corresponding actions. Since the template material has been downloaded to the electronic device, the semantic segmentation or instance segmentation of the template material can be performed offline.
S120、获取用户基于所述轮廓信息导入的合拍素材,所述合拍素材包括与所述第一实例对应的第二实例。
合拍素材可以为用户模仿或演绎第一实例而拍摄的图像或视频,其中,用户模仿或演绎的内容为第二实例,第二实例与第一实例相对应,例如可以体现在第二实例与第一实例的轮廓相同或相似。合拍素材可以是用户根据第一实例的轮廓信息实时拍摄得到的素材,也可以是从图库中导入的已完成拍摄的素材,该已完成拍摄的素材中包括与第一实例的轮廓相同或相似轮廓的实例。在此基 础上可以保证第二实例与第一实例轮廓的一致性,提高合拍素材的可用性,也便于将第二实例与实例背景准确地合成。
一实施例中,合拍素材还可以包括拍摄背景,即用户拍摄合拍素材时所处的环境。例如,用户在卧室模仿模板素材中的小女孩跳舞并进行拍摄,则可以将拍摄画面作为合拍素材,拍摄画面中跳舞的用户即为第二实例,拍摄画面中卧室环境即为拍摄背景。
本实施例中,对于用户导入的合拍素材需要进行语义分割或实例分割,以得到第二实例和拍摄背景,其中,第二实例用于替换或覆盖模板素材中的第一实例,从而与实例背景进行合成以实现合拍。如果合拍素材中仅有一个第二实例(通常为拍摄到的用户本人),则利用语义分割算法提取第二实例的轮廓即可,可节省计算量;合拍素材中也可能有多个第二实例,则利用实例分割算法,可以识别出合拍素材中所有第二实例,这种情况下,可以将每个第二实例的分割结果和实例标识关联起来,根据多个第二实例之间的相对位置关系,和/或每个第二实例的轮廓信息,可以确定每个第二实例在模板素材中关联的第一实例,在此基础上实现多用户的合拍。
In an embodiment, since the style of user-shot co-shooting material is highly variable and uncertain, the semantic or instance segmentation of the co-shooting material may be performed online, making it convenient to invoke the relevant algorithms and computing resources flexibly.
S130: add the second instance to the region of the template material corresponding to the first instance to obtain a co-shooting result, where the co-shooting result includes the second instance and the instance background.
The second instance may replace or cover the first instance in the template material and is composited with the instance background to obtain the co-shooting result. Adding the second instance to the region corresponding to the first instance may mean removing the first instance from the template material (the vacated region may be left empty or blank, or be filled according to the texture features of the instance background) and then displaying the second instance in that region, in which case the second instance blends better with the instance background. It may also mean covering the first instance with the second instance, which imposes a stricter requirement on the consistency of the two contours, since the second instance must completely cover the first instance. Adding the second instance to the region corresponding to the first instance may also be regarded as the process of compositing the second instance with the instance background.
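A minimal compositing sketch under the "remove then fill" variant follows, assuming the second instance has already been aligned to the first instance's region; the feathering of the mask edge is one simple, assumed way to obtain the smoother transition discussed here, not the disclosed implementation.

```python
import cv2
import numpy as np

def composite(template_bg, second_inst, inst_mask):
    """Overlay the second instance onto the (completed) instance background."""
    # Feather the binary mask so the edge blends softly into the background.
    alpha = cv2.GaussianBlur(inst_mask.astype(np.float32), (9, 9), 0)[..., None]
    out = alpha * second_inst.astype(np.float32) + \
          (1.0 - alpha) * template_bg.astype(np.float32)
    return out.astype(np.uint8)
```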
In an embodiment, the process of adding the second instance to the template material may be performed online.
In an embodiment, the region corresponding to the first instance encloses the contour of the first instance. In this case the region is somewhat larger than the first instance, so that when the second instance replaces or covers the first instance it is added to the larger region. This avoids gaps between the second instance and the instance background, lets the second instance sit more naturally on the instance background, makes the edge transitions after replacement smoother, and improves the visual quality of the co-shooting result for the user.
In an embodiment, if the co-shooting material contains multiple second instances, then according to the relative positions of the second instances and/or the contour information of each second instance, the first instance in the template material to which each second instance corresponds can be determined, and each second instance is accordingly added to the region of the template material corresponding to its first instance, achieving multi-user co-shooting.
In the co-shooting method provided by this embodiment, the contour information of the first instance guides the user in importing the co-shooting material, improving the consistency between the instance contours of the co-shooting material and the template material, thereby enabling the second instance to be composited with the instance background of the template material and improving co-shooting accuracy. Owing to this contour guidance, the method suits a wider variety of co-shooting scenarios: even when the first instance performs complex movements, moves over a large range, or changes body posture, the usability of the co-shooting material is improved and the compositing effect and co-shooting quality are ensured.
Embodiment 2
FIG. 2 is a flowchart of a co-shooting method according to Embodiment 2 of the present disclosure. On the basis of the embodiment above, Embodiment 2 describes the process of acquiring the co-shooting material and adding the second instance to the template material.
As shown in FIG. 2, the co-shooting method provided in Embodiment 2 of the present disclosure includes the following steps.
S210: extract contour information of a first instance in template material, where the template material includes the first instance and an instance background.
FIG. 3 is a schematic diagram of a first instance in template material according to Embodiment 2 of the present disclosure. The template material may be an image or a video; if it is a video, the contour information of the first instance needs to be extracted frame by frame. As shown in FIG. 3, taking one frame of the template material as an example, the region enclosed by the white box contains the first instance, a person comprising a head and upper body; the instance background mainly consists of the sea and a railing.
FIG. 4 is a schematic diagram of contour information of a first instance according to Embodiment 2 of the present disclosure. As shown in FIG. 4, instance segmentation of the template material yields the first instance and the instance background: the black region corresponds to the instance background, the white region corresponds to the first instance, and the black-white boundary is the contour of the first instance. The contour information may be recorded or stored as text, lines, symbols, stick figures, or auxiliary lines.
S220: generate a contour auxiliary line of the first instance according to the contour information.
The contour auxiliary line identifies the position and pose of the first instance in the template material. It is a line drawn around the outer edge of the first instance and may be rendered dashed or solid. Illustratively, for the first instance in FIG. 4, the auxiliary line may be generated as follows: based on the instance segmentation result, sample points on the boundary between the black and white regions in FIG. 4, then, starting from one sampled point, connect all sampled points in clockwise or counterclockwise order to obtain the contour auxiliary line.
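A minimal sketch of this sampling-and-connecting step is given below, assuming the binary instance mask of FIG. 4 as input and OpenCV for boundary tracing; the sampling step and drawing style are illustrative choices, not part of the disclosure.

```python
import cv2
import numpy as np

def contour_guide(instance_mask, step=15):
    """Sample points along the instance outline and join them into a guide line."""
    contours, _ = cv2.findContours(instance_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    outline = max(contours, key=cv2.contourArea)  # largest contour = the instance
    sampled = outline[::step]                     # keep every step-th boundary point
    canvas = np.zeros_like(instance_mask)
    # findContours traces the boundary in order, so joining successive samples
    # reproduces the clockwise/counterclockwise walk described above.
    cv2.polylines(canvas, [sampled], isClosed=True, color=255, thickness=2)
    return canvas, sampled
```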
S230: display the contour auxiliary line in the user shooting interface to guide the user to shoot the co-shooting material according to the contour auxiliary line.
In this embodiment, the co-shooting material is shot by the user following the contour auxiliary line. The auxiliary line is displayed at a specific position in the capture frame of the user shooting interface; in principle this position coincides with the position of the first instance in the template material, with an error within a set range allowed. For example, the first instance in FIG. 3 lies in the middle-right area of the template material, so in the user shooting interface the auxiliary line also lies in the middle-right area of the capture frame. On this basis, the auxiliary line guides the user to adjust the shooting angle so that the captured second instance (for example, the user) lies within the auxiliary line, allowing the electronic device to quickly extract the second instance from the middle-right area of the frame for co-shooting. The user shooting interface may also display prompts such as text, lines, symbols, and/or stick figures; the user starts performing and shooting according to these prompts and the auxiliary line.
In an embodiment, if, in the user shooting interface, the error between the contour of the second instance and the contour auxiliary line is within the set range, the captured frame is used as the co-shooting material.
If, in the user shooting interface, the error between the contour of the second instance and the auxiliary line is within the set range, that is, the position and pose of the second instance match or approximate the contour (or auxiliary line) of the first instance, the user's captured frame corresponds to the template material and compositing is possible; the frame can then be used as the co-shooting material. If the position and pose of the second instance do not match, or differ too much from, the contour (or auxiliary line) of the first instance, the captured frame cannot be aligned accurately with the template material and the second instance cannot be joined or composited accurately with the instance background; prompts may then guide the user to adjust position and pose. The error being within the set range may mean that the number of pixels of the second instance lying outside the auxiliary line is below a first threshold, that the overlap between the contour of the second instance and the auxiliary line is above a second threshold, or that the farthest distance between corresponding pixels of the two contours is below a third threshold, among others; this embodiment does not limit this.
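The first two of these criteria could be checked, for example, as follows; the mask inputs and all threshold values are assumptions for illustration, since the disclosure leaves their concrete values open.

```python
import numpy as np

def within_tolerance(inst_mask, guide_region, max_outside=800, min_iou=0.85):
    """Test whether the captured second instance sits close enough to the guide.

    inst_mask: binary mask of the second instance in the capture frame.
    guide_region: binary mask of the region enclosed by the auxiliary line.
    """
    inst = inst_mask.astype(bool)
    guide = guide_region.astype(bool)
    outside = np.count_nonzero(inst & ~guide)   # pixels beyond the guide line
    union = np.count_nonzero(inst | guide)
    iou = np.count_nonzero(inst & guide) / union if union else 0.0
    return outside < max_outside and iou > min_iou
```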
In an embodiment, after the co-shooting material imported by the user based on the contour information is acquired, the method further includes: performing semantic segmentation or instance segmentation on the co-shooting material to obtain the second instance.
This embodiment takes the capture frame of the user shooting interface as the co-shooting material. Using a semantic or instance segmentation algorithm, one or more second instances can be extracted from the frame. If the co-shooting material contains only one second instance, extracting its contour with a semantic segmentation algorithm suffices, after which it can be added to the region of the template material corresponding to the first instance. If the co-shooting material contains multiple second instances, an instance segmentation algorithm can identify all of them; in that case each second instance is used to replace or cover its associated first instance in the template material, enabling multi-user co-shooting.
S240: remove the first instance from the template material.
In this embodiment, the first instance may be removed from the template material by a matting algorithm. The vacated region may be left empty or blank, or be filled according to the texture features of the instance background.
In an embodiment, after the first instance is removed, the method further includes: performing background completion on the vacated region according to the image features of the instance background.
FIG. 5 is a schematic diagram of background completion of the region from which the first instance has been removed, according to Embodiment 2 of the present disclosure. Background completion may use an image inpainting or completion algorithm, which predicts the pixels of the vacated region from the image features of the instance background and fills the region accordingly. As shown in FIG. 5, the instance background mainly features the sea and a railing, so the vacated region is filled with sea and railing textures, and the filled content is essentially aligned with the instance background, improving compositing quality and ensuring the visual continuity and consistency of the background. On this basis, after the second instance is added to the region corresponding to the first instance, the transition between the second instance and the instance background is more natural and the compositing effect is better.
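As one concrete, assumed realization of this completion step, OpenCV's diffusion-based inpainting can fill the vacated region from the surrounding texture; a learning-based inpainting model would typically produce higher-quality fills, and this sketch is not the disclosed implementation.

```python
import cv2
import numpy as np

def complete_background(frame_bgr, inst_mask):
    """Fill the hole left by the removed first instance from surrounding texture."""
    # Dilate slightly so the seam around the removed instance is repainted too.
    hole = cv2.dilate(inst_mask * 255, np.ones((7, 7), np.uint8))
    return cv2.inpaint(frame_bgr, hole, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
```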
S250: add the second instance to the region of the template material corresponding to the first instance.
S260: adjust the color of the second instance according to the image features of the instance background.
The co-shooting material is shot by the user, whereas the template material is usually shot by professionals or people familiar with video production; shooting conditions, color, and style therefore usually differ between the two, making the second instance look abrupt against the instance background with unnatural transitions. In this embodiment, to improve the compositing of the second instance with the instance background, the color of the second instance is adjusted according to the image features of the instance background so that the composited material looks more harmonious and natural. For the co-shooting material, the second instance in each frame is compared against the instance background of the corresponding frame of the template material and its color is adjusted, for example by adjusting the color value of every pixel of the second instance.
In this embodiment, adjusting the color of the second instance according to the image features of the instance background may also be implemented as transferring the color of the instance background of the template material to the second instance. During color transfer, the tone, filter, and/or effects of the second instance may also be adjusted according to the image features of the instance background, so that the second instance blends more naturally with the instance background. In an embodiment, the color transfer process may be performed online.
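One common way to implement such a transfer, shown here purely as a sketch, is Reinhard-style statistics matching in the Lab color space: the masked pixels of the second instance are shifted toward the mean and spread of the instance background. The function and its inputs are assumptions, not the disclosed implementation.

```python
import cv2
import numpy as np

def transfer_color(capture_bgr, background_bgr, inst_mask):
    """Shift the second instance's color statistics toward the template background."""
    src = cv2.cvtColor(capture_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    m = inst_mask.astype(bool)
    for c in range(3):                      # match mean/std per Lab channel
        chan = src[..., c]                  # view into src, writes propagate
        s_mu, s_sd = chan[m].mean(), chan[m].std() + 1e-6
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        chan[m] = (chan[m] - s_mu) * (r_sd / s_sd) + r_mu
    return cv2.cvtColor(np.clip(src, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)
```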
S270: compute spherical harmonic lighting coefficients of the first instance from the template material, and estimate the normal directions corresponding to the first instance.
In this embodiment, to make the compositing result closer to and more faithful to the template material, the second instance is further relit using a spherical harmonic lighting model. This process may also be regarded as transferring the ambient lighting of the template material to the second instance, enhancing the realism and three-dimensionality of the second instance in the co-shooting material.
By sampling the ambient light around the first instance in the template material into spherical harmonic coefficients for multiple directions, the ambient light around the second instance can be reconstructed when the second instance is relit, simplifying the computation of ambient-light modeling. In an embodiment, the spherical harmonic modeling may be performed offline.
S280: perform lighting rendering on the second instance according to the spherical harmonic coefficients and the normal directions.
In this embodiment, the ambient lighting of the template material is modeled to obtain the spherical harmonic coefficients describing it, and the normal directions of the first instance are estimated from the template image and the segmented first instance. From the coefficients and normals, the distribution or depth of lighting intensity along the normal directions of the first instance can be analyzed, and the second instance is relit accordingly, filling in light on the second instance of the co-shooting result from multiple directions. In an embodiment, the lighting rendering of the second instance may be performed online.
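The following sketch shows the shape of such a relighting step for order-0/1 spherical harmonics; the coefficient values, the normal map, and the clamping range are all assumptions, and real systems usually use nine (order-2) coefficients per color channel rather than the four shown.

```python
import numpy as np

SH_CONSTS = np.array([0.282095, 0.488603, 0.488603, 0.488603])  # Y00, Y1-1, Y10, Y11

def sh_shading(normals, coeffs):
    """Per-pixel irradiance from order-0/1 SH coefficients of the template light.

    normals: HxWx3 unit normal map estimated for the instance.
    coeffs: length-4 SH coefficients sampled from the template material.
    """
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    basis = np.stack([np.ones_like(nx), ny, nz, nx], axis=-1) * SH_CONSTS
    return basis @ coeffs                       # HxW relative light intensity

def relight(instance_bgr, shading):
    """Modulate the second instance by the template's shading field."""
    s = np.clip(shading, 0.2, 1.8)[..., None]   # avoid extreme darkening/blow-out
    return np.clip(instance_bgr.astype(np.float32) * s, 0, 255).astype(np.uint8)
```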
On the above basis, the final co-shooting result can be output from the composite of the color- and lighting-transferred second instance with the instance background.
This embodiment does not limit the execution order of S250-S280. For example, in some scenarios the color transfer and/or lighting transfer may be applied to the second instance first, and the transferred second instance is then added to the template material and composited with the instance background to obtain the co-shooting result.
In an embodiment, the content displayed in the user shooting interface further includes the second instance and the shooting background; that is, the interface shows the capture frame of the user imitating or performing in the real shooting environment, with the second instance displayed against the real shooting background, and the user adjusts position and completes the movements according to the contour auxiliary line. In this case the compositing of the second instance with the instance background can be carried out after the co-shooting material has been shot, which reduces the computation load and central processing unit (CPU) usage of the electronic device and imposes relatively low performance requirements on it.
Alternatively, the content displayed in the user shooting interface further includes the second instance and the instance background; that is, while the user shoots the co-shooting material, the interface displays in real time the composite of the second instance with the instance background of the template material. In this case the user can preview the compositing effect in real time, which makes it easy to adjust shooting position and movements flexibly; the compositing of the second instance with the instance background proceeds in step with shooting, the computation load is higher, and the performance requirements on the electronic device are relatively high.
In an embodiment, the user shooting interface may also display the template material; that is, besides the capture frame or composite frame above, the template material can be displayed synchronously for the user's reference. FIG. 6 is a schematic diagram of a user shooting interface according to Embodiment 2 of the present disclosure. As shown in FIG. 6, the upper half of the interface displays the template material, and the lower half displays the contour auxiliary line together with the real-time composite of the second instance and the instance background; following the auxiliary line, the user can adjust position so as to stay within the auxiliary line (the white hatched region in FIG. 6) and complete the corresponding movements.
In an embodiment, the material library contains multiple templates available for co-shooting, and the template material can be selected by the user through a template selection interface. FIG. 7 is a schematic diagram of determining template material according to Embodiment 2 of the present disclosure. As shown in FIG. 7, the library offers multiple template materials, each a different movie clip; the user selects one through the template selection interface and enters the user shooting interface to shoot the co-shooting material.
In an embodiment, before the contour information of the first instance in the template material is extracted, the method further includes: identifying instances in the template material that support co-shooting; and determining at least one first instance from the co-shooting-capable instances according to user selection information.
The template material may contain multiple instances that support co-shooting, and the first instance may be one or more of them. The user can select the first instance through an instance selection interface. For example, suppose the template material is a movie clip featuring two characters. The user may choose to perform only one of them: an identifier can be shown for each character in the instance selection interface, for instance a blinking frame around each character; when the user taps one character, the other characters' frames disappear, the selected character's frame turns steadily lit, and that character becomes the first instance. The user may also select both characters as first instances, in which case two users perform together, each playing one character.
Taking the case of a single first instance as an example, the process by which the electronic device performs co-shooting is briefly described below:
1) determine the template material selected by the user through the template selection interface, for example a movie the user wants to perform; 2) extract the contour information of the movie character (the first instance) with an instance segmentation algorithm, and remove the character from the template material; 3) complete the vacated region with an image inpainting algorithm according to the image features of the instance background of the template material; 4) enter the user shooting interface and display the contour auxiliary line generated from the contour information, to guide the user to take position and complete the movements, and use the capture frame as the co-shooting material; 5) extract the second instance from the co-shooting material with an instance segmentation algorithm; 6) apply color transfer and/or lighting transfer to the second instance according to the instance background of the template material; 7) composite the transferred second instance with the instance background of the template material and output the complete performance clip, that is, the co-shooting result.
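Reusing the helper sketches from the earlier passages (segment_instance, contour_guide, complete_background, transfer_color, composite, all of which are illustrative assumptions rather than the disclosed implementation), this single-instance flow could be orchestrated roughly as follows; reusing the same box for the capture frame assumes the guide line has led the user to occupy the same region.

```python
def coshoot_single(template_frame, capture_frame, box):
    """Illustrative end-to-end flow mirroring steps 1)-7) for one instance."""
    first_mask = segment_instance(template_frame, box)               # step 2
    background = complete_background(template_frame, first_mask)     # step 3
    guide, _ = contour_guide(first_mask)                             # step 4: shown in UI
    second_mask = segment_instance(capture_frame, box)               # step 5
    recolored = transfer_color(capture_frame, background, second_mask)  # step 6
    return composite(background, recolored, second_mask)             # step 7
```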
In the co-shooting method provided by this embodiment, high-quality compositing of the second instance with the instance background is achieved by removing the first instance, guiding the user with the contour auxiliary line to take position and complete the movements, and applying color transfer and lighting transfer to the second instance. The user shoots the co-shooting material under the guidance of the auxiliary line and can flexibly adjust shooting angle and movements, ensuring high consistency between the co-shooting material and the template material and thereby improving compositing accuracy and efficiency. Completing the background of the vacated region according to the image features of the instance background ensures the visual continuity and consistency of the background and improves compositing quality. Adjusting the color of the second instance according to the image features of the instance background makes the transition between the second instance and the instance background more natural. Relighting the second instance with the spherical harmonic lighting model enhances its realism and three-dimensionality in the co-shooting material. Flexibly displaying the capture frame or the composite frame in the user shooting interface lets the user preview the compositing effect in real time and adjust shooting position and movements flexibly, while also accommodating the performance constraints of the electronic device.
Embodiment 3
FIG. 8 is a flowchart of a co-shooting method according to Embodiment 3 of the present disclosure. On the basis of the embodiments above, Embodiment 3 describes the case of multiple first instances and multiple second instances.
In this embodiment, the number of first instances equals the number of second instances and is at least two. The segmentation result of each first instance can be associated with an instance identifier, and likewise for each second instance; a first instance and a second instance sharing the same identifier are associated with each other. According to the instance identifiers, the relative positions between instances, and/or the contour information of the instances, the first instance associated with each second instance in the template material can be determined, and multi-user co-shooting is achieved on this basis.
As shown in FIG. 8, the co-shooting method provided in Embodiment 3 of the present disclosure includes the following steps.
S310: identify instances in the template material that support co-shooting.
S320: determine at least two first instances from the co-shooting-capable instances according to user selection information.
S330: extract contour information of the first instances in the template material, where the template material includes the first instances and an instance background.
Here the contour information of every first instance in the template material is extracted, and the parts of the template material other than the at least two first instances constitute the instance background.
S340: acquire co-shooting material imported by users based on the contour information, where the co-shooting material includes a second instance corresponding to each first instance.
In an embodiment, after the co-shooting material imported based on the contour information is acquired, at least two second instances can be obtained by performing instance segmentation on the co-shooting material.
The co-shooting material is performed jointly by multiple people; there are at least two second instances, and they correspond one-to-one with the at least two first instances.
S350: determine the association between the at least two first instances and the at least two second instances according to the contour information of each first instance in the template material and the contour information of each second instance in the co-shooting material.
The instance segmentation algorithm yields the contour information of each first instance in the template material as well as the contour information of each second instance in the co-shooting material. The multiple second instances correspond one-to-one with the multiple first instances; by comparing the contour information of each first instance with that of each second instance, it can be determined which first instance each second instance is performing, thereby establishing the association between first and second instances. For example, if the template material features two characters, character A standing and character B sitting on a chair, the association means: the standing user captured in the co-shooting material performs character A, and the seated user performs character B.
In an embodiment, which first instance each second instance performs may also be determined from the positional relationship between the at least two second instances, thereby establishing the association. For example, if the template material features two characters, character A on the left and character B on the right, the association means: the user on the left in the co-shooting material performs character A, and the user on the right performs character B.
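One way to realize the association, sketched here under the assumption that each instance is represented by its OpenCV contour, is to score silhouette similarity with cv2.matchShapes and solve the resulting assignment problem; the positional cue of this paragraph could be folded into the same cost matrix. The function is illustrative, not the disclosed implementation.

```python
import cv2
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(first_contours, second_contours):
    """Match each second instance to the first instance it most resembles."""
    cost = np.zeros((len(second_contours), len(first_contours)))
    for i, sc in enumerate(second_contours):
        for j, fc in enumerate(first_contours):
            # Lower matchShapes value = more similar silhouettes.
            cost[i, j] = cv2.matchShapes(sc, fc, cv2.CONTOURS_MATCH_I1, 0.0)
    rows, cols = linear_sum_assignment(cost)        # optimal one-to-one pairing
    return dict(zip(rows.tolist(), cols.tolist()))  # second idx -> first idx
```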
S360: add each second instance to the region of the template material corresponding to its associated first instance.
In an embodiment, before each second instance is added to the region corresponding to its associated first instance, the method further includes: removing each first instance from the template material; and performing background completion on the vacated regions left by the removal.
In an embodiment, after each second instance is added to the region corresponding to its associated first instance, the method further includes: applying color transfer and/or lighting transfer to each second instance in the co-shooting result.
In an embodiment, the co-shooting material may be shot by multiple people simultaneously, or by one or more users in separate takes. For example, when two people perform the template material together, one as character A and one as character B, a single piece of co-shooting material may be shot and its two second instances added to the template material respectively. Alternatively, the first person performs character A to produce first co-shooting material, and after that shooting is finished the second person performs character B to produce second co-shooting material; in this case each piece of co-shooting material contains one second instance corresponding to one first instance in the template material. With separate takes, a single user can also shoot multiple takes and play multiple roles, enhancing the flexibility and fun of co-shooting.
In the co-shooting method provided by this embodiment, multiple second instances can be added to the template material respectively according to the associations between second and first instances, achieving multi-user co-shooting, improving the flexibility and fun of co-shooting, and meeting diverse co-shooting needs. On this basis, users can experience an authentic movie atmosphere, act alongside other subjects on the same stage, and converse across time and space, increasing the variety and playability of co-shooting applications.
Embodiment 4
FIG. 9 is a schematic structural diagram of a co-shooting apparatus according to Embodiment 4 of the present disclosure. For details not exhaustively described in this embodiment, refer to the embodiments above.
As shown in FIG. 9, the apparatus includes: a contour extraction module 310 configured to extract contour information of a first instance in template material, where the template material includes the first instance and an instance background; a material acquisition module 320 configured to acquire co-shooting material imported by a user based on the contour information, where the co-shooting material includes a second instance corresponding to the first instance; and a co-shooting module 330 configured to add the second instance to a region of the template material corresponding to the first instance to obtain a co-shooting result, where the co-shooting result includes the second instance and the instance background.
The co-shooting apparatus of this embodiment uses the contour information of the first instance to guide the user in importing the co-shooting material, improving the consistency between the instance contours of the co-shooting material and the template material, thereby enabling the second instance to be composited with the instance background of the template material and improving co-shooting accuracy.
On the above basis, the material acquisition module 320 includes:
an auxiliary line generation unit configured to generate a contour auxiliary line of the first instance according to the contour information; and an auxiliary line display unit configured to display the contour auxiliary line in a user shooting interface to guide the user to shoot the co-shooting material according to the contour auxiliary line.
On the above basis, the material acquisition module 320 further includes:
a material determination unit configured to use the captured frame as the co-shooting material if, in the user shooting interface, the error between the contour of the second instance and the contour auxiliary line is within a set range.
On the above basis, the co-shooting module 330 is configured to:
remove the first instance from the template material, and add the second instance to the region of the template material corresponding to the first instance.
On the above basis, the apparatus further includes:
a background completion module configured to perform background completion, according to the image features of the instance background, on the vacated region left by removing the first instance.
On the above basis, the apparatus further includes:
a segmentation module configured to perform, after the co-shooting material imported by the user based on the contour information is acquired, semantic segmentation or instance segmentation on the co-shooting material to obtain the second instance.
On the above basis, the apparatus further includes:
a color adjustment unit configured to adjust the color of the second instance according to the image features of the instance background.
On the above basis, the apparatus further includes a lighting rendering module configured to:
compute spherical harmonic lighting coefficients of the first instance from the template material, and estimate the normal directions corresponding to the first instance; and perform lighting rendering on the second instance according to the spherical harmonic coefficients and the normal directions.
On the above basis, the content displayed in the user shooting interface further includes the second instance and a shooting background; or the content displayed in the user shooting interface further includes the second instance and the instance background.
On the above basis, the apparatus further includes:
an instance identification module configured to identify, before the contour information of the first instance in the template material is extracted, instances in the template material that support co-shooting; and an instance determination module configured to determine at least one first instance from the co-shooting-capable instances according to user selection information.
On the above basis, the number of first instances equals the number of second instances and is at least two; the co-shooting module 330 includes: a relation determination unit configured to determine the association between at least two first instances and at least two second instances according to the contour information of each first instance in the template material and the contour information of each second instance in the co-shooting material; and an instance addition unit configured to add each second instance to the region of the template material corresponding to its associated first instance.
The co-shooting apparatus above can perform the co-shooting method provided by any embodiment of the present disclosure and has the functional modules and effects corresponding to the method.
Embodiment 5
FIG. 10 is a schematic diagram of the hardware structure of an electronic device according to Embodiment 5 of the present disclosure, showing a structure suitable for implementing an electronic device 500 of the embodiments of the present disclosure. The electronic device 500 in the embodiments of the present disclosure includes, but is not limited to, devices with image-processing capabilities such as computers, laptops, servers, tablets, and smartphones. The electronic device 500 shown in FIG. 10 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 10, the electronic device 500 may include one or more processing devices (such as a central processing unit or a graphics processing unit) 501, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503. The one or more processing devices 501 implement the co-shooting method provided by the present disclosure. The RAM 503 also stores various programs and data required for the operation of the electronic device 500. The processing device 501, the ROM 502, and the RAM 503 are connected to one another through a bus 505. An input/output (I/O) interface 504 is also connected to the bus 505.
Generally, the following devices may be connected to the I/O interface 504: input devices 506 including, for example, a touchscreen, touchpad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 507 including, for example, a liquid crystal display (LCD), speaker, and vibrator; storage devices 508 including, for example, magnetic tape and hard disk, the storage devices 508 being configured to store one or more programs; and a communication device 509. The communication device 509 may allow the electronic device 500 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 10 shows an electronic device 500 with a variety of devices, it is not required to implement or provide all the devices shown; more or fewer devices may alternatively be implemented or provided.
According to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the methods shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network through the communication device 509, installed from the storage device 508, or installed from the ROM 502. When the computer program is executed by the processing device 501, the functions defined in the methods of the embodiments of the present disclosure are performed.
The computer-readable medium of the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. Examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; it can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on a computer-readable medium may be transmitted by any appropriate medium, including but not limited to: a wire, an optical cable, radio frequency (RF), or any suitable combination of the above.
In some implementations, clients and servers may communicate using any currently known or future-developed network protocol such as the HyperText Transfer Protocol (HTTP), and may be interconnected with digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internetwork (for example, the Internet), and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The computer-readable medium above may be included in the electronic device described above, or may exist separately without being assembled into the electronic device.
The computer-readable medium above carries one or more programs which, when executed by the electronic device, cause the electronic device to: extract contour information of a first instance in template material, where the template material includes the first instance and an instance background; acquire co-shooting material imported by a user based on the contour information, where the co-shooting material includes a second instance corresponding to the first instance; and add the second instance to a region of the template material corresponding to the first instance to obtain a co-shooting result, where the co-shooting result includes the second instance and the instance background.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, it may be connected to the user's computer through any kind of network, including a LAN or WAN, or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code containing one or more executable instructions for implementing the specified logical functions. It should also be noted that in some alternative implementations the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented in software or hardware. The name of a unit does not in some cases limit the unit itself; for example, a first acquisition unit may also be described as "a unit for acquiring at least two Internet Protocol addresses".
The functions described herein above may be performed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard parts (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium, and may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the above. Examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a RAM, a ROM, an EPROM or flash memory, an optical fiber, a CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, Example 1 provides a co-shooting method, including:
extracting contour information of a first instance in template material, where the template material includes the first instance and an instance background;
acquiring co-shooting material imported by a user based on the contour information, where the co-shooting material includes a second instance corresponding to the first instance; and
adding the second instance to a region of the template material corresponding to the first instance to obtain a co-shooting result, where the co-shooting result includes the second instance and the instance background.
Example 2. According to the method of Example 1, the acquiring co-shooting material imported by a user based on the contour information includes:
generating a contour auxiliary line of the first instance according to the contour information; and
displaying the contour auxiliary line in a user shooting interface to guide the user to shoot the co-shooting material according to the contour auxiliary line.
Example 3. According to the method of Example 2, the guiding the user to shoot the co-shooting material according to the contour auxiliary line includes:
if, in the user shooting interface, the error between the contour of the second instance and the contour auxiliary line is within a set range, using the captured frame as the co-shooting material.
Example 4. According to the method of Example 1, the adding the second instance to the region of the template material corresponding to the first instance includes:
removing the first instance from the template material, and adding the second instance to the region of the template material corresponding to the first instance.
Example 5. According to the method of Example 4, the method further includes:
performing background completion, according to the image features of the instance background, on the vacated region left by removing the first instance.
Example 6. According to the method of Example 1, after the co-shooting material imported by the user based on the contour information is acquired, the method further includes:
performing semantic segmentation or instance segmentation on the co-shooting material to obtain the second instance.
Example 7. According to the method of Example 1, the method further includes:
adjusting the color of the second instance according to the image features of the instance background.
Example 8. According to the method of Example 1, the method further includes:
computing spherical harmonic lighting coefficients of the first instance from the template material, and estimating the normal directions corresponding to the first instance; and
performing lighting rendering on the second instance according to the spherical harmonic lighting coefficients and the normal directions.
Example 9. According to the method of Example 2, the content displayed in the user shooting interface further includes the second instance and a shooting background; or
the content displayed in the user shooting interface further includes the second instance and the instance background.
Example 10. According to the method of Example 1, before the extracting contour information of the first instance in the template material, the method further includes:
identifying instances in the template material that support co-shooting; and
determining at least one first instance from the identified co-shooting-capable instances according to user selection information.
Example 11. According to the method of Example 1, the number of first instances equals the number of second instances and is at least two;
the adding the second instance to the region of the template material corresponding to the first instance includes:
determining the association between at least two first instances and at least two second instances according to the contour information of each first instance in the template material and the contour information of each second instance in the co-shooting material; and
adding each second instance to the region of the template material corresponding to its associated first instance.
According to one or more embodiments of the present disclosure, Example 12 provides a co-shooting apparatus, including:
a contour extraction module configured to extract contour information of a first instance in template material, where the template material includes the first instance and an instance background;
a material acquisition module configured to acquire co-shooting material imported by a user based on the contour information, where the co-shooting material includes a second instance corresponding to the first instance; and
a co-shooting module configured to add the second instance to a region of the template material corresponding to the first instance to obtain a co-shooting result, where the co-shooting result includes the second instance and the instance background.
According to one or more embodiments of the present disclosure, Example 13 provides an electronic device, including:
one or more processors; and
a storage device configured to store one or more programs,
where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the co-shooting method of any one of Examples 1-11.
According to one or more embodiments of the present disclosure, Example 14 provides a computer-readable medium storing a computer program which, when executed by a processor, implements the co-shooting method of any one of Examples 1-11.
In addition, although operations are depicted in a particular order, this should not be understood as requiring that the operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although the above discussion contains several implementation details, these should not be construed as limitations on the scope of the present disclosure. Some features described in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, features described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination.

Claims (14)

  1. A co-shooting method, comprising:
    extracting contour information of a first instance in template material, wherein the template material comprises the first instance and an instance background;
    acquiring co-shooting material imported by a user based on the contour information, wherein the co-shooting material comprises a second instance corresponding to the first instance; and
    adding the second instance to a region of the template material corresponding to the first instance to obtain a co-shooting result, wherein the co-shooting result comprises the second instance and the instance background.
  2. The method according to claim 1, wherein the acquiring co-shooting material imported by a user based on the contour information comprises:
    generating a contour auxiliary line of the first instance according to the contour information; and
    displaying the contour auxiliary line in a user shooting interface to guide the user to shoot the co-shooting material according to the contour auxiliary line.
  3. The method according to claim 2, wherein the acquiring co-shooting material imported by a user based on the contour information comprises:
    in the user shooting interface, in a case where an error between a contour of the second instance and the contour auxiliary line is within a set range, using the captured frame as the co-shooting material.
  4. The method according to claim 1, wherein the adding the second instance to the region of the template material corresponding to the first instance comprises:
    removing the first instance from the template material, and adding the second instance to the region of the template material corresponding to the first instance.
  5. The method according to claim 4, further comprising:
    performing background completion, according to image features of the instance background, on the vacated region left by removing the first instance.
  6. The method according to claim 1, wherein after the acquiring co-shooting material imported by the user based on the contour information, the method further comprises:
    performing semantic segmentation or instance segmentation on the co-shooting material to obtain the second instance.
  7. The method according to claim 1, further comprising:
    adjusting the color of the second instance according to image features of the instance background.
  8. The method according to claim 1, further comprising:
    computing spherical harmonic lighting coefficients of the first instance from the template material, and estimating normal directions corresponding to the first instance; and
    performing lighting rendering on the second instance according to the spherical harmonic lighting coefficients and the normal directions.
  9. The method according to claim 2, wherein the content displayed in the user shooting interface further comprises the second instance and a shooting background; or
    the content displayed in the user shooting interface further comprises the second instance and the instance background.
  10. The method according to claim 1, wherein before the extracting contour information of a first instance in template material, the method further comprises:
    identifying instances in the template material that support co-shooting; and
    determining at least one first instance from the identified co-shooting-capable instances according to user selection information.
  11. The method according to claim 1, wherein a number of first instances is equal to a number of second instances and is at least two; and
    the adding the second instance to the region of the template material corresponding to the first instance comprises:
    determining an association between at least two first instances and at least two second instances according to the contour information of each first instance in the template material and the contour information of each second instance in the co-shooting material; and
    adding each second instance to a region of the template material corresponding to the first instance associated with that second instance.
  12. A co-shooting apparatus, comprising:
    a contour extraction module configured to extract contour information of a first instance in template material, wherein the template material comprises the first instance and an instance background;
    a material acquisition module configured to acquire co-shooting material imported by a user based on the contour information, wherein the co-shooting material comprises a second instance corresponding to the first instance; and
    a co-shooting module configured to add the second instance to a region of the template material corresponding to the first instance to obtain a co-shooting result, wherein the co-shooting result comprises the second instance and the instance background.
  13. An electronic device, comprising:
    at least one processor; and
    a storage device configured to store at least one program,
    wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the co-shooting method according to any one of claims 1-11.
  14. A computer-readable storage medium storing a computer program, wherein the program, when executed by a processor, implements the co-shooting method according to any one of claims 1-11.
PCT/CN2022/114379 2021-09-02 2022-08-24 Co-shooting method and apparatus, electronic device, and readable medium WO2023030107A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111027906.9A CN115766972A (zh) 2021-09-02 2021-09-02 合拍方法、装置、电子设备及可读介质
CN202111027906.9 2021-09-02

Publications (1)

Publication Number Publication Date
WO2023030107A1 true WO2023030107A1 (zh) 2023-03-09

Family

ID=85332242

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/114379 WO2023030107A1 (zh) 2021-09-02 2022-08-24 合拍方法、装置、电子设备及可读介质

Country Status (2)

Country Link
CN (1) CN115766972A (zh)
WO (1) WO2023030107A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101945223A (zh) * 2010-09-06 2011-01-12 Zhejiang University Video consistency fusion processing method
CN105516575A (zh) * 2014-09-23 2016-04-20 ZTE Corporation Method and apparatus for taking photos according to a custom template
CN105635553A (zh) * 2014-10-30 2016-06-01 Tencent Technology (Shenzhen) Co., Ltd. Image shooting method and apparatus
CN105872381A (zh) * 2016-04-29 2016-08-17 Pan Chengjun Fun image shooting method
EP3065389A1 (de) * 2015-03-06 2016-09-07 Florian Potucek Method for producing video recordings
CN109040643A (zh) * 2018-07-18 2018-12-18 Qiku Internet Network Scientific (Shenzhen) Co., Ltd. Mobile terminal and method and apparatus for remote group photography
CN110602396A (zh) * 2019-09-11 2019-12-20 Tencent Technology (Shenzhen) Co., Ltd. Intelligent group photo method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN115766972A (zh) 2023-03-07


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22863246

Country of ref document: EP

Kind code of ref document: A1