WO2023151510A1 - Photographing method, apparatus and electronic device (拍摄方法、装置和电子设备) - Google Patents

Photographing method, apparatus and electronic device

Info

Publication number
WO2023151510A1
WO2023151510A1 · PCT/CN2023/074318 · CN2023074318W
Authority
WO
WIPO (PCT)
Prior art keywords
model
image
sample
images
background image
Prior art date
Application number
PCT/CN2023/074318
Other languages
English (en)
French (fr)
Inventor
陈洁茹
Original Assignee
维沃移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信有限公司
Publication of WO2023151510A1 publication Critical patent/WO2023151510A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80: Camera processing pipelines; Components thereof
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/2621: Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/272: Means for inserting a foreground image in a background image, i.e. inlay, outlay

Definitions

  • the present application belongs to the technical field of image processing, and specifically relates to a photographing method, device and electronic equipment.
  • the camera function is one of the most commonly used functions of electronic devices, and people can use electronic devices to capture images in daily life. However, a captured image is usually a still image, and the subject in the image is static, which cannot display the subject vividly.
  • the captured still images can be processed by special image processing software, so as to make the captured photos into videos.
  • this method requires special image processing software to post-process the image to obtain the video, which is cumbersome and difficult to operate.
  • the purpose of the embodiments of the present application is to provide a photographing method, device and electronic device, which can solve the problem that the subject in the image captured by the electronic device is in a static state and cannot be displayed vividly.
  • the embodiment of the present application provides a shooting method, the method includes:
  • the target file includes at least one of the following:
  • an embodiment of the present application provides a photographing device, the device comprising:
  • a first acquiring module configured to acquire a first image, the first image including a first object
  • a fusion module configured to fuse the first model corresponding to the first object with the background image to obtain at least two second images, wherein the display positions of the first model in the at least two second images are different,
  • the background image is the first image or the third image;
  • a first output module configured to output a target file, the target file is synthesized from the at least two second images
  • the target file includes at least one of the following:
  • the embodiment of the present application provides an electronic device, the electronic device includes a processor and a memory, the memory stores programs or instructions that can run on the processor, and when the programs or instructions are executed by the processor, the steps of the method described in the first aspect are implemented.
  • an embodiment of the present application provides a readable storage medium, on which a program or an instruction is stored, and when the program or instruction is executed by a processor, the steps of the method described in the first aspect are implemented.
  • the embodiment of the present application provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is used to run programs or instructions, so as to implement the method described in the first aspect.
  • an embodiment of the present application provides a computer program product, the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the method described in the first aspect.
  • the first image including the first object is acquired, and the first model corresponding to the first object is fused with the background image to obtain at least two second images, wherein the display positions of the first model in the at least two second images are different; a video or dynamic image is synthesized from the at least two second images and output.
  • the first object can be replaced with the first model, which makes the first object more vivid and can improve the fun of shooting.
  • the user only needs to take one image to obtain the video or dynamic image of the first object, without using special video production software, and the operation is simple.
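The core idea summarized above can be sketched in a few lines. This is an illustrative toy, not the application's implementation: names like `paste` and `make_frames` are invented here, and nested lists of pixels stand in for real images. Each fused frame is a "second image" with the model at a different display position.

```python
def paste(background, sprite, top_left):
    """Return a copy of `background` with `sprite` pasted at `top_left`."""
    frame = [row[:] for row in background]   # copy so the background is untouched
    r0, c0 = top_left
    for dr, sprite_row in enumerate(sprite):
        for dc, px in enumerate(sprite_row):
            frame[r0 + dr][c0 + dc] = px
    return frame

def make_frames(background, sprite, positions):
    """One fused 'second image' per display position of the model."""
    return [paste(background, sprite, pos) for pos in positions]

background = [[0] * 8 for _ in range(4)]   # stand-in for the background image
sprite = [[1]]                             # stand-in for the first model
frames = make_frames(background, sprite, [(1, 1), (1, 3), (1, 5)])
```

Playing the resulting frames in order is what turns a single still capture into a video or dynamic image of the subject.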
  • FIG. 1 is a schematic flow chart of a photographing method provided in an embodiment of the present application
  • FIG. 2 is a schematic diagram of a shooting preview interface provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of obtaining a first trajectory provided by an embodiment of the present application.
  • Fig. 4 is one of the schematic diagrams of the model library adding interface provided by the embodiment of the present application.
  • Fig. 5 is the second schematic diagram of the model library adding interface provided by the embodiment of the present application.
  • Fig. 6 is the third schematic diagram of the model library adding interface provided by the embodiment of the present application.
  • Fig. 7 is a schematic diagram of the model library deletion interface provided by the embodiment of the present application.
  • Fig. 8 is a schematic diagram of obtaining the target filter effect provided by the embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a photographing device provided by an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
  • FIG. 1 is a flow chart of a shooting method provided by an embodiment of the present application.
  • the method can be applied to an electronic device, and the electronic device can be a mobile phone, a tablet computer, a notebook computer, and the like.
  • the method may include steps 101 to 103, which will be described in detail below.
  • Step 101 acquire a first image, where the first image includes a first object.
  • the first image may be an image collected by a camera of the electronic device and containing the first object.
  • the first image may also be an image selected from an album of the electronic device and containing the first object.
  • the first object may be a subject to be processed in the first image.
  • the first object may be an animal, a plant, an item, or the like.
  • the item may be, for example, a cartoon character, a mascot, an exhibit, and the like.
  • acquiring the first image may further include: receiving a third input from a user, and acquiring the first image in response to the third input.
  • the third input may be used to capture the first image.
  • the third input may be a user's click input on a target control, or a specific gesture input by the user, which may be determined according to actual usage requirements and is not limited in this embodiment of the present application.
  • the click input in the embodiment of the present application may be a single click input, a double click input, or any number of click inputs, etc., and may also be a long press input or a short press input.
  • the specific gesture in this embodiment of the present application may be any one of a click gesture, a slide gesture, and a drag gesture.
  • the method may further include: receiving a fourth input from the user, and starting the first shooting mode in response to the fourth input.
  • the first shooting mode may be a shooting mode for outputting the target file based on the first captured image.
  • the fourth input can be used to start the first shooting mode of the camera application.
  • the fourth input may be a click input by the user on a target control, or a specific gesture input by the user, which may be determined according to actual usage requirements and is not limited in this embodiment of the present application.
  • the specific gesture in this embodiment of the application may be any one of a click gesture, a slide gesture, and a drag gesture.
  • the click input in the embodiment of the present application may be a single click input, a double click input, or any number of click inputs, etc., and may also be a long press input or a short press input.
  • an electronic device with a shooting function provides users with multiple shooting modes, for example, a panorama mode, a beauty mode, a video recording mode, and the like.
  • the camera application program of the electronic device includes a first shooting mode, and the first shooting mode specifically refers to a shooting mode for outputting a target file based on a first captured image.
  • FIG. 2 is a schematic diagram of a shooting preview interface according to an embodiment of the present application.
  • the electronic device displays a shooting preview interface
  • the shooting preview interface includes the option 201 of the first shooting mode
  • a click input by the user on the option 201 of the first shooting mode is received to enter the first shooting mode.
  • the user may choose whether to enable the first shooting mode according to actual usage requirements, and output the target file based on the acquired first image when the user activates the first shooting mode.
  • step 102 is performed to fuse the first model corresponding to the first object with the background image to obtain at least two second images, wherein the display positions of the first model in the at least two second images are different, and the background image is the first image or the third image.
  • the first model may be a two-dimensional model or a three-dimensional model.
  • for example, if the first object is a cartoon character, the first model may be a three-dimensional model of the cartoon character.
  • the first model may be a model selected from a model library and corresponding to the first object.
  • the model library may be pre-established for storing sample objects and sample models corresponding to the sample objects.
  • the background image may be the first image or the third image.
  • the third image may include only the background picture, which is the same as the background picture of the first image.
  • a third image may be acquired in response to user input, and then the first image is acquired.
  • the background of the first image is the same as that of the third image, and the first image also includes the first object.
  • the second image can be used to generate the target file, for example, a video, or a dynamic image.
  • fusing the first model corresponding to the first object with the background image to obtain at least two second images may include: obtaining at least two background images, and fusing the first model corresponding to the first object with each background image to obtain the at least two second images, wherein the display positions of the first model in the background image are different among the at least two second images.
  • optionally, the at least two second images may be obtained by fusing the first model corresponding to the first object with the first image. That is to say, the captured first image containing the first object is used as the background image, and each of at least two first images is fused with the first model corresponding to the first object to obtain at least two second images, wherein the display positions of the first model in the at least two second images are different, or the background content in the at least two second images is different.
  • the video or dynamic image synthesized from at least two second images includes the first object and the first model corresponding to the first object and having a dynamic effect.
  • optionally, the at least two second images may be obtained by fusing the first model corresponding to the first object with the third image. That is to say, the third image and the first image are acquired, wherein the third image has the same background picture as the first image and the first image also includes the first object; after that, the third image is used as the background image, and each of at least two third images is fused with the first model corresponding to the first object to obtain at least two second images, wherein the display positions of the first model in the at least two second images are different, or the background content of the at least two second images is different.
  • a video or dynamic image with dynamic effects can be synthesized.
  • the display content of the background images may also be different.
  • the fusing of the first model corresponding to the first object with the background image includes: fusing the first model corresponding to the first object with a target background image, where the target background image is at least a partial image of the background image.
  • the target background image may be at least a part of the background image.
  • the display content of the target background image may change following the display position of the first model. For example, when the display position of the first model corresponding to the first object moves from far to near, the target background image can gradually change from a distant view to a close view following the display position of the first model.
  • when the first model corresponding to the first object is fused with the background image, the first model corresponding to the first object can be fused with the target background image to obtain at least two second images.
  • the display position of the first model in the background image is different among the at least two second images, and the display content of the background image can change with the display position of the first model, so that a video or dynamic image of the first object can be obtained from the captured images.
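One plausible way to realize a target background image that follows the model is a crop window centered near the model's display position and clamped to stay inside the full background. The name `target_background` and the list-of-lists image stand-in are assumptions for illustration:

```python
def target_background(background, center, window=3):
    """Crop of the background image that follows the model's display position."""
    rows, cols = len(background), len(background[0])
    r, c = center
    half = window // 2
    r0 = min(max(r - half, 0), rows - window)   # clamp crop inside the image
    c0 = min(max(c - half, 0), cols - window)
    return [row[c0:c0 + window] for row in background[r0:r0 + window]]

# 6x6 background with distinct pixel values so crops are easy to inspect
background = [[r * 6 + c for c in range(6)] for r in range(6)]
far = target_background(background, (0, 0))   # model in the distance
near = target_background(background, (5, 5))  # model close to the camera
```

As the model's position sweeps across the background, successive crops produce the "distant view to close view" effect described above.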
  • the display positions of the first model in the background image may be different.
  • before the first model corresponding to the first object is fused with the background image, the method further includes: adjusting the display position, in the background image, of the first model corresponding to the first object.
  • the display position of the first model corresponding to the first object in the background image may be adjusted to obtain at least two second images. In this way, based on the at least two second images, a video or dynamic image of the first object can be obtained.
  • the adjusting the display position of the first model corresponding to the first object in the background image includes: adjusting, according to the first track, the display position of the first model corresponding to the first object within the background image.
  • the first track may be a moving track of the first model corresponding to the first object.
  • the first trajectory may be preset, for example, a straight line or a curve.
  • the first trajectory may also be user-input.
  • the first trajectory may also be a trajectory obtained by analyzing the background image. For example, the first trajectory is determined according to the depth value of the background image.
  • the display position of the first model corresponding to the first object in the background image is adjusted according to the first trajectory, and the first model and the background image are fused to obtain at least two second images, so as to synthesize a video of the first object based on the at least two second images.
  • the first object in the first image can be replaced by the first model, and the first model can move along the first trajectory, so that the first object in the first image is displayed more vividly.
  • the adjusting the display position of the first model corresponding to the first object in the background image according to the first track includes: receiving a first input from a user, where the first input is used to determine the first trajectory; and in response to the first input, adjusting the display position of the first model corresponding to the first object in the background image according to the first trajectory.
  • the first input may be an input for acquiring a first track.
  • the first input may be a sliding gesture input by the user.
  • the first trajectory includes a start point and an end point.
  • the starting point of the first trajectory may be the starting position of the sliding gesture input by the user.
  • the end point of the first trajectory may be the end position of the sliding gesture input by the user.
  • adjusting the display position of the first model corresponding to the first object in the background image according to the first trajectory may be moving the display position of the first model from the starting point of the first trajectory to its end point.
  • the first model may move at preset distance intervals. That is to say, according to the preset distance interval and the first trajectory, the number of required background images and the display position of the first model in each background image can be determined; on this basis, the display position of the first model corresponding to the first object in the background image is adjusted according to the first trajectory, and the first model is fused with the background image to obtain at least two second images.
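The preset-distance-interval idea can be sketched as arc-length resampling of the first trajectory: represent the user-drawn track as a polyline and emit one display position per fixed distance travelled. The function name `resample` and the polyline representation are assumptions, not the application's implementation:

```python
import math

def resample(trajectory, step):
    """Display positions along a polyline trajectory, one per preset distance interval."""
    points = [trajectory[0]]
    dist_since_last = 0.0                      # distance covered since last emitted point
    for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)     # length of this polyline segment
        pos = 0.0                              # distance already consumed on this segment
        while dist_since_last + (seg - pos) >= step:
            pos += step - dist_since_last
            dist_since_last = 0.0
            t = pos / seg
            points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
        dist_since_last += seg - pos
    return points

positions = resample([(0, 0), (10, 0)], step=2)  # straight track, 2 px apart
```

The number of emitted positions directly gives the number of second images to fuse, so a shorter step interval yields a smoother output video.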
  • FIG. 3 is a schematic diagram of acquiring a first track according to an embodiment of the present application.
  • the shooting preview interface includes a shooting option 301.
  • the user clicks on the shooting option 301, and the trajectory setting option 302 is displayed on the shooting preview interface.
  • the user can draw the first track.
  • the user draws an S-shaped first trajectory 303; then the user clicks the camera control 304 to capture a first image, and at least two second images are obtained based on the first trajectory drawn by the user, so as to synthesize the target file from the at least two second images.
  • the user clicks on the camera option 301, and the default setting option 305 is also displayed on the shooting preview interface.
  • the display position of the first model corresponding to the first object in the background image is adjusted to obtain at least two second images, and the target file is synthesized based on the at least two second images.
  • the user can draw the first trajectory according to actual needs, so that the first model corresponding to the first object moves according to the first trajectory, and a specific display effect can be obtained; the user can have interesting interactions with the first object, improving the fun of shooting.
  • the operation is simple, and the target file can be generated quickly.
  • the method further includes: acquiring the first model from a model library according to the first object, where the model library includes sample objects and sample models corresponding to the sample objects.
  • the model library may include sample objects and sample models corresponding to the sample objects.
  • Sample objects can be animals, plants, articles, etc.
  • the item may be, for example, a cartoon character, a mascot, an exhibit, and the like.
  • Sample objects can be displayed in the form of images.
  • the sample model can be a 2D model or a 3D model.
  • the model library may include one sample model corresponding to the sample object, or may include multiple sample models corresponding to the sample object, wherein the multiple sample models corresponding to one sample object are different.
  • the multiple sample models may be sample models of different forms or different display effects.
  • the first object in the first image is compared with the sample objects in the model library; that is, the first image including the first object is compared with the image of each sample object in the model library, and if the first object successfully matches a sample object, the sample model corresponding to that sample object is used as the first model corresponding to the first object.
  • the target sample model is used as the first model corresponding to the first object.
  • the fifth input may be an input for obtaining the first model.
  • the fifth input may be a user's click input on the target control.
  • the click input in the embodiment of the present application may be a single click input, a double click input, or any number of click inputs, etc., and may also be a long press input or a short press input.
  • the first model corresponding to the first object can be obtained from the model library, the first model is fused with the background image to obtain at least two second images, and the target file is output according to the at least two second images.
  • the first model can be quickly obtained, improving the response speed of the electronic device.
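A minimal sketch of the library lookup described above, assuming the library is a list of (sample object, sample model) pairs and that some `matches` predicate compares image features; plain strings stand in for both here, and all names are illustrative:

```python
model_library = [
    # (sample object descriptor, sample model) pairs; strings stand in for
    # image features and model data
    ("mascot", "mascot_3d_model"),
    ("exhibit", "exhibit_3d_model"),
]

def find_first_model(first_object, library, matches=lambda a, b: a == b):
    """Return the sample model of the first sample object that matches."""
    for sample_object, sample_model in library:
        if matches(first_object, sample_object):
            return sample_model
    return None  # no match: the model must be obtained another way
```

In a real implementation `matches` would be an image-similarity comparison rather than string equality.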
  • the model library may be pre-established, as described below with specific embodiments.
  • the method may further include: receiving a sixth input from the user, and in response to the sixth input, storing a fourth model corresponding to the sample object in the model library.
  • storing the fourth model corresponding to the sample object in the model library may include: acquiring the fourth model corresponding to the sample object, using the fourth model as the sample model corresponding to the sample object, and storing the sample object in association with its sample model in the model library. Based on this, when obtaining the first model corresponding to the first object, the first object can be compared with the sample objects, and if the first object successfully matches a sample object, the sample model corresponding to that sample object is used as the first model.
  • the sixth input may be an input for importing a sample model corresponding to the sample object into the model library.
  • the sixth input may be a user's click input on the target control.
  • the click input in the embodiment of the present application may be a single click input, a double click input, or any number of click inputs, etc., and may also be a long press input or a short press input.
  • FIG. 4 is a schematic diagram of an interface for adding a model library according to an embodiment of the present application.
  • the shooting preview interface includes the option 401 of the model library, and the user clicks the option 401 of the model library to enter the model library.
  • the display interface of the model library includes a display area 402 for sample objects and a display area 403 for sample models.
  • the sample object display area 402 is used to display the stored sample objects 404 in the form of images, and the sample objects 404 may be displayed in the form of thumbnails.
  • the sample model display area 403 is used to display the sample model 405 corresponding to a stored sample object.
  • the sample model 405 can also be displayed in the form of thumbnails.
  • the display interface of the model library also includes an add option 406.
  • a first add control 407 is displayed in the display area 402 of the sample object, and a click input by the user on the first add control 407 is received to import a sample object into the model library.
  • the second adding control 408 is displayed in the display area 403 of the sample model, and the user's click input on the second adding control 408 is received to obtain the fourth model, and store the fourth model in the model library as a sample model corresponding to the sample object. It should be noted that there may be one or more sample models corresponding to the sample object.
  • the user can import the sample object and the sample model corresponding to the sample object into the model library in advance, so as to facilitate subsequent acquisition of the first model corresponding to the first object based on the sample object, and fuse the first model with the background image to obtain at least Two second images, so as to generate a video or dynamic image of the first object according to at least two second images.
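The add flow above amounts to associating a sample object with one or more sample models. A minimal sketch, with a plain dict standing in for the model library and all names invented for illustration:

```python
model_library = {}

def add_sample(library, sample_object, sample_model):
    """Store a sample model in association with its sample object."""
    library.setdefault(sample_object, []).append(sample_model)

add_sample(model_library, "mascot", "mascot_model_v1")
add_sample(model_library, "mascot", "mascot_model_v2")  # several models per object
```

Keeping a list per object mirrors the note that one sample object may have one or more sample models.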
  • the method may further include: acquiring a second model corresponding to the sample object according to preset information, and storing the second model in the model library, where the preset information includes at least one of the following: link information; an information code.
  • the link information may be URL information used to acquire the second model corresponding to the sample object.
  • the information code may be, for example, two-dimensional code information storing the second model corresponding to the sample object. For example, when visiting a museum and needing to take pictures of an exhibit, the model of the exhibit can be obtained by scanning the corresponding QR code.
  • FIG. 5 is a schematic diagram of another interface for adding a model library according to an embodiment of the present application.
  • the display interface of the model library includes a quick-add option 501 , and in response to the user's click input on the quick-add option 501 , multiple ways of adding sample models are displayed.
  • a link information adding area 502 and a QR code scanning entry 503 are displayed.
  • the user can input the corresponding link information in the link information adding area 502; based on the link information input by the user, the image of the sample object and the second model corresponding to the sample object can be obtained, the second model corresponding to the sample object is used as the sample model corresponding to the sample object, and the image of the sample object is stored in association with the sample model in the model library.
  • the user can also click the two-dimensional code scanning entry 503 to obtain, by scanning a two-dimensional code, the image of the sample object and the second model corresponding to the sample object; the second model corresponding to the sample object is used as the sample model corresponding to the sample object, and the image of the sample object is stored in association with the sample model in the model library.
  • the first image including the first object can be compared with the image of the sample object, and if they match successfully, the first model is obtained from the corresponding sample model. It should be noted here that there may be one or more second models corresponding to the sample object.
  • the method may further include: receiving the second model corresponding to the sample object sent by the communication object.
  • the second model corresponding to the sample object is stored in the model library.
  • the electronic device implementing the shooting method can establish a communication connection with other electronic devices to receive the sample object and the second model corresponding to the sample object sent by a communication object, use the second model corresponding to the sample object as the sample model corresponding to the sample object, and store the sample object and the sample model in the model library. It can be understood that the user can also send a sample object in the model library and the second model corresponding to the sample object to the communication object through the electronic device.
  • FIG. 5 and FIG. 6 are schematic diagrams of another interface for adding a model library according to an embodiment of the present application.
  • the display interface of the model library includes a quick add option, and in response to the user's click input on the quick add option, a "friend transfer" control 504 is displayed on the quick add interface; the user clicks the "friend transfer" control 504 to enter the interface for mutual transfer between friends, which includes a control 601 marked "outgoing" and a control 602 marked "incoming and receiving".
  • the model selection interface is entered, and the user can select the sample object to be transferred and the sample model of the sample object.
  • the sample object to be transmitted and the sample model corresponding to the sample object are selected from the model library.
  • click the "Confirm” option to transfer the sample object selected by the user and the sample model corresponding to the sample object.
  • the model is sent to the communication object.
  • the wireless communication module of the electronic device used by the user is turned on, for example the Wi-Fi or Bluetooth of the electronic device, so that the electronic devices of other users can establish a communication connection with the user's electronic device to receive the image of the sample object and the second model corresponding to the sample object sent by other users; the second model corresponding to the sample object is used as the sample model corresponding to the sample object, and the image of the sample object is stored in association with the sample model in the model library.
  • the first image including the first object can be compared with the image of the sample object, and if they match successfully, the first model is obtained from the corresponding sample model. It should be noted here that there may be one or more second models corresponding to the sample object.
  • the second model corresponding to the sample object sent by the communication object can be received, and the second model of the sample object can also be sent to the communication object.
  • the user can share video production materials with the communication object, which makes it convenient for the user to obtain all required models, and users can interact based on the camera application program of the electronic device, which enriches the functions of the camera application program of the electronic device.
  • before acquiring the first model from the model library according to the first object, the method further includes: receiving a second input from the user; in response to the second input, acquiring at least two fourth images, the image content of the sample object in each fourth image being different; outputting a third model of the sample object according to the at least two fourth images; and storing the third model in the model library.
  • the second input may be an input of capturing a fourth image.
  • the second input may be the user's click input on the target control.
  • the click input in the embodiment of the present application may be a single-click input, a double-click input, or any number of click inputs, and may also be a long-press input or a short-press input.
  • the image content of the sample object included in each of the at least two fourth images is different; for example, the shooting angles of the fourth images may be different.
  • the third model of the sample object can be generated, that is, the sample model corresponding to the sample object can be obtained, and the image of the sample object and the sample model can be associated and stored in the model library.
  • the first image including the first object may be compared with the image of the sample object, and if the first object and the sample object are successfully matched, the first model is obtained from the sample model corresponding to the sample object.
  • when the first object is photographed, at least two fourth images can be obtained, and the third model of the first object can be generated according to the at least two fourth images. In this way, even if no model of the first object is stored in the model library, a third model can be obtained, and the third model can be fused with the background image to obtain at least two second images, so that a dynamic image or video of the first object can be synthesized based on the at least two second images.
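As a toy illustration of the flow just described, the sketch below builds a placeholder "third model" from at least two fourth images and stores it in a model library. The dictionary layout, function name, and the idea of recording the source views are assumptions for illustration only, since the patent does not specify a model-generation algorithm.

```python
# Hypothetical sketch only: the patent does not specify how a model is
# generated from multiple views, so a placeholder record stands in for
# real multi-view reconstruction.

def output_third_model(sample_object, fourth_images, model_library):
    # At least two fourth images with different content are required.
    assert len(fourth_images) >= 2
    third_model = {"object": sample_object, "views": list(fourth_images)}
    # Associate the sample object with its model in the model library.
    model_library.setdefault(sample_object, []).append(third_model)
    return third_model

library = {}
model = output_third_model("mascot", ["view_0.jpg", "view_90.jpg"], library)
```

A later lookup by the same sample-object key would then return this model as the first model when the first object matches the sample object.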
  • the method further includes: receiving a seventh input from the user on the sample object or the sample model of the sample object; in response to the seventh input, deleting the sample object and the sample model corresponding to the sample object in the model library .
  • the seventh input may be an input for selecting a sample object to be deleted and a sample model corresponding to the sample object.
  • the seventh input may be the user's click input on the target control.
  • the click input in the embodiment of the present application may be a single-click input, a double-click input, or any number of click inputs, and may also be a long-press input or a short-press input.
  • the seventh input may be a click input on an image of a sample object.
  • the seventh input may be a click input on the sample model corresponding to the sample object.
  • FIG. 7 is a schematic diagram of an interface for deleting a model library according to an embodiment of the present application.
  • the display interface of the model library includes a display area 701 for sample objects and a display area 702 for sample models.
  • the sample object display area 701 is used to display the stored images of the sample objects, and the sample objects may be displayed in the form of thumbnails.
  • the sample model display area 702 is used to display the sample model corresponding to the stored sample object.
  • the display interface of the model library also includes an edit option 703.
  • a deletion mark 704 is displayed on the image of the sample object and/or on the sample model corresponding to the sample object. By clicking the deletion mark 704 on the image of the sample object or the deletion mark 704 on the sample model corresponding to the sample object, the sample object and the sample model corresponding to the sample object are deleted from the model library.
  • the user can edit the sample models in the model library and regularly delete sample models that are no longer used, which saves storage space on the electronic device.
  • step 103 is executed to output a target file, the target file is synthesized from the at least two second images, wherein the target file includes at least one of the following: video; dynamic image.
  • after the first model corresponding to the first object is acquired, the first model is fused with the background image to obtain at least two second images, wherein the display positions of the first model in the at least two second images are different, or the background contents of the at least two second images are different, and a video or dynamic image is generated according to the at least two second images.
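The fusion step above can be sketched in a few lines. This is a minimal pure-Python illustration, not the patent's implementation: images are modeled as 2-D lists of pixel values, `None` marks a transparent model pixel, and each display position yields one second image; a real implementation would operate on actual image buffers.

```python
# Minimal sketch of the fusion step (illustrative, not the patent's
# code): fuse a "first model" with a background image at several
# display positions to obtain at least two second images (the frames
# of the target file).

def fuse(background, model, pos):
    """Return a copy of `background` with `model` pasted at (row, col)."""
    frame = [row[:] for row in background]   # leave the background intact
    r0, c0 = pos
    for r, row in enumerate(model):
        for c, px in enumerate(row):
            if px is not None:               # None = transparent pixel
                frame[r0 + r][c0 + c] = px
    return frame

def synthesize_second_images(background, model, positions):
    # At least two second images, with different display positions,
    # are needed to synthesize a video or dynamic image.
    assert len(positions) >= 2
    return [fuse(background, model, p) for p in positions]

bg = [[0] * 6 for _ in range(4)]             # 4x6 background of zeros
mdl = [[1, 1], [1, None]]                    # tiny model with a hole
frames = synthesize_second_images(bg, mdl, [(0, 0), (1, 2), (2, 4)])
```

Each frame here corresponds to one second image; encoding the frame list as a video or animated image is left to a media library.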
  • the method may further include: receiving an eighth input from the user; and acquiring a target filter effect in response to the eighth input.
  • acquiring the first image includes: adjusting the display parameters of the first image according to the target parameter value corresponding to the target filter effect to obtain the first image with the target filter effect.
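As a hedged sketch of that parameter adjustment, each filter effect below maps to target parameter values that are applied to the display parameters of the image. The filter names and parameter values are invented for illustration; real filters adjust far more than two scalars.

```python
# Illustrative only: each target filter effect carries target parameter
# values; applying the filter adjusts the display parameters of the
# first image accordingly.

FILTERS = {
    "filter_a": {"brightness": 1.2, "contrast": 1.0},
    "filter_b": {"brightness": 0.9, "contrast": 1.3},
    "filter_c": {"brightness": 1.1, "contrast": 1.1},
}

def apply_filter(pixels, filter_name):
    params = FILTERS[filter_name]
    out = []
    for v in pixels:
        v = v * params["contrast"]      # simplified contrast scaling
        v = v * params["brightness"]    # simplified brightness scaling
        out.append(min(255, round(v)))  # clamp to the 8-bit range
    return out
```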
  • the eighth input may be an input for selecting a target filter effect.
  • the eighth input may be a click input.
  • the click input in the embodiment of the present application may be a single click input, a double click input, or any number of click inputs, etc., and may also be a long press input or a short press input.
  • FIG. 8 is a schematic diagram of obtaining a target filter effect according to an embodiment of the present application.
  • the shooting preview interface includes a filter option 801
  • various filter effects are displayed on the shooting preview interface, for example, filter a, filter b, and filter c
  • filter c is determined as the target filter effect.
  • when shooting a video or a dynamic image, the user can select a filter effect, obtain a first image with the target filter effect, and use the first image with the target filter effect as the background image, so that the first model corresponding to the first object is fused with the background image to generate a video or dynamic image.
  • the video or dynamic image of the first object can be obtained, and the generated video or dynamic image has the target filter effect, which further improves the display effect of the video or dynamic image.
  • the first image including the first object is obtained, and the first model corresponding to the first object is fused with the background image to obtain at least two second images, wherein the display positions of the first model in the at least two second images are different; a video or dynamic image is synthesized from the at least two second images, and the video or dynamic image is output.
  • when the first object is photographed, the first object can be replaced with the first model, so that the first object is more vivid, which can make shooting more fun.
  • the user only needs to take one image to obtain the video or dynamic image of the first object, without using special video production software, and the operation is simple.
  • the shooting method provided in the embodiment of the present application may be executed by a shooting device.
  • in the embodiment of the present application, the photographing device executing the photographing method is taken as an example to illustrate the photographing device provided in the embodiment of the present application.
  • the embodiment of the present application further provides a photographing device 900 , which includes a first acquisition module 901 , a fusion module 902 and a first output module 903 .
  • the first acquiring module 901 is configured to acquire a first image, where the first image includes a first object;
  • the fusion module 902 is configured to fuse the first model corresponding to the first object with the background image to obtain at least two second images, wherein the display positions of the first model in the at least two second images are different, and the background image is the first image or the third image;
  • the first output module 903 is configured to output a target file, the target file is synthesized from the at least two second images;
  • the target file includes at least one of the following: video; dynamic image.
  • the fusion module 902 is specifically configured to fuse the first model corresponding to the first object with a target background image, where the target background image is at least a part of the background image.
  • the photographing device 900 further includes: an adjustment module, configured to adjust a display position of the first model corresponding to the first object in the background image.
  • the adjustment module is specifically configured to adjust the display position of the first model corresponding to the first object in the background image according to the first track.
  • the adjustment module includes: a receiving unit, configured to receive a first input from a user, and the first input is used to determine a first trajectory; an adjustment unit, configured to respond to the first input, according to the The first track is used to adjust the display position of the first model corresponding to the first object in the background image.
  • the photographing device 900 further includes: a second acquisition module, configured to acquire the first model from a model library according to the first object, the model library includes a sample object and a sample model corresponding to the sample object .
  • the photographing device 900 further includes: a first storage module, configured to store the second model corresponding to the sample object in the model library in the case of receiving the second model corresponding to the sample object sent by the communication object; or
  • a third acquisition module configured to acquire a second model corresponding to the sample object according to preset information
  • a second storage module, configured to store the second model in the model library; wherein the preset information includes at least one of the following: link information; information code.
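A minimal sketch of acquiring a second model from preset information might look as follows; the resolver table, link, and code values are purely illustrative assumptions, since the text only says the preset information includes link information or an information code.

```python
# Hypothetical resolver: preset information (link information or an
# information code) maps to a model identifier.

PRESET_SOURCES = {
    "link": {"https://example.com/models/42": "model_42"},
    "code": {"QR-7F3A": "model_7f3a"},
}

def acquire_second_model(kind, value):
    """Return the second model referenced by the preset information."""
    return PRESET_SOURCES[kind].get(value)
```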
  • the photographing device 900 further includes: a receiving module, configured to receive a second input from the user; a fourth acquisition module, configured to acquire at least two fourth images in response to the second input, the image content of the sample object in each fourth image being different; a second output module, configured to output the third model of the sample object according to the at least two fourth images; and a third storage module, configured to store the third model in the model library.
  • the first image including the first object is obtained, and the first model corresponding to the first object is fused with the background image to obtain at least two second images, wherein the display positions of the first model in the at least two second images are different; a video or dynamic image is synthesized from the at least two second images, and the video or dynamic image is output.
  • when the first object is photographed, the first object can be replaced with the first model, so that the first object is more vivid, which can make shooting more fun.
  • the user only needs to take one image to obtain the video or dynamic image of the first object, without using special video production software, and the operation is simple.
  • the photographing device in the embodiment of the present application may be an electronic device, or may be a component in the electronic device, such as an integrated circuit or a chip.
  • the electronic device may be a terminal, or other devices other than the terminal.
  • the electronic device can be a mobile phone, a tablet computer, a notebook computer, a handheld computer, a vehicle electronic device, a mobile Internet device (Mobile Internet Device, MID), an augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) ) equipment, robot, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc.
  • the embodiment of the present application does not specifically limit it.
  • the photographing device in the embodiment of the present application may be a device with an operating system.
  • the operating system may be an Android operating system, an Apple mobile device operating system (iPhone Operation System, ios), or other possible operating systems, which are not specifically limited in the embodiment of the present application.
  • the photographing device provided by the embodiment of the present application can realize various processes realized by the method embodiment in FIG. 1 , and details are not repeated here to avoid repetition.
  • the embodiment of the present application also provides an electronic device 1000, including a processor 1001 and a memory 1002, the memory 1002 storing programs or instructions that can run on the processor 1001. When the programs or instructions are executed by the processor 1001, the various steps of the above-mentioned photographing method embodiments can be realized, and the same technical effect can be achieved. To avoid repetition, details are not repeated here.
  • the electronic device in the embodiment of the present application includes the above-mentioned mobile electronic device.
  • FIG. 11 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
  • the electronic device 1100 includes, but is not limited to: a radio frequency unit 1101, a network module 1102, an audio output unit 1103, an input unit 1104, a sensor 1105, a display unit 1106, a user input unit 1107, an interface unit 1108, a memory 1109, a processor 1110, and other components.
  • the electronic device 1100 may also include a power supply (such as a battery) for supplying power to various components, and the power supply can be logically connected to the processor 1110 through a power management system, so that functions such as charging management, discharging management, and power consumption management are realized through the power management system.
  • the structure of the electronic device shown in Figure 11 does not constitute a limitation on the electronic device, and the electronic device may include more or fewer components than shown in the figure, combine certain components, or arrange components differently, which will not be repeated here.
  • the processor 1110 is configured to: acquire a first image, the first image including a first object; fuse the first model corresponding to the first object with the background image to obtain at least two second images, wherein the display positions of the first model in the at least two second images are different, and the background image is the first image or the third image; and output a target file, the target file being synthesized from the at least two second images; wherein the target file includes at least one of the following: video; dynamic image.
  • when the processor 1110 fuses the first model corresponding to the first object with the background image, it is configured to: fuse the first model corresponding to the first object with a target background image, the target background image being at least a part of the background image.
  • the processor 1110 is further configured to: adjust the display position of the first model corresponding to the first object in the background image.
  • when adjusting the display position of the first model corresponding to the first object in the background image, the processor 1110 is configured to: adjust the display position of the first model corresponding to the first object in the background image according to the first trajectory.
  • when the display position of the first model corresponding to the first object in the background image is adjusted according to the first trajectory, the user input unit 1107 is configured to: receive a first input from the user, the first input being used to determine the first trajectory; and the processor 1110 is configured to: in response to the first input, adjust the display position of the first model corresponding to the first object in the background image according to the first trajectory.
  • the processor 1110 is further configured to: acquire the first model from a model library according to the first object, the model library including a sample object and a sample model corresponding to the sample object.
  • the memory 1109 is configured to: in the case of receiving the second model corresponding to the sample object sent by the communication object, store the second model corresponding to the sample object in the model library; or
  • the processor 1110 is configured to: acquire a second model corresponding to the sample object according to preset information; the memory 1109 is configured to: store the second model in the model library; wherein the preset information includes at least one of the following: link information; information code.
  • the user input unit 1107 is further configured to: receive a second input from the user; the processor 1110 is further configured to: in response to the second input, acquire at least two fourth images, the image content of the sample object in each fourth image being different, and output the third model of the sample object according to the at least two fourth images; and the memory 1109 is further configured to: store the third model in the model library.
  • the first image including the first object is obtained, and the first model corresponding to the first object is fused with the background image to obtain at least two second images, wherein the display positions of the first model in the at least two second images are different; a video or dynamic image is synthesized from the at least two second images, and the video or dynamic image is output.
  • when the first object is photographed, the first object can be replaced with the first model, so that the first object is more vivid, which can make shooting more fun.
  • the user only needs to take one image to obtain the video or dynamic image of the first object, without using special video production software, and the operation is simple.
  • the input unit 1104 may include a graphics processing unit (Graphics Processing Unit, GPU) 11041 and a microphone 11042, and the graphics processor 11041 processes image data of still pictures or videos obtained by an image capture device (such as a camera).
  • the display unit 1106 may include a display panel 11061, and the display panel 11061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the user input unit 1107 includes a touch panel 11071, also called a touch screen, and other input devices 11072.
  • the touch panel 11071 may include two parts, a touch detection device and a touch controller.
  • Other input devices 11072 may include, but are not limited to, physical keyboards, function keys (such as volume control keys, switch keys, etc.), trackballs, mice, and joysticks, which will not be repeated here.
  • the memory 1109 can be used to store software programs as well as various data.
  • the memory 1109 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, wherein the first storage area may store an operating system, an application program or instructions required by at least one function (such as a sound playing function, image playback function, etc.), etc.
  • memory 1109 may include volatile memory or nonvolatile memory, or, memory 1109 may include both volatile and nonvolatile memory.
  • the non-volatile memory can be read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable ROM, PROM), erasable programmable read-only memory (Erasable PROM, EPROM), electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or flash memory.
  • volatile memory can be random access memory (Random Access Memory, RAM), static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDRSDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (Synchlink DRAM, SLDRAM), and direct rambus random access memory (Direct Rambus RAM, DRRAM).
  • the processor 1110 may include one or more processing units; optionally, the processor 1110 integrates an application processor and a modem processor, wherein the application processor mainly handles operations related to the operating system, user interface, and application programs, while the modem processor mainly handles wireless communication signals, such as a baseband processor. It can be understood that the foregoing modem processor may not be integrated into the processor 1110.
  • the embodiment of the present application also provides a readable storage medium, the readable storage medium stores a program or an instruction, and when the program or instruction is executed by a processor, each process of the above-mentioned photographing method embodiment is realized, and can achieve the same Technical effects, in order to avoid repetition, will not be repeated here.
  • the processor is the processor in the electronic device described in the above embodiments.
  • the readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • the embodiment of the present application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is used to run programs or instructions to realize each process of the above shooting method embodiments, and can achieve the same technical effect; to avoid repetition, details are not repeated here.
  • the chip mentioned in the embodiments of the present application may also be called a system-on-chip, a system chip, a chip system, or a system-on-a-chip.
  • an embodiment of the present application provides a computer program product, the program product being stored in a storage medium, and the program product being executed by at least one processor to implement the various processes in the above-mentioned shooting method embodiments, and can achieve the same technical effect; to avoid repetition, details are not repeated here.
  • the terms "comprise", "include", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus comprising a set of elements includes not only those elements but also other elements not expressly listed, or elements inherent in the process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not preclude the presence of additional identical elements in the process, method, article, or apparatus comprising that element.
  • the scope of the methods and devices in the embodiments of the present application is not limited to performing functions in the order shown or discussed, and may also include performing functions in a substantially simultaneous manner or in the reverse order according to the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
  • the disclosed devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division, and there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • modules, units, and subunits can be implemented in one or more application-specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processor, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field-programmable gate arrays (Field-Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in the present disclosure, or a combination thereof.
  • the technologies described in the embodiments of the present disclosure may be implemented through modules (such as procedures, functions, etc.) that execute the functions described in the embodiments of the present disclosure.
  • Software codes can be stored in memory and executed by a processor.
  • Memory can be implemented within the processor or external to the processor.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

This application discloses a photographing method, apparatus, and electronic device, belonging to the technical field of image processing. The method includes: acquiring a first image, the first image including a first object; fusing a first model corresponding to the first object with a background image to obtain at least two second images, wherein the display positions of the first model in the at least two second images are different, and the background image is the first image or a third image; and outputting a target file, the target file being synthesized from the at least two second images; wherein the target file includes at least one of the following: a video; a dynamic image.

Description

Photographing Method, Apparatus, and Electronic Device

Cross-Reference to Related Applications

This application claims priority to Chinese Patent Application No. 202210119654.0, filed with the China National Intellectual Property Administration on February 8, 2022, and entitled "Photographing Method, Apparatus, and Electronic Device", the entire contents of which are incorporated herein by reference.
Technical Field

This application belongs to the technical field of image processing, and specifically relates to a photographing method, apparatus, and electronic device.

Background

The photographing function is one of the most commonly used functions of electronic devices; in daily life, people can use electronic devices to capture images. However, the captured images are usually still images in which the photographed object is static, so the photographed object cannot be displayed vividly.

To address this, in the related art, a captured still image can be processed by dedicated image processing software to turn the photograph into a video. However, this approach requires post-processing of the image with dedicated image processing software before a video can be obtained, which is cumbersome and difficult to operate.
Summary

The purpose of the embodiments of this application is to provide a photographing method, apparatus, and electronic device, which can solve the problem that the photographed object in an image captured by an electronic device is in a static state and cannot be displayed vividly.

In a first aspect, an embodiment of this application provides a photographing method, the method including:

acquiring a first image, the first image including a first object;

fusing a first model corresponding to the first object with a background image to obtain at least two second images, wherein the display positions of the first model in the at least two second images are different, and the background image is the first image or a third image;

outputting a target file, the target file being synthesized from the at least two second images;

wherein the target file includes at least one of the following:

a video;

a dynamic image.
In a second aspect, an embodiment of this application provides a photographing apparatus, the apparatus including:

a first acquisition module, configured to acquire a first image, the first image including a first object;

a fusion module, configured to fuse a first model corresponding to the first object with a background image to obtain at least two second images, wherein the display positions of the first model in the at least two second images are different, and the background image is the first image or a third image;

a first output module, configured to output a target file, the target file being synthesized from the at least two second images;

wherein the target file includes at least one of the following:

a video;

a dynamic image.
In a third aspect, an embodiment of this application provides an electronic device, the electronic device including a processor and a memory, the memory storing a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implementing the steps of the method according to the first aspect.

In a fourth aspect, an embodiment of this application provides a readable storage medium, the readable storage medium storing a program or instructions, and the program or instructions, when executed by a processor, implementing the steps of the method according to the first aspect.

In a fifth aspect, an embodiment of this application provides a chip, the chip including a processor and a communication interface, the communication interface being coupled to the processor, and the processor being configured to run a program or instructions to implement the method according to the first aspect.

In a sixth aspect, an embodiment of this application provides a computer program product, the program product being stored in a storage medium, and the program product being executed by at least one processor to implement the method according to the first aspect.

In the embodiments of this application, a first image including a first object is acquired, and a first model corresponding to the first object is fused with a background image to obtain at least two second images, wherein the display positions of the first model in the at least two second images are different; a video or dynamic image is synthesized from the at least two second images, and the video or dynamic image is output. In this way, when the first object is photographed, the first object can be replaced with the first model, making the first object more vivid and the shooting more fun. Moreover, the user only needs to capture one image to obtain a video or dynamic image of the first object, without resorting to dedicated video production software, and the operation is simple.
Brief Description of the Drawings

FIG. 1 is a schematic flowchart of a photographing method provided by an embodiment of this application;

FIG. 2 is a schematic diagram of a shooting preview interface provided by an embodiment of this application;

FIG. 3 is a schematic diagram of acquiring a first trajectory provided by an embodiment of this application;

FIG. 4 is a first schematic diagram of a model library adding interface provided by an embodiment of this application;

FIG. 5 is a second schematic diagram of a model library adding interface provided by an embodiment of this application;

FIG. 6 is a third schematic diagram of a model library adding interface provided by an embodiment of this application;

FIG. 7 is a schematic diagram of a model library deleting interface provided by an embodiment of this application;

FIG. 8 is a schematic diagram of acquiring a target filter effect provided by an embodiment of this application;

FIG. 9 is a schematic structural diagram of a photographing apparatus provided by an embodiment of this application;

FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of this application;

FIG. 11 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of this application.
Detailed Description

The technical solutions in the embodiments of this application will be clearly described below with reference to the accompanying drawings in the embodiments of this application. Obviously, the described embodiments are only some, rather than all, of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application fall within the protection scope of this application.

The terms "first", "second", and the like in the specification and claims of this application are used to distinguish similar objects, and are not used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of this application can be implemented in orders other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are usually of one type, and the number of objects is not limited. In addition, "and/or" in the specification and claims indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.

The photographing method provided by the embodiments of this application is described in detail below through specific embodiments and their application scenarios with reference to the accompanying drawings.
Please refer to FIG. 1, which is a flowchart of a photographing method provided by an embodiment of this application. The method can be applied to an electronic device, which may be a mobile phone, a tablet computer, a notebook computer, or the like. As shown in FIG. 1, the method may include steps 101 to 103, which are described in detail below.

Step 101: acquire a first image, the first image including a first object.

In this embodiment, the first image may be an image containing the first object captured by a camera of the electronic device. The first image may also be an image containing the first object selected from an album of the electronic device. The first object may be a photographed object to be processed in the first image. The first object may be an animal, a plant, an article, or the like. The article may be, for example, a cartoon character, a mascot, or an exhibit.
In some optional embodiments, acquiring the first image may further include: receiving a third input from the user, and acquiring the first image in response to the third input.

In this embodiment, the third input may be used to capture the first image. Exemplarily, the third input may be a click input by the user on a target control, or a specific gesture input by the user, which may be determined according to actual use requirements and is not limited in the embodiments of this application. The click input in the embodiments of this application may be a single-click input, a double-click input, or any number of click inputs, and may also be a long-press input or a short-press input. The specific gesture in the embodiments of this application may be any one of a single-tap gesture, a slide gesture, and a drag gesture.

In some optional embodiments, before acquiring the first image, the method may further include: receiving a fourth input from the user, and enabling a first shooting mode in response to the fourth input.

In this embodiment, the first shooting mode may be a shooting mode in which a target file is output based on the captured first image. The fourth input may be used to enable the first shooting mode of the camera application. Exemplarily, the fourth input may be a click input by the user on a target control, or a specific gesture input by the user, which may be determined according to actual use requirements and is not limited in the embodiments of this application. The specific gesture in the embodiments of this application may be any one of a single-tap gesture, a slide gesture, and a drag gesture; the click input in the embodiments of this application may be a single-click input, a double-click input, or any number of click inputs, and may also be a long-press input or a short-press input.

It should be noted that an electronic device with a shooting function provides the user with a variety of shooting modes, for example, a panorama mode, a beauty mode, and a video recording mode. Similarly, in this embodiment, the camera application of the electronic device includes the first shooting mode, which specifically refers to a shooting mode in which a target file is output based on the captured first image.

Exemplarily, please refer to FIG. 2, which is a schematic diagram of a shooting preview interface according to an embodiment of this application. Specifically, when the user uses the electronic device to capture an image, the camera application is started, and the electronic device displays the shooting preview interface, which includes an option 201 for the first shooting mode. Upon receiving the user's click input on the option 201 for the first shooting mode, the first shooting mode is entered.

In the embodiments of this application, the user can choose whether to enable the first shooting mode according to actual use requirements, and when the user enables the first shooting mode, the target file is output based on the acquired first image.
在步骤101之后,执行步骤102,将所述第一对象对应的第一模型与背景图像融合,得到至少两张第二图像,其中,所述至少两张第二图像中所述第一模型的显示位置不同,所述背景图像为所述第一图像或第三图像。
在本实施例中,第一模型可以是二维模型,也可以是三维模型。例如。第一对象为卡通形象,第一模型可以是卡通形象的三维模型。第一模型可以是从模型库中选取的与第一对象对应的模型。模型库可以是预先建立的、用于存储样本对象和与样本对象对应的样本模型。
在本实施例中,背景图像可以是第一图像,也可以第三图像。第三图像可以仅包括与第一图像相同的背景画面。在具体实施时,在获取第一图像之前,可以响应于用户的输入,获取第三图像,之后,获取第一图像,第一图像与第三图像的背景相同,且第一图像还包括第一对象。
第二图像可以是用于生成目标文件,例如,视频、或动态图像。将第一对象对应的第一模型与背景图像融合,得到至少两张第二图像,可以是获取至少两张背景图像,将第一对象对应的第一模型与每一张背景图像融合,得 到至少两张第二图像,其中,至少两张第二图像中第一模型在背景图像中的显示位置不同。
As an example, when the background image is the first image, fusing the first model corresponding to the first object with the background image to obtain at least two second images may be fusing the first model with the first image. That is, the captured first image containing the first object is used as the background image, and each of at least two first images is fused with the first model corresponding to the first object to obtain at least two second images, where the display positions of the first model differ among the at least two second images, or the background content differs among them. In this way, the video or dynamic image synthesized from the at least two second images includes the first object and the corresponding first model with a dynamic effect.
As an example, when the background image is the third image, fusing the first model corresponding to the first object with the background image to obtain at least two second images may be fusing the first model with the third image. That is, the third image and the first image are acquired, where the third image has the same background scene as the first image and the first image further includes the first object; the captured third image is then used as the background image, and each of at least two third images is fused with the first model corresponding to the first object to obtain at least two second images, where the display positions of the first model differ among the at least two second images, or the background content differs among them. In this way, a video or dynamic image with a dynamic effect can be synthesized from the at least two second images.
In this embodiment, the display content of the background image may also differ among the at least two second images.
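The fusion step described above can be pictured in a few lines of Python. This is a minimal sketch only, not the disclosed implementation: it assumes the first model has already been rendered into a small pixel patch and treats both images as plain 2D pixel grids; the pixel values and positions are invented for illustration.

```python
def fuse(background, model, position):
    """Paste the rendered first model onto a copy of the background image
    at the given (row, col) display position, yielding one second image."""
    frame = [row[:] for row in background]          # leave the background image intact
    r0, c0 = position
    for r, model_row in enumerate(model):
        for c, pixel in enumerate(model_row):
            if 0 <= r0 + r < len(frame) and 0 <= c0 + c < len(frame[0]):
                frame[r0 + r][c0 + c] = pixel
    return frame

background = [[0] * 8 for _ in range(8)]            # stand-in for the background image
model = [[9, 9], [9, 9]]                            # stand-in for the rendered first model

# At least two second images, with the first model at a different
# display position in each, as step 102 requires.
second_images = [fuse(background, model, p) for p in [(1, 1), (4, 5)]]
```

Repeating the paste at each position in turn is what produces the dynamic effect once the frames are played back in order.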
In some embodiments of this application, fusing the first model corresponding to the first object with the background image includes: fusing the first model corresponding to the first object with a target background image, where the target background image is at least part of the background image.
In this embodiment, the target background image may be at least part of the background image, and its background scene may follow the display position of the first model. For example, as the display position of the first model corresponding to the first object moves from far to near, the target background image may gradually change from a distant view to a close-up view accordingly; as an example, when the first model corresponding to the first object translates, the display content of the target background image may change with the display position of the first model.
In this embodiment, when the first model corresponding to the first object is fused with the background image, it may be fused with the target background image to obtain at least two second images. In this way, the display position of the first model in the background image differs among the resulting second images, and the display content of the background image can change with the display position of the first model, so that a video or dynamic image of the first object can be obtained from a captured image.
In this embodiment, the display position of the first model in the background image may differ among the at least two second images.
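One simple way to make the target background image follow the model, sketched below under invented assumptions (a fixed window size and a coordinate-encoded pixel grid), is to crop a window of the background centered on the model's current display position:

```python
def target_background(background, center, size):
    """Crop a size-by-size window of the background image, clamped to its
    borders and centered on the first model's display position."""
    half = size // 2
    r0 = min(max(center[0] - half, 0), len(background) - size)
    c0 = min(max(center[1] - half, 0), len(background[0]) - size)
    return [row[c0:c0 + size] for row in background[r0:r0 + size]]

# A 10x10 background whose pixel value encodes its own coordinates.
background = [[10 * r + c for c in range(10)] for r in range(10)]

# As the model's display position moves from (2, 2) to (7, 7), the
# target background image shown behind it changes accordingly.
view_a = target_background(background, (2, 2), 4)
view_b = target_background(background, (7, 7), 4)
```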
In some embodiments of this application, before fusing the first model corresponding to the first object with the background image, the method further includes: adjusting the display position of the first model corresponding to the first object in the background image.
In this embodiment, before the first model corresponding to the first object is fused with the background image, the display position of the first model in the background image may be adjusted to obtain at least two second images, so that a video or dynamic image of the first object can be obtained from the at least two second images.
In some embodiments of this application, adjusting the display position of the first model corresponding to the first object in the background image includes: adjusting the display position of the first model corresponding to the first object in the background image according to a first trajectory.
In this embodiment, the first trajectory may be the movement trajectory of the first model corresponding to the first object. The first trajectory may be preset, for example, a straight line or a curve; it may be entered by the user; or it may be a trajectory obtained by analyzing the background image, for example, determined from the depth values of the background image.
In this embodiment, after the first image is acquired, the display position of the first model corresponding to the first object in the background image is adjusted according to the first trajectory, and the first model is fused with the background image to obtain at least two second images, from which a video or dynamic image of the first object is synthesized. In this way, when the first image is captured, the first object in the first image can be replaced with the first model, and the first model can move along the first trajectory, making the first object more vivid.
In some embodiments of this application, adjusting the display position of the first model corresponding to the first object in the background image according to the first trajectory includes: receiving a first input from the user, where the first input is used to determine the first trajectory; and in response to the first input, adjusting the display position of the first model corresponding to the first object in the background image according to the first trajectory.
In this embodiment, the first input may be an input for obtaining the first trajectory. For example, the first input may be a slide gesture entered by the user. The first trajectory includes a start point and an end point; the start point may be the start position of the slide gesture entered by the user, and the end point may be its end position.
In a specific implementation, adjusting the display position of the first model in the background image according to the first trajectory may mean moving the display position of the first model from the start point of the first trajectory to its end point. Optionally, the first model may move at a preset distance interval. That is, based on the preset distance interval and the first trajectory, the required number of background images and the display position of the first model in each background image can be determined; on this basis, the display position of the first model in the background image is adjusted according to the first trajectory, and the first model is fused with the background image to obtain at least two second images.
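The preset distance interval above amounts to resampling the drawn trajectory at a fixed arc-length step: each resulting point is the model's display position in one background image, and the number of points is the number of second images. A stdlib-only sketch (the step value and the L-shaped example trajectory are assumptions, not from the disclosure):

```python
import math

def sample_trajectory(points, step):
    """Resample a polyline (the first trajectory) at a fixed distance
    interval, returning the model's display position for each frame."""
    samples = [points[0]]
    carry = 0.0  # arc length already covered since the last sample
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        d = step - carry
        while d <= seg:
            t = d / seg
            samples.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            d += step
        carry = (carry + seg) % step
    return samples

# An L-shaped trajectory of length 7 sampled every 1.0 units:
# the start point plus 7 evenly spaced display positions.
positions = sample_trajectory([(0, 0), (3, 0), (3, 4)], 1.0)
```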
As an example, referring to FIG. 3, which is a schematic diagram of obtaining a first trajectory according to an embodiment of this application. Specifically, after the first shooting mode is entered, the shooting preview interface includes a photo option 301. When the user taps the photo option 301, a trajectory setting option 302 is displayed in the shooting preview interface; in response to the user's tap on the trajectory setting option 302, the user can draw the first trajectory. For example, the user draws an S-shaped first trajectory 303, then taps a photo control 304 to capture the first image, and at least two second images are obtained based on the first trajectory drawn by the user, so that the target file is synthesized from the at least two second images. In addition, when the user taps the photo option 301, a default setting option 305 is also displayed in the shooting preview interface; in response to the user's tap on the default setting option 305, the display position of the first model corresponding to the first object in the background image is adjusted based on a default trajectory to obtain at least two second images, and the target file is synthesized from the at least two second images.
In this embodiment, the user can draw the first trajectory according to actual needs, so that the first model corresponding to the first object moves along the first trajectory and a specific display effect is obtained; the user can interact with the first object in an entertaining way, which makes shooting more fun. Moreover, the operation is simple, and the target file can be generated quickly.
In some embodiments of this application, after acquiring the first image, the method further includes: obtaining the first model from a model library according to the first object, where the model library includes sample objects and the sample models corresponding to the sample objects.
In this embodiment, the model library may include sample objects and the sample models corresponding to the sample objects. A sample object may be an animal, a plant, an article, or the like; the article may be, for example, a cartoon character, a mascot, or an exhibit. A sample object may be displayed in the form of an image. A sample model may be a two-dimensional model or a three-dimensional model. For each sample object, the model library may include one sample model corresponding to it, or multiple sample models corresponding to it, where the multiple sample models of one sample object differ from one another, for example, sample models of different forms or with different display effects.
In a specific implementation, after the first image is acquired, the first object in the first image is compared with the sample objects in the model library, that is, the first image containing the first object is compared with the image of each sample object in the model library. When the first object successfully matches a sample object, the sample model corresponding to that sample object is used as the first model corresponding to the first object.
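The library lookup just described can be pictured as follows. This is a hypothetical sketch: the dictionary keys stand in for sample-object images, string equality stands in for the actual image comparison, and every model name is an invented placeholder rather than part of the disclosure.

```python
# Hypothetical model library: each sample object maps to one or more sample models.
model_library = {
    "cartoon_cat": ["cat_model_3d"],
    "mascot_bear": ["bear_model_2d", "bear_model_3d"],
}

def match_first_model(first_object, library, chosen=0):
    """Compare the first object against every sample object; on a successful
    match, return the corresponding sample model as the first model."""
    for sample_object, sample_models in library.items():
        if first_object == sample_object:   # stand-in for image comparison
            # When a sample object has several sample models, `chosen`
            # plays the role of the user's fifth input selecting one.
            return sample_models[chosen]
    return None                             # no match found in the library

first_model = match_first_model("mascot_bear", model_library, chosen=1)
```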
It should be noted here that, when a sample object corresponds to multiple sample models and the first object successfully matches that sample object, the multiple sample models corresponding to the sample object are displayed, and upon receiving a fifth input from the user on a target sample model among the multiple sample models, the target sample model is used as the first model corresponding to the first object.
In this embodiment, the fifth input may be an input for obtaining the first model. For example, the fifth input may be a click input on a target control; a click input in the embodiments of this application may be a single-click input, a double-click input, an input of any number of clicks, a long-press input, or a short-press input.
In this embodiment, the first model corresponding to the first object can be obtained from the model library and fused with the background image to obtain at least two second images, from which the target file is output. In this way, the first object in the captured first image is replaced with the first model, so the first object can be presented more vividly, and the resulting video or dynamic image of the first object has a better display effect. Moreover, based on the pre-built model library, the first model can be obtained quickly, which improves the response speed of the electronic device.
In this embodiment, the model library may be pre-built, as explained below with specific embodiments.
In some optional embodiments, before obtaining the first model from the model library according to the first object, the method may further include: receiving a sixth input from the user, and in response to the sixth input, storing a fourth model corresponding to a sample object into the model library.
In this embodiment, storing the fourth model corresponding to the sample object into the model library may be: acquiring the fourth model corresponding to the sample object, using the fourth model of the sample object as the sample model corresponding to the sample object, and storing the sample object and its corresponding sample model into the model library in association. On this basis, when the first model corresponding to the first object is obtained, the first object can be compared with the sample objects, and when the first object successfully matches a sample object, the first model is obtained from the sample models corresponding to that sample object.
The sixth input may be an input for importing the sample model corresponding to a sample object into the model library. For example, the sixth input may be a click input on a target control; a click input in the embodiments of this application may be a single-click input, a double-click input, an input of any number of clicks, a long-press input, or a short-press input.
Referring to FIG. 4, which is a schematic diagram of a model library adding interface according to an embodiment of this application. Specifically, after the first shooting mode is entered, the shooting preview interface includes a model library option 401; the user taps the model library option 401 to enter the model library. The display interface of the model library includes a sample object display area 402 and a sample model display area 403. The sample object display area 402 is used to display the stored sample objects 404 in the form of images, and the sample objects 404 may be shown as thumbnails. The sample model display area 403 is used to display the stored sample models 405 corresponding to the sample objects, and the sample models 405 may also be shown as thumbnails. The display interface of the model library further includes an add option 406. In response to the user's click input on the add option 406, a first add control 407 is displayed in the sample object display area 402; upon receiving the user's click input on the first add control 407, a sample object is imported into the model library. A second add control 408 is displayed in the sample model display area 403; upon receiving the user's click input on the second add control 408, a fourth model is acquired and stored into the model library as the sample model corresponding to the sample object. It should be noted that a sample object may correspond to one sample model or to multiple sample models.
In this embodiment, the user can import sample objects and the sample models corresponding to them into the model library in advance, so that the first model corresponding to the first object can later be obtained based on the sample objects, and the first model can be fused with the background image to obtain at least two second images, from which a video or dynamic image of the first object is generated.
In still other optional embodiments, before obtaining the first model from the model library according to the first object, the method may further include: acquiring a second model corresponding to a sample object according to preset information, and storing the second model into the model library, where the preset information includes at least one of the following: link information; an information code.
In this embodiment, the link information may be website address information used to acquire the second model corresponding to a sample object. For example, when the user wants to shoot a video of a certain cartoon character, the model of that cartoon character can be obtained from the corresponding website. The information code may be, for example, a QR code that stores the second model corresponding to a sample object. For example, when visiting a museum and wanting to photograph an exhibit, the model of that exhibit can be obtained by scanning the corresponding QR code.
As an example, referring to FIG. 5, which is a schematic diagram of another model library adding interface according to an embodiment of this application. Specifically, the display interface of the model library includes a quick add option 501. In response to the user's click input on the quick add option 501, multiple ways of adding sample models are displayed, for example, a link information adding area 502 and a QR code scanning entry 503. The user can then enter the corresponding link information in the link information adding area 502; based on the link information entered by the user, the image of the sample object and the second model corresponding to the sample object can be acquired, the second model corresponding to the sample object is used as the sample model corresponding to the sample object, and the image of the sample object and the sample model are stored into the model library in association. The user can also tap the QR code scanning entry 503 and, by scanning a QR code, acquire the image of the sample object and the second model corresponding to the sample object, use the second model as the sample model corresponding to the sample object, and store the image of the sample object and the sample model into the model library in association. Later, when the first model corresponding to the first object is obtained, the first image containing the first object can be compared with the images of the sample objects, and when the first object successfully matches a sample object, the first model is obtained from the sample models corresponding to that sample object. It should be noted here that a sample object may correspond to one second model or to multiple second models.
In this embodiment, multiple ways of acquiring the second model corresponding to a sample object are provided, which makes it convenient for the user to quickly obtain the desired model; the operation is simple and the use is more flexible.
In other optional embodiments, before obtaining the first model from the model library according to the first object, the method may further include: upon receiving a second model corresponding to a sample object sent by a communication contact, storing the second model corresponding to the sample object into the model library.
In this embodiment, the electronic device implementing this shooting method can establish a communication connection with other electronic devices to receive a sample object and the second model corresponding to the sample object sent by a communication contact, use the second model corresponding to the sample object as the sample model corresponding to the sample object, and store the sample object and the sample model into the model library in association. It can be understood that the user can also use the electronic device to send a sample object in the model library and the second model corresponding to it to a communication contact.
As an example, referring to FIG. 5 and FIG. 6, which are schematic diagrams of yet another model library adding interface according to an embodiment of this application. Specifically, the display interface of the model library includes a quick add option. In response to the user's click input on the quick add option, a "friend transfer" control 504 is displayed in the quick add interface. The user taps the "friend transfer" control 504 to enter the friend transfer interface, which includes a control 601 labeled "send" and a control 602 labeled "receive". In response to the user's click input on the control 601 labeled "send", a model selection interface is entered, where the user can select the sample objects and the sample models of the sample objects to be transferred, for example, by tapping the "+" add mark 603 on the image of a sample object to select the sample object and its corresponding sample models from the model library. After selecting the sample objects and sample models to be transferred, the user taps the "confirm" option, and the selected sample objects and their corresponding sample models are sent to the communication contact. In response to the user's click input on the control 602 labeled "receive", the wireless communication module of the user's electronic device is enabled, for example, its Wi-Fi or Bluetooth, so that another user's electronic device can establish a communication connection with this user's electronic device in order to receive the images of sample objects and the corresponding second models sent by the other user; each second model is used as the sample model corresponding to its sample object, and the image of the sample object and the sample model are stored into the model library in association. Later, when the first model corresponding to the first object is obtained, the first image containing the first object can be compared with the images of the sample objects, and when the first object successfully matches a sample object, the first model is obtained from the sample models corresponding to that sample object. It should be noted here that a sample object may correspond to one second model or to multiple second models.
In this embodiment, the second model corresponding to a sample object can be received from a communication contact, and the second model of a sample object can also be sent to a communication contact. In this way, the user can share video production material with communication contacts, which makes it convenient to obtain the desired models; moreover, users can interact through the camera application of the electronic device, which enriches the functions of the camera application.
In other optional embodiments, before obtaining the first model from the model library according to the first object, the method further includes: receiving a second input from the user; in response to the second input, acquiring at least two fourth images, where the image content of the sample object differs in each fourth image; outputting a third model of the sample object according to the at least two fourth images; and storing the third model into the model library.
In this embodiment, the second input may be an input for capturing the fourth images. For example, the second input may be a click input on a target control; a click input in the embodiments of this application may be a single-click input, a double-click input, an input of any number of clicks, a long-press input, or a short-press input.
That the image content of the sample object differs in each of the at least two fourth images may mean that each fourth image is captured from a different angle. On this basis, the third model of the sample object can be generated from the at least two fourth images, that is, the sample model corresponding to the sample object is obtained, and the image of the sample object and the sample model are stored into the model library in association. Later, when the first model corresponding to the first object is obtained, the first image containing the first object can be compared with the images of the sample objects, and when the first object successfully matches a sample object, the first model is obtained from the sample models corresponding to that sample object.
In this embodiment, when the first object is photographed, at least two fourth images can be acquired and the third model of the first object can be generated from the at least two fourth images. In this way, when no first model of the first object is stored in the model library, the third model can be obtained and fused with the background image to obtain at least two second images, from which a dynamic image or video of the first object is synthesized.
In some embodiments of this application, the method further includes: receiving a seventh input from the user on a sample object or on the sample model of a sample object; and in response to the seventh input, deleting the sample object and the sample model corresponding to the sample object from the model library.
In this embodiment, the seventh input may be an input for selecting the sample object and the corresponding sample model to be deleted. For example, the seventh input may be a click input on a target control; a click input in the embodiments of this application may be a single-click input, a double-click input, an input of any number of clicks, a long-press input, or a short-press input. For example, the seventh input may be a click input on the image of a sample object; as another example, the seventh input may be a click input on the sample model corresponding to a sample object.
As an example, referring to FIG. 7, which is a schematic diagram of a model library deletion interface according to an embodiment of this application. Specifically, the display interface of the model library includes a sample object display area 701 and a sample model display area 702. The sample object display area 701 is used to display the images of the stored sample objects, and the sample objects may be shown as thumbnails. The sample model display area 702 is used to display the stored sample models corresponding to the sample objects. The display interface of the model library further includes an edit option 703. In response to the user's click input on the edit option 703, a delete mark 704 is displayed on the image of each sample object and/or on each sample model corresponding to a sample object; by tapping the delete mark 704 on the image of a sample object or on the corresponding sample model, the sample object and the sample model corresponding to it are deleted from the model library.
In this embodiment, the user can edit the sample models in the model library, keeping the frequently used sample models and deleting those no longer used, which saves storage space on the electronic device.
After step 102, step 103 is performed: output a target file, where the target file is synthesized from the at least two second images, and the target file includes at least one of the following: a video; a dynamic image.
In a specific implementation, after the first model corresponding to the first object is obtained, the first model is fused with the background image to obtain at least two second images, where the display positions of the first model differ among the at least two second images, or the background content differs among them, and a video or dynamic image is generated from the at least two second images.
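Step 103 then orders the second images into the target file. Real output would go through a video or animated-image encoder; the sketch below shows only the sequencing logic, and the frame rate and the threshold separating "dynamic image" from "video" are invented placeholders, not values from the disclosure.

```python
def compose_target_file(second_images, fps=10, video_threshold=30):
    """Arrange the second images into an ordered frame sequence with
    timestamps, labelling short sequences as dynamic images."""
    kind = "video" if len(second_images) >= video_threshold else "dynamic_image"
    frames = [{"index": i, "time": i / fps, "image": img}
              for i, img in enumerate(second_images)]
    return {"type": kind, "frames": frames}

# Two second images are the minimum needed for a dynamic effect.
target_file = compose_target_file([["frame0"], ["frame1"]])
```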
In some embodiments of this application, before acquiring the first image, the method may further include: receiving an eighth input from the user; and in response to the eighth input, obtaining a target filter effect. Acquiring the first image then includes: adjusting the display parameters of the first image according to the target parameter values corresponding to the target filter effect, to obtain a first image with the target filter effect.
In this embodiment, the eighth input may be an input for selecting the target filter effect. For example, the eighth input may be a click input; a click input in the embodiments of this application may be a single-click input, a double-click input, an input of any number of clicks, a long-press input, or a short-press input.
Referring to FIG. 8, which is a schematic diagram of obtaining a target filter effect according to an embodiment of this application. Specifically, after the first shooting mode is entered, the shooting preview interface includes a filter option 801. When the user taps the filter option 801, multiple filter effects are displayed in the shooting preview interface, for example, filter a, filter b, and filter c. In response to the user's click input on filter c, filter c is determined as the target filter effect. The user then taps back, the photo control 802 is displayed again in the shooting preview interface, and by tapping the photo control 802 the user captures the first image according to the target filter effect, so that a video or dynamic image is generated from the first image with the target filter effect.
In this embodiment, when shooting a video or dynamic image, the user can select a filter effect, obtain a first image with the target filter effect, and use the first image with the target filter effect as the background image, so that the first model corresponding to the first object is fused with the background image to generate the video or dynamic image. In this way, a video or dynamic image of the first object is obtained, and the generated video or dynamic image carries the target filter effect, which further improves its display effect.
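Applying the target filter effect amounts to adjusting the first image's display parameters by the filter's target parameter values. The sketch below models each filter as a per-pixel gain and offset; the filter names and their parameter values are invented for illustration and are not part of the disclosure.

```python
# Hypothetical target parameter values (gain, offset) for each filter effect.
filter_params = {"filter_a": (1.2, 10), "filter_b": (0.8, 0), "filter_c": (1.0, 30)}

def apply_filter(image, name):
    """Adjust every pixel of the first image by the target filter's
    parameter values, clamping to the valid 0-255 display range."""
    gain, offset = filter_params[name]
    return [[max(0, min(255, round(p * gain + offset))) for p in row]
            for row in image]

filtered = apply_filter([[100, 250]], "filter_c")
```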
In the embodiments of this application, a first image including a first object is acquired; a first model corresponding to the first object is fused with a background image to obtain at least two second images, where the display positions of the first model differ among the at least two second images; a video or dynamic image is synthesized from the at least two second images and output. In this way, when the first object is photographed, the first object can be replaced with the first model, which makes the first object more vivid and the shooting more fun. Moreover, the user only needs to capture one image to obtain a video or dynamic image of the first object, without resorting to dedicated video production software; the operation is simple.
The shooting method provided in the embodiments of this application may be executed by a shooting apparatus. In the embodiments of this application, a shooting apparatus executing the shooting method is taken as an example to describe the shooting apparatus provided by the embodiments of this application.
Corresponding to the above embodiments, referring to FIG. 9, an embodiment of this application further provides a shooting apparatus 900, which includes a first acquisition module 901, a fusion module 902, and a first output module 903.
The first acquisition module 901 is configured to acquire a first image, where the first image includes a first object;
the fusion module 902 is configured to fuse a first model corresponding to the first object with a background image to obtain at least two second images, where the display positions of the first model differ among the at least two second images, and the background image is the first image or a third image;
the first output module 903 is configured to output a target file, where the target file is synthesized from the at least two second images;
where the target file includes at least one of the following:
a video;
a dynamic image.
Optionally, the fusion module 902 is specifically configured to fuse the first model corresponding to the first object with a target background image, where the target background image is at least part of the background image.
Optionally, the shooting apparatus 900 further includes: an adjustment module, configured to adjust the display position of the first model corresponding to the first object in the background image.
Optionally, the adjustment module is specifically configured to adjust the display position of the first model corresponding to the first object in the background image according to a first trajectory.
Optionally, the adjustment module includes: a receiving unit, configured to receive a first input from the user, where the first input is used to determine the first trajectory; and an adjustment unit, configured to, in response to the first input, adjust the display position of the first model corresponding to the first object in the background image according to the first trajectory.
Optionally, the shooting apparatus 900 further includes: a second acquisition module, configured to obtain the first model from a model library according to the first object, where the model library includes sample objects and the sample models corresponding to the sample objects.
Optionally, the shooting apparatus 900 further includes: a first storage module, configured to, upon receiving a second model corresponding to a sample object sent by a communication contact, store the second model corresponding to the sample object into the model library;
a third acquisition module, configured to acquire the second model corresponding to the sample object according to preset information; and a second storage module, configured to store the second model into the model library, where the preset information includes at least one of the following: link information; an information code.
Optionally, the shooting apparatus 900 further includes: a receiving module, configured to receive a second input from the user; a fourth acquisition module, configured to, in response to the second input, acquire at least two fourth images, where the image content of the sample object differs in each fourth image; a second output module, configured to output a third model of the sample object according to the at least two fourth images; and a third storage module, configured to store the third model into the model library.
In the embodiments of this application, a first image including a first object is acquired; a first model corresponding to the first object is fused with a background image to obtain at least two second images, where the display positions of the first model differ among the at least two second images; a video or dynamic image is synthesized from the at least two second images and output. In this way, when the first object is photographed, the first object can be replaced with the first model, which makes the first object more vivid and the shooting more fun. Moreover, the user only needs to capture one image to obtain a video or dynamic image of the first object, without resorting to dedicated video production software; the operation is simple.
The shooting apparatus in the embodiments of this application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a laptop computer, a palmtop computer, an in-vehicle electronic device, a mobile Internet device (Mobile Internet Device, MID), an augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) device, a robot, a wearable device, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook, or a personal digital assistant (personal digital assistant, PDA), which is not specifically limited in the embodiments of this application.
The shooting apparatus in the embodiments of this application may be an apparatus with an operating system. The operating system may be an Android operating system, an iOS operating system (iPhone Operation System, iOS), or another possible operating system, which is not specifically limited in the embodiments of this application.
The shooting apparatus provided in the embodiments of this application can implement each process implemented by the method embodiment of FIG. 1; to avoid repetition, details are not described here again.
Optionally, as shown in FIG. 10, an embodiment of this application further provides an electronic device 1000, including a processor 1001 and a memory 1002, where the memory 1002 stores a program or instructions that can run on the processor 1001. When the program or instructions are executed by the processor 1001, each step of the above shooting method embodiments is implemented with the same technical effect; to avoid repetition, details are not described here again.
It should be noted that the electronic device in the embodiments of this application includes the mobile electronic device described above.
FIG. 11 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of this application.
The electronic device 1100 includes, but is not limited to: a radio frequency unit 1101, a network module 1102, an audio output unit 1103, an input unit 1104, a sensor 1105, a display unit 1106, a user input unit 1107, an interface unit 1108, a memory 1109, a processor 1110, and other components.
Those skilled in the art can understand that the electronic device 1100 may further include a power supply (such as a battery) that supplies power to each component, and the power supply may be logically connected to the processor 1110 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system. The electronic device structure shown in FIG. 11 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or use a different arrangement of components, which will not be repeated here.
The processor 1110 is configured to: acquire a first image, where the first image includes a first object; fuse a first model corresponding to the first object with a background image to obtain at least two second images, where the display positions of the first model differ among the at least two second images, and the background image is the first image or a third image; and output a target file, where the target file is synthesized from the at least two second images, and the target file includes at least one of the following: a video; a dynamic image.
Optionally, when fusing the first model corresponding to the first object with the background image, the processor 1110 is configured to: fuse the first model corresponding to the first object with a target background image, where the target background image is at least part of the background image.
Optionally, before fusing the first model corresponding to the first object with the background image, the processor 1110 is further configured to: adjust the display position of the first model corresponding to the first object in the background image.
Optionally, when adjusting the display position of the first model corresponding to the first object in the background image, the processor 1110 is configured to: adjust the display position of the first model corresponding to the first object in the background image according to a first trajectory.
Optionally, when the display position of the first model corresponding to the first object in the background image is adjusted according to the first trajectory, the user input unit 1107 is configured to: receive a first input from the user, where the first input is used to determine the first trajectory; and the processor 1110 is configured to: in response to the first input, adjust the display position of the first model corresponding to the first object in the background image according to the first trajectory.
Optionally, after the first image is acquired, the processor 1110 is further configured to: obtain the first model from a model library according to the first object, where the model library includes sample objects and the sample models corresponding to the sample objects.
Optionally, before the first model is obtained from the model library according to the first object, the memory 1109 is configured to: upon receiving a second model corresponding to a sample object sent by a communication contact, store the second model corresponding to the sample object into the model library; or
the processor 1110 is configured to: acquire the second model corresponding to the sample object according to preset information; and the memory 1109 is configured to: store the second model into the model library, where the preset information includes at least one of the following: link information; an information code.
Optionally, before the first model is obtained from the model library according to the first object, the user input unit 1107 is further configured to: receive a second input from the user; the processor 1110 is further configured to: in response to the second input, acquire at least two fourth images, where the image content of the sample object differs in each fourth image, and output a third model of the sample object according to the at least two fourth images; and the memory 1109 is further configured to: store the third model into the model library.
In the embodiments of this application, a first image including a first object is acquired; a first model corresponding to the first object is fused with a background image to obtain at least two second images, where the display positions of the first model differ among the at least two second images; a video or dynamic image is synthesized from the at least two second images and output. In this way, when the first object is photographed, the first object can be replaced with the first model, which makes the first object more vivid and the shooting more fun. Moreover, the user only needs to capture one image to obtain a video or dynamic image of the first object, without resorting to dedicated video production software; the operation is simple.
It should be understood that, in the embodiments of this application, the input unit 1104 may include a graphics processing unit (Graphics Processing Unit, GPU) 11041 and a microphone 11042. The graphics processor 11041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The display unit 1106 may include a display panel 11061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1107 includes a touch panel 11071 and other input devices 11072. The touch panel 11071 is also called a touch screen, and may include two parts: a touch detection device and a touch controller. The other input devices 11072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which will not be repeated here.
The memory 1109 may be used to store software programs and various data. The memory 1109 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, where the first storage area may store the operating system and the application programs or instructions required by at least one function (such as a sound playback function and an image playback function). In addition, the memory 1109 may include volatile memory or non-volatile memory, or the memory 1109 may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static random access memory (Static RAM, SRAM), a dynamic random access memory (Dynamic RAM, DRAM), a synchronous dynamic random access memory (Synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDRSDRAM), an enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), a synch-link dynamic random access memory (Synch link DRAM, SLDRAM), or a direct rambus random access memory (Direct Rambus RAM, DRRAM). The memory 1109 in the embodiments of this application includes, but is not limited to, these and any other suitable types of memory.
The processor 1110 may include one or more processing units; optionally, the processor 1110 integrates an application processor and a modem processor, where the application processor mainly handles operations involving the operating system, the user interface, application programs, and the like, and the modem processor, such as a baseband processor, mainly handles wireless communication signals. It can be understood that the modem processor may alternatively not be integrated into the processor 1110.
An embodiment of this application further provides a readable storage medium, on which a program or instructions are stored. When the program or instructions are executed by a processor, each process of the above shooting method embodiments is implemented with the same technical effect; to avoid repetition, details are not described here again.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory ROM, a random access memory RAM, a magnetic disk, or an optical disc.
An embodiment of this application further provides a chip, which includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement each process of the above shooting method embodiments with the same technical effect; to avoid repetition, details are not described here again.
It should be understood that the chip mentioned in the embodiments of this application may also be called a system-level chip, a system chip, a chip system, or a system-on-chip.
An embodiment of this application provides a computer program product, which is stored in a storage medium and executed by at least one processor to implement each process of the above shooting method embodiments with the same technical effect; to avoid repetition, details are not described here again.
It should be noted that, in this document, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes that element. In addition, it should be pointed out that the scope of the methods and apparatuses in the implementations of this application is not limited to performing functions in the order shown or discussed, and may also include performing functions in a substantially simultaneous manner or in a reverse order according to the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Those of ordinary skill in the art may realize that the units and algorithm steps of each example described with reference to the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of this disclosure.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which will not be repeated here.
In the embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is only a division of logical functions, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
Through the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part that contributes to the prior art, can be embodied in the form of a computer software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to enable a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the embodiments of this application.
It can be understood that the embodiments described in the embodiments of the present disclosure may be implemented by hardware, software, firmware, middleware, microcode, or a combination thereof. For hardware implementation, the modules, units, and sub-units may be implemented in one or more application specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processor, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field-programmable gate arrays (Field-Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in the present disclosure, or combinations thereof.
For software implementation, the techniques described in the embodiments of the present disclosure may be implemented by modules (such as procedures and functions) that perform the functions described in the embodiments of the present disclosure. Software code may be stored in a memory and executed by a processor. The memory may be implemented in the processor or outside the processor.
The embodiments of this application have been described above with reference to the accompanying drawings, but this application is not limited to the above specific implementations, which are merely illustrative rather than restrictive. Under the inspiration of this application and without departing from the purpose of this application and the scope protected by the claims, those of ordinary skill in the art can also make many forms, all of which fall within the protection of this application.

Claims (13)

  1. A shooting method, wherein the method comprises:
    acquiring a first image, wherein the first image comprises a first object;
    fusing a first model corresponding to the first object with a background image to obtain at least two second images, wherein the display positions of the first model differ among the at least two second images, and the background image is the first image or a third image;
    outputting a target file, wherein the target file is synthesized from the at least two second images;
    wherein the target file comprises at least one of the following:
    a video;
    a dynamic image.
  2. The method according to claim 1, wherein fusing the first model corresponding to the first object with the background image comprises:
    fusing the first model corresponding to the first object with a target background image, wherein the target background image is at least part of the background image.
  3. The method according to claim 1, wherein before fusing the first model corresponding to the first object with the background image, the method further comprises:
    adjusting the display position of the first model corresponding to the first object in the background image.
  4. The method according to claim 3, wherein adjusting the display position of the first model corresponding to the first object in the background image comprises:
    adjusting the display position of the first model corresponding to the first object in the background image according to a first trajectory.
  5. The method according to claim 4, wherein adjusting the display position of the first model corresponding to the first object in the background image according to the first trajectory comprises:
    receiving a first input from a user, wherein the first input is used to determine the first trajectory;
    in response to the first input, adjusting the display position of the first model corresponding to the first object in the background image according to the first trajectory.
  6. The method according to claim 1, wherein after acquiring the first image, the method further comprises:
    obtaining the first model from a model library according to the first object, wherein the model library comprises sample objects and the sample models corresponding to the sample objects.
  7. The method according to claim 6, wherein before obtaining the first model from the model library according to the first object, the method further comprises:
    upon receiving a second model corresponding to a sample object sent by a communication contact, storing the second model corresponding to the sample object into the model library; or
    acquiring the second model corresponding to the sample object according to preset information, and storing the second model into the model library;
    wherein the preset information comprises at least one of the following:
    link information;
    an information code.
  8. The method according to claim 6, wherein before obtaining the first model from the model library according to the first object, the method further comprises:
    receiving a second input from a user;
    in response to the second input, acquiring at least two fourth images, wherein the image content of the sample object differs in each fourth image;
    outputting a third model of the sample object according to the at least two fourth images;
    storing the third model into the model library.
  9. A shooting apparatus, wherein the apparatus comprises:
    a first acquisition module, configured to acquire a first image, wherein the first image comprises a first object;
    a fusion module, configured to fuse a first model corresponding to the first object with a background image to obtain at least two second images, wherein the display positions of the first model differ among the at least two second images, and the background image is the first image or a third image;
    a first output module, configured to output a target file, wherein the target file is synthesized from the at least two second images;
    wherein the target file comprises at least one of the following:
    a video;
    a dynamic image.
  10. An electronic device, comprising a processor and a memory, wherein the memory stores a program or instructions that can run on the processor, and when the program or instructions are executed by the processor, the steps of the shooting method according to any one of claims 1 to 8 are implemented.
  11. A readable storage medium, wherein a program or instructions are stored on the readable storage medium, and when the program or instructions are executed by a processor, the steps of the shooting method according to any one of claims 1 to 8 are implemented.
  12. A chip, wherein the chip comprises a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement the steps of the shooting method according to any one of claims 1 to 8.
  13. A computer program product, wherein the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the steps of the shooting method according to any one of claims 1 to 8.
PCT/CN2023/074318 2022-02-08 2023-02-03 Shooting method and apparatus, and electronic device WO2023151510A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210119654.0 2022-02-08
CN202210119654.0A CN114584704A (zh) 2022-02-08 2022-02-08 Shooting method and apparatus, and electronic device

Publications (1)

Publication Number Publication Date
WO2023151510A1 true WO2023151510A1 (zh) 2023-08-17

Family

ID=81775161

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/074318 WO2023151510A1 (zh) 2022-02-08 2023-02-03 拍摄方法、装置和电子设备

Country Status (2)

Country Link
CN (1) CN114584704A (zh)
WO (1) WO2023151510A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114584704A (zh) * 2022-02-08 2022-06-03 维沃移动通信有限公司 拍摄方法、装置和电子设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190124248A1 (en) * 2016-12-20 2019-04-25 Microsoft Technology Licensing, Llc Dynamic range extension to produce images
CN110012226A * 2019-03-27 2019-07-12 联想(北京)有限公司 Electronic device and image processing method thereof
CN111917979A * 2020-07-27 2020-11-10 维沃移动通信有限公司 Multimedia file output method and apparatus, electronic device, and readable storage medium
CN113763445A * 2021-09-22 2021-12-07 黎川县凡帝科技有限公司 Static image acquisition method and system, and electronic device
CN113794829A * 2021-08-02 2021-12-14 维沃移动通信(杭州)有限公司 Shooting method and apparatus, and electronic device
CN114584704A * 2022-02-08 2022-06-03 维沃移动通信有限公司 Shooting method and apparatus, and electronic device

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6402934B2 * 2015-05-19 2018-10-10 カシオ計算機株式会社 Moving image generation device, moving image generation method, and program
CN106816077B * 2015-12-08 2019-03-22 张涛 Interactive sandbox display method based on QR codes and augmented reality technology
CN105574914B * 2015-12-18 2018-11-30 深圳市沃优文化有限公司 Production apparatus and production method for 3D dynamic scenes
CN108111748B * 2017-11-30 2021-01-08 维沃移动通信有限公司 Method and apparatus for generating dynamic images
CN109922252B * 2017-12-12 2021-11-02 北京小米移动软件有限公司 Short video generation method and apparatus, and electronic device
CN110827376A * 2018-08-09 2020-02-21 北京微播视界科技有限公司 Augmented reality multi-plane model animation interaction method, apparatus, device, and storage medium
CN109361880A * 2018-11-30 2019-02-19 三星电子(中国)研发中心 Method and system for displaying a dynamic picture or video corresponding to a static picture
CN109702747A * 2019-01-21 2019-05-03 广东康云科技有限公司 Robot dog system and implementation method thereof
CN109859100A * 2019-01-30 2019-06-07 深圳安泰创新科技股份有限公司 Display method for a virtual background, electronic device, and computer-readable storage medium
CN112511815B * 2019-12-05 2022-01-21 中兴通讯股份有限公司 Image or video generation method and apparatus
CN113038001A * 2021-02-26 2021-06-25 维沃移动通信有限公司 Display method and apparatus, and electronic device
CN113408484A * 2021-07-14 2021-09-17 广州繁星互娱信息科技有限公司 Picture display method, apparatus, terminal, and storage medium
CN113538642A * 2021-07-20 2021-10-22 广州虎牙科技有限公司 Virtual image generation method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN114584704A (zh) 2022-06-03

Similar Documents

Publication Publication Date Title
US9407834B2 (en) Apparatus and method for synthesizing an image in a portable terminal equipped with a dual camera
CN106775334B (zh) File calling method and apparatus on a mobile terminal, and mobile terminal
KR102384054B1 (ko) Mobile terminal and control method thereof
EP2917819B1 (en) METHOD AND DEVICE FOR SCROLLING PHOTOS BY PASSING OVER IT AND TRANSITION TO THE NEXT
EP2811731B1 (en) Electronic device for editing dual image and method thereof
JP2017531330A (ja) Picture processing method and apparatus
WO2015085960A1 (zh) Photo processing method and apparatus
US9137461B2 (en) Real-time camera view through drawn region for image capture
WO2023143531A1 (zh) Shooting method and apparatus, and electronic device
WO2023151510A1 (zh) Shooting method and apparatus, and electronic device
WO2022048373A1 (zh) Image processing method, mobile terminal, and storage medium
CN108781254A (zh) Photographing preview method, graphical user interface, and terminal
WO2016028396A1 (en) Digital media message generation
WO2022048372A1 (zh) Image processing method, mobile terminal, and storage medium
US9405174B2 (en) Portable image storage device with integrated projector
WO2024051556A1 (zh) Wallpaper display method, electronic device, and storage medium
WO2024022349A1 (zh) Image processing method and apparatus, electronic device, and storage medium
KR101511101B1 (ko) System and method for producing a personalized shopping mall application for smartphones, and computer-readable recording medium
WO2023143529A1 (zh) Shooting method and apparatus, and electronic device
WO2023155858A1 (zh) Document editing method and apparatus
WO2023093669A1 (zh) Video shooting method and apparatus, electronic device, and storage medium
WO2023087703A9 (zh) Media file processing method and apparatus
CN113325946A (zh) Augmented-reality-based virtual gift interaction method and related apparatus
US20140304659A1 (en) Multimedia early chilhood journal tablet
CN113596329A (zh) Shooting method and shooting apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23752290

Country of ref document: EP

Kind code of ref document: A1