CN114584704A - Shooting method and device and electronic equipment - Google Patents

Shooting method and device and electronic equipment

Info

Publication number: CN114584704A
Application number: CN202210119654.0A
Authority: CN (China)
Prior art keywords: model, image, sample, images, model corresponding
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Other languages: Chinese (zh)
Inventor: 陈洁茹 (Chen Jieru)
Assignee (current and original): Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority: CN202210119654.0A
Publication: CN114584704A
Related application: PCT/CN2023/074318 (published as WO2023151510A1)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621: Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • H04N5/272: Means for inserting a foreground image in a background image, i.e. inlay, outlay

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a shooting method, a shooting apparatus, and an electronic device, and belongs to the technical field of image processing. The method comprises the following steps: acquiring a first image, wherein the first image comprises a first object; fusing a first model corresponding to the first object with a background image to obtain at least two second images, wherein the display positions of the first model in the at least two second images are different, and the background image is the first image or a third image; and outputting a target file, wherein the target file is obtained by synthesizing the at least two second images and comprises at least one of a video and a dynamic image.

Description

Shooting method and device and electronic equipment
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a shooting method, a shooting device and electronic equipment.
Background
The photographing function is one of the most commonly used functions of electronic devices, and people use electronic devices to capture images in daily life. However, a captured image is usually a still image, and the photographed object in it is static, so the object cannot be shown vividly.
In the related art, a captured still image can be turned into a video by post-processing it with dedicated image processing software. However, in this way, a video can be obtained only through such post-processing, which is complicated and difficult to operate.
Disclosure of Invention
The embodiments of the application aim to provide a shooting method, a shooting apparatus, and an electronic device, which can solve the problem that a photographed object in an image captured by an electronic device is static and cannot be shown vividly.
In a first aspect, an embodiment of the present application provides a shooting method, where the method includes:
acquiring a first image, wherein the first image comprises a first object;
fusing a first model corresponding to the first object with a background image to obtain at least two second images, wherein the display positions of the first model in the at least two second images are different, and the background image is the first image or a third image;
outputting a target file, wherein the target file is obtained by synthesizing the at least two second images;
wherein the target file comprises at least one of:
a video;
a dynamic image.
In a second aspect, an embodiment of the present application provides a shooting apparatus, including:
a first acquisition module for acquiring a first image, the first image including a first object;
a fusion module, configured to fuse a first model corresponding to the first object with a background image to obtain at least two second images, where display positions of the first model in the at least two second images are different, and the background image is the first image or a third image;
the first output module is used for outputting a target file, and the target file is obtained by synthesizing the at least two second images;
wherein the target file comprises at least one of:
a video;
a dynamic image.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, which is stored in a storage medium and executed by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, a first image comprising a first object is obtained, and a first model corresponding to the first object is fused with a background image to obtain at least two second images, where the display positions of the first model in the at least two second images are different; a video or a dynamic image is then synthesized from the at least two second images and output. In this way, the first object can be presented dynamically rather than as a still picture. Moreover, the user only needs to shoot one image to obtain a video or a dynamic image of the first object; no special video production software is needed, and the operation is simple.
Drawings
Fig. 1 is a schematic flowchart of a shooting method provided in an embodiment of the present application;
Fig. 2 is a schematic diagram of a shooting preview interface provided in an embodiment of the present application;
Fig. 3 is a schematic diagram of acquiring a first trajectory according to an embodiment of the present application;
Fig. 4 is a first schematic diagram of a model library adding interface provided in an embodiment of the present application;
Fig. 5 is a second schematic diagram of a model library adding interface provided in an embodiment of the present application;
Fig. 6 is a third schematic diagram of a model library adding interface provided in an embodiment of the present application;
Fig. 7 is a schematic diagram of a model library deletion interface provided in an embodiment of the present application;
Fig. 8 is a schematic diagram of obtaining a target filter effect according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a shooting apparatus provided in an embodiment of the present application;
Fig. 10 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
Fig. 11 is a hardware structure diagram of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments given herein fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein; moreover, the words "first" and "second" do not limit the number of objects, which may be one or more. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
The shooting method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Please refer to fig. 1, which is a schematic flowchart of a shooting method according to an embodiment of the present application. The method can be applied to an electronic device, which may be a mobile phone, a tablet computer, a notebook computer, or the like. As shown in fig. 1, the method may include steps 101 to 103, described in detail below.
Step 101, a first image is acquired, wherein the first image comprises a first object.
In this embodiment, the first image may be an image that includes the first object and is captured by a camera of the electronic device. The first image may also be an image containing the first object selected from an album of the electronic device. The first object may be the photographic object to be processed in the first image; it may be an animal, a plant, an item, or the like, for example a cartoon character, a mascot, or an exhibit.
In some alternative embodiments, acquiring the first image may further include: receiving a third input of the user, and acquiring the first image in response to the third input.
In this embodiment, the third input may be used to capture the first image. Illustratively, the third input may be a click input of the user on a target control, or a specific gesture input by the user, which may be determined according to actual usage requirements; this is not limited in the embodiments of the application. The click input in the embodiments of the application may be a single-click input, a double-click input, or a click input of any number of times, and may also be a long-press input or a short-press input. The specific gesture in the embodiments of the present application may be any one of a single-click gesture, a slide gesture, and a drag gesture.
In some optional embodiments, before acquiring the first image, the method may further comprise: receiving a fourth input of the user, and starting a first shooting mode in response to the fourth input.
In the present embodiment, the first shooting mode may be a shooting mode in which a target file is output based on the captured first image. The fourth input may be used to turn on the first shooting mode of the camera application. Illustratively, the fourth input may be a click input of the user on a target control, or a specific gesture input by the user, which may be determined according to actual usage requirements; this is not limited in the embodiments of the application. The specific gesture may be any one of a single-click gesture, a slide gesture, and a drag gesture; the click input may be a single-click input, a double-click input, or a click input of any number of times, and may also be a long-press input or a short-press input.
It should be noted that an electronic device having a shooting function provides the user with a plurality of shooting modes, for example, a panorama mode, a beauty mode, and a video recording mode. Similarly, in the present embodiment, the camera application of the electronic device includes a first shooting mode, which specifically refers to a shooting mode in which a target file is output based on the captured first image.
For example, please refer to fig. 2, which is a schematic diagram of a shooting preview interface according to an embodiment of the present application. Specifically, when a user uses the electronic device to capture an image, the camera application is started, and the electronic device displays a shooting preview interface that includes an option 201 of the first shooting mode; the electronic device enters the first shooting mode upon receiving a click input of the user on the option 201.
In the embodiment of the application, a user can choose whether to start the first shooting mode according to actual usage requirements; when the user starts the first shooting mode, the target file is output based on the acquired first image.
After step 101, step 102 is executed: fusing a first model corresponding to the first object with a background image to obtain at least two second images, where the display positions of the first model in the at least two second images are different, and the background image is the first image or a third image.
In this embodiment, the first model may be a two-dimensional model or a three-dimensional model. For example, if the first object is an avatar, the first model may be a three-dimensional model of the avatar. The first model may be a model corresponding to the first object selected from a model library; the model library may be pre-established to store sample objects and the sample models corresponding to them.
In this embodiment, the background image may be the first image or the third image. The third image may contain only the background picture, identical to that of the first image. In a specific implementation, before the first image is acquired, a third image may be acquired in response to an input of the user, and then the first image is acquired; the first image has the same background as the third image and further includes the first object.
The second image may be used to generate the target file, e.g., a video or a dynamic image. Fusing the first model corresponding to the first object with the background image to obtain at least two second images may be implemented as follows: at least two background images are obtained, and the first model corresponding to the first object is fused with each background image, where the display position of the first model in the background image differs among the at least two second images.
For example, when the background image is the first image, fusing the first model corresponding to the first object with the background image may be fusing the first model with the first image. That is, the first image including the first object is taken as the background image, and each of at least two copies of the first image is fused with the first model corresponding to the first object to obtain the at least two second images, where the display positions of the first model differ among the at least two second images, or the background contents differ among them. In this way, the video or dynamic image composed from the at least two second images includes both the first object and the first model with a motion effect corresponding to the first object.
For example, when the background image is the third image, fusing the first model corresponding to the first object with the background image may be fusing the first model with the third image. That is, a third image and a first image are obtained, where the third image has the same background picture as the first image and the first image further includes the first object; then the captured third image is taken as the background image, and each of at least two copies of the third image is fused with the first model corresponding to the first object to obtain the at least two second images, where the display positions of the first model differ among the at least two second images, or the background contents differ among them. In this way, a video or a dynamic image having a motion effect can be synthesized from the at least two second images.
In this embodiment, the display contents of the background image may be different between the at least two second images.
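As an illustration only, and not as the claimed implementation, the fusion of step 102 can be pictured as compositing a rendered sprite of the first model onto copies of the background image at different positions. The following Python sketch assumes the first model has already been rendered to an RGBA sprite and that the display positions are known; all names are illustrative.

```python
from PIL import Image

def fuse_second_images(background: Image.Image,
                       model_sprite: Image.Image,
                       positions: list[tuple[int, int]]) -> list[Image.Image]:
    """Composite the model sprite onto a copy of the background at each
    position, yielding one 'second image' per position."""
    second_images = []
    for x, y in positions:
        frame = background.convert("RGBA").copy()
        # The sprite's alpha channel serves as the paste mask, so only
        # the model itself is blended over the background.
        frame.paste(model_sprite, (x, y), model_sprite)
        second_images.append(frame.convert("RGB"))
    return second_images
```

With the background fixed to the first image, this yields the case described above in which each second image differs only in the model's display position.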
In some embodiments of the present application, fusing the first model corresponding to the first object with the background image includes: fusing the first model corresponding to the first object with a target background image, where the target background image is at least a partial image of the background image.
In this embodiment, the target background image may be at least a partial image of the background image, and its picture may follow the changes in the display position of the first model. For example, when the display position of the first model corresponding to the first object moves from far to near, the target background image may gradually change from a distant view to a near view following the display position of the first model. Likewise, when the first model corresponding to the first object is translated, the display content of the target background image may change following the display position of the first model.
In this embodiment, when the first model corresponding to the first object is fused with the background image, the first model may be fused with the target background image to obtain at least two second images, so that the display positions of the first model in the background image differ among the obtained second images and the display content of the background image changes along with the display position of the first model; in this way, a video or a dynamic image of the first object can be obtained by capturing an image.
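As an illustrative sketch only of the target background image following the first model (an assumed reading, not the patent's stated implementation), a fixed-size window of the background can be cropped around the model's current position so that the visible scene shifts as the model moves; the window size and tracking rule here are assumptions.

```python
from PIL import Image

def tracking_window(background: Image.Image,
                    center: tuple[int, int],
                    size: tuple[int, int]) -> Image.Image:
    """Crop a window of the background that follows the model's position.
    Assumes the window is no larger than the background itself."""
    w, h = size
    cx, cy = center
    # Clamp the window so it always stays inside the full background.
    left = min(max(cx - w // 2, 0), background.width - w)
    top = min(max(cy - h // 2, 0), background.height - h)
    return background.crop((left, top, left + w, top + h))
```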
In this embodiment, the display position of the first model in the background image may be different in the at least two second images.
In some embodiments of the present application, before fusing the first model corresponding to the first object with the background image, the method further includes: adjusting the display position of the first model corresponding to the first object in the background image.
In this embodiment, before the first model corresponding to the first object is fused with the background image, its display position in the background image may be adjusted to obtain at least two second images, so that a video or a dynamic image of the first object can be obtained from the at least two second images.
In some embodiments of the present application, adjusting the display position of the first model corresponding to the first object in the background image includes: adjusting the display position of the first model corresponding to the first object in the background image according to a first trajectory.
In this embodiment, the first trajectory may be the movement trajectory of the first model corresponding to the first object. The first trajectory may be preset, for example, a straight line or a curve. It may also be input by the user, or obtained by analyzing the background image; for example, the first trajectory may be determined based on the depth values of the background image.
In this embodiment, after the first image is acquired, the display position of the first model corresponding to the first object in the background image is adjusted according to the first trajectory, and the first model is fused with the background image to obtain at least two second images, so that a video or a dynamic image of the first object can be synthesized from them.
In some embodiments of the application, adjusting the display position of the first model corresponding to the first object in the background image according to the first trajectory includes: receiving a first input of the user, where the first input is used to determine the first trajectory; and in response to the first input, adjusting the display position of the first model corresponding to the first object in the background image according to the first trajectory.
In this embodiment, the first input may be an input for acquiring the first trajectory. Illustratively, the first input may be a swipe gesture input by the user. The first trajectory includes a start point and an end point; the start point may be the starting position of the swipe gesture, and the end point may be its ending position.
In a specific implementation, the display position of the first model corresponding to the first object in the background image is adjusted according to the first trajectory: the display position of the first model may be moved from the start point to the end point of the first trajectory. Alternatively, the first model may be moved along the first trajectory at a preset distance interval. That is, based on the preset distance interval and the first trajectory, the number of required background images and the display position of the first model in each background image can be determined; on this basis, the display position of the first model in the background image is adjusted according to the first trajectory, and the first model is fused with the background image to obtain at least two second images. A minimal sketch of this resampling follows.
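This sketch assumes the first trajectory is a polyline of screen coordinates; resampling it at a fixed arc-length step yields one display position per required background copy. The interval value and point representation are assumptions, not fixed by the patent.

```python
import math

def resample_trajectory(points: list[tuple[float, float]],
                        interval: float) -> list[tuple[float, float]]:
    """Return points spaced 'interval' apart along a polyline trajectory;
    each sample is one display position for the first model."""
    samples = [points[0]]
    carried = 0.0  # distance travelled since the last emitted sample
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if seg == 0.0:
            continue  # skip zero-length segments
        d = interval - carried  # distance into this segment of next sample
        while d <= seg:
            t = d / seg
            samples.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            d += interval
        carried = (carried + seg) % interval
    return samples
```

For an S-shaped stroke like the one in fig. 3, the samples walk the curve from its start point to its end point at equal spacing.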
For example, please refer to fig. 3, which is a schematic diagram of acquiring a first trajectory according to an embodiment of the present application. Specifically, after the first shooting mode is entered, the shooting preview interface includes a shooting option 301. When the user clicks the shooting option 301, a trajectory setting option 302 is displayed on the shooting preview interface, and in response to the user clicking the trajectory setting option 302, the user can draw the first trajectory. For example, the user draws an S-shaped first trajectory 303 and then clicks the photographing control 304 to capture the first image; at least two second images are obtained based on the trajectory drawn by the user, and the target file is synthesized from them. In addition, when the user clicks the shooting option 301, a default setting option 305 is also displayed on the shooting preview interface; in response to the user clicking the default setting option 305, the display position of the first model corresponding to the first object in the background image is adjusted based on a default trajectory, at least two second images are obtained, and the target file is synthesized from them.
In this embodiment, the user can draw the first trajectory according to actual needs so that the first model corresponding to the first object moves along it; a specific display effect can thus be obtained, interesting interaction arises between the user and the first object, and shooting becomes more fun. Moreover, the operation is simple, and the target file can be generated quickly.
In some embodiments of the present application, after the first image is acquired, the method further comprises: acquiring the first model from a model library according to the first object, where the model library comprises sample objects and the sample models corresponding to them.
In this embodiment, the model library may include a sample object and the sample model corresponding to it. The sample object may be an animal, a plant, an item, or the like, for example a cartoon character, a mascot, or an exhibit, and may be displayed in the form of an image. The sample model may be a two-dimensional model or a three-dimensional model. For each sample object, the model library may include one sample model or a plurality of different sample models; for example, the plurality of sample models may differ in form or display effect.
In a specific implementation, after the first image is obtained, the first object in the first image is compared with the sample objects in the model library; that is, the first image including the first object is compared with the image of each sample object, and when the first object is successfully matched with a sample object, the sample model corresponding to that sample object is used as the first model corresponding to the first object.
Here, in the case where one sample object corresponds to a plurality of sample models, when the first object is successfully matched with the sample object, the plurality of sample models are displayed, and when a fifth input of the user on a target sample model among them is received, the target sample model is taken as the first model corresponding to the first object.
In this embodiment, the fifth input may be an input for obtaining the first model. Illustratively, the fifth input may be a click input of the user on a target control, which may be a single-click input, a double-click input, or a click input of any number of times, and may also be a long-press input or a short-press input.
In this embodiment, the first model corresponding to the first object may be obtained from the model library and fused with the background image to obtain at least two second images, from which the target file is output. Thus the first object in the captured first image can be replaced by the first model, the first object can be displayed more vividly, and the obtained video or dynamic image of the first object has a better display effect. Moreover, based on a pre-established model library, the first model can be acquired quickly, improving the response speed of the electronic device. The lookup can be sketched as follows.
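In this sketch the object matcher and the user's choice among several models are left abstract, since the patent does not fix a particular matching algorithm; every name here is illustrative.

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class LibraryEntry:
    sample_image: Any        # image of the sample object (e.g. a thumbnail)
    sample_models: list      # one or more sample models for this object

def find_first_model(first_object_image: Any,
                     library: list[LibraryEntry],
                     matches: Callable[[Any, Any], bool],
                     let_user_choose: Callable[[list], Any]) -> Optional[Any]:
    """Compare the first object against each stored sample object and
    return the associated sample model on a successful match."""
    for entry in library:
        if matches(first_object_image, entry.sample_image):
            if len(entry.sample_models) == 1:
                return entry.sample_models[0]
            # Several candidate models: display them and let the user's
            # fifth input pick the target sample model.
            return let_user_choose(entry.sample_models)
    return None  # no match; a model may be built from new captures instead
```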
In this embodiment, the model library may be pre-established. The following examples are given by way of illustration.
In some optional embodiments, before the first model is obtained from the model library according to the first object, the method may further include: receiving a sixth input of the user, and in response to the sixth input, storing a fourth model corresponding to a sample object in the model library.
In this embodiment, storing the fourth model corresponding to the sample object in the model library may mean acquiring the fourth model, using it as the sample model corresponding to the sample object, and storing the sample object and its sample model in the model library in association. On this basis, when the first model corresponding to the first object is obtained, the first object is compared with the sample object, and when they are successfully matched, the first model is obtained from the sample model corresponding to the sample object.
The sixth input may be an input that imports the sample model corresponding to the sample object into the model library. Illustratively, the sixth input may be a click input of the user on a target control, which may be a single-click input, a double-click input, or a click input of any number of times, and may also be a long-press input or a short-press input.
Please refer to fig. 4, which is a schematic diagram of a model library adding interface according to an embodiment of the present application. Specifically, after the first shooting mode is entered, the shooting preview interface includes an option 401 of the model library, and the user clicks the option 401 to enter the model library. The display interface of the model library includes a display area 402 for sample objects and a display area 403 for sample models. The display area 402 displays the stored sample objects 404 in the form of images, which may be thumbnails; the display area 403 displays the sample models 405 corresponding to the stored sample objects, which may also be shown as thumbnails. The display interface further comprises an adding option 406. In response to a click input of the user on the adding option 406, a first adding control 407 is displayed in the display area 402; upon a click input of the user on the first adding control 407, a sample object is imported into the model library. A second adding control 408 is displayed in the display area 403; upon a click input of the user on the second adding control 408, a fourth model is acquired and stored in the model library as the sample model corresponding to the sample object. One sample object may correspond to one or more sample models.
In this embodiment, a user may import a sample object and the sample model corresponding to it into the model library in advance, which facilitates subsequently obtaining the first model corresponding to the first object based on the sample object, fusing the first model with the background image to obtain at least two second images, and generating a video or a dynamic image of the first object from them.
In some optional embodiments, before the first model is obtained from the model library according to the first object, the method may further include: acquiring a second model corresponding to a sample object according to preset information, and storing the second model in the model library; where the preset information comprises at least one of the following: link information; an information code.
In this embodiment, the link information may be website information from which the second model corresponding to the sample object is acquired. For example, when a user wants to shoot a video of a cartoon character, the user can obtain the model of the cartoon character from the corresponding website. The information code may be, for example, a two-dimensional code in which the second model corresponding to the sample object is stored. For example, when visiting a museum and wanting to photograph an exhibit, the user can obtain the model of the exhibit by scanning the corresponding two-dimensional code.
For example, please refer to fig. 5, which is a schematic diagram of another model library adding interface according to an embodiment of the present application. Specifically, the display interface of the model library includes a quick adding option 501, and in response to a click input of the user on the quick adding option 501, a plurality of sample model adding modes are displayed, for example, a link information adding area 502 and a two-dimensional code scanning entry 503. The user may input the corresponding link information in the link information adding area 502; based on the link information input by the user, an image of the sample object and the second model corresponding to the sample object can be obtained, the second model is used as the sample model corresponding to the sample object, and the image of the sample object and the sample model are stored in the model library in association. The user may also click the two-dimensional code scanning entry 503 and obtain the image of the sample object and the second model by scanning a two-dimensional code; the second model is likewise used as the sample model and stored in the model library in association with the image of the sample object. Then, when the first model corresponding to the first object is obtained, the first image including the first object is compared with the image of the sample object, and when the first object and the sample object are successfully matched, the first model is obtained from the sample model corresponding to the sample object. Here, one sample object may correspond to one or more second models.
This embodiment provides multiple ways of obtaining the second model corresponding to a sample object, allowing the user to quickly obtain the desired model; the operation is simple, and the use is flexible.
In other optional embodiments, before the first model is obtained from the model library according to the first object, the method may further include: in the case of receiving a second model corresponding to a sample object sent by a communication object, storing the second model in the model library.
In this embodiment, the electronic device implementing the shooting method may establish a communication connection with other electronic devices to receive the sample object and the second model corresponding to it sent by the communication object, use the second model as the sample model corresponding to the sample object, and store the sample object and the sample model in the model library in association. It is understood that the user may also send a sample object in the model library and its second model to the communication object via the electronic device.
For example, please refer to fig. 5 and fig. 6, which are schematic diagrams of another model library adding interface according to an embodiment of the present application. Specifically, the display interface of the model library includes a quick adding option; in response to a click input of the user on the quick adding option, a "friend sharing" control 504 is displayed, and the user clicks the control 504 to enter a friend sharing interface, which includes a "send" control 601 and a "receive" control 602. In response to a click input of the user on the "send" control 601, a model selection interface is entered, where the user may select the sample object to be transferred and its sample model; for example, by clicking the "+" mark 603 on the image of a sample object, the sample object to be transmitted and the sample model corresponding to it are selected from the model library. After the selection, the user can send the selected sample object and its sample model to the communication object by clicking the "confirm" option. In response to a click input of the user on the "receive" control 602, a wireless communication module of the user's electronic device is started, for example its Wi-Fi or Bluetooth, so that another user's electronic device establishes a communication connection with it to transmit the image of a sample object and the second model corresponding to it; the second model is used as the sample model corresponding to the sample object, and the image of the sample object and the sample model are stored in the model library in association. Then, when the first model corresponding to the first object is obtained, the first image including the first object is compared with the image of the sample object, and when the first object and the sample object are successfully matched, the first model is obtained from the sample model corresponding to the sample object. Here, one sample object may correspond to one or more second models.
In this embodiment, the second model corresponding to a sample object sent by a communication object can be received, and a second model can likewise be sent to the communication object. Users can thus share video production material with each other, conveniently obtain the models they need, and interact through the camera application of the electronic device, which enriches the functions of the camera application.
In other optional embodiments, before the first model is obtained from the model library according to the first object, the method further includes: receiving a second input of the user; in response to the second input, acquiring at least two fourth images, where the image content of the sample object differs between the fourth images; outputting a third model of the sample object according to the at least two fourth images; and storing the third model in the model library.
In this embodiment, the second input may be an input for capturing the fourth images. Illustratively, the second input may be a click input of the user on a target control, which may be a single-click input, a double-click input, or a click input of any number of times, and may also be a long-press input or a short-press input.
Each of the at least two fourth images may capture different image content of the sample object, for example because the shooting angle of each fourth image differs. On this basis, a third model of the sample object can be generated from the at least two fourth images; that is, the sample model corresponding to the sample object is obtained, and the image of the sample object and the sample model are stored in the model library in association. Then, when the first model corresponding to the first object is obtained, the first image including the first object is compared with the image of the sample object, and when they are successfully matched, the first model is obtained from the sample model corresponding to the sample object.
In this embodiment, when the first object is photographed, at least two fourth images may be acquired and a third model of the first object generated from them. Thus, when no model of the first object is stored in the model library, the third model can be acquired and fused with the background image to obtain at least two second images, from which a dynamic image or a video of the first object is synthesized. A schematic of this flow follows.
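This is a schematic sketch of the flow only: the patent leaves the reconstruction method open, so `reconstruct` below is a placeholder for any multi-view pipeline (e.g. photogrammetry), not a real library call, and all names are illustrative.

```python
from typing import Any, Callable

def build_and_store_third_model(capture_view: Callable[[], Any],
                                reconstruct: Callable[[list], Any],
                                library: dict, object_key: str,
                                num_views: int = 2) -> Any:
    """Capture at least two fourth images of the sample object from
    different angles, reconstruct a third model from them, and store it
    in the model library under the object's key."""
    views = [capture_view() for _ in range(max(num_views, 2))]
    third_model = reconstruct(views)  # placeholder reconstruction routine
    library[object_key] = third_model
    return third_model
```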
In some embodiments of the present application, the method further comprises: receiving a seventh input of the user on a sample object or on a sample model of the sample object; and in response to the seventh input, deleting the sample object and the sample model corresponding to it from the model library.
In this embodiment, the seventh input may be an input that selects the sample object to be deleted and the sample model corresponding to it. Illustratively, the seventh input may be a click input of the user on a target control, which may be a single-click input, a double-click input, or a click input of any number of times, and may also be a long-press input or a short-press input. For example, the seventh input may be a click input on the image of the sample object, or a click input on the sample model corresponding to the sample object.
For example, please refer to fig. 7, which is a schematic diagram of a model library deletion interface according to an embodiment of the present application. Specifically, the display interface of the model library includes a display area 701 for sample objects and a display area 702 for sample models. The display area 701 displays the images of the stored sample objects, which may be thumbnails; the display area 702 displays the sample models corresponding to the stored sample objects. The display interface further includes an editing option 703. In response to a click input of the user on the editing option 703, a deletion mark 704 is displayed on the image of the sample object and/or on the sample model corresponding to it; by clicking the deletion mark 704 on either, the sample object and the sample model corresponding to it are deleted from the model library.
In this embodiment, the user can edit the sample models in the model library, keeping the commonly used ones and deleting those no longer needed, which saves storage space on the electronic device.
After step 102, step 103 is executed: outputting a target file, where the target file is synthesized from the at least two second images and comprises at least one of a video and a dynamic image.
In a specific implementation, after the first model corresponding to the first object is obtained, the first model is fused with the background image to obtain at least two second images, where the display positions of the first model differ among the at least two second images, or the background contents differ among them, and a video or a dynamic image is generated from the at least two second images.
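A minimal sketch of step 103, assuming the second images are held as Pillow frames: Pillow's animated-GIF writer covers the dynamic-image case, and OpenCV's VideoWriter covers the video case. The frame rate, frame duration, and codec are assumptions, not taken from the patent.

```python
import cv2
import numpy as np
from PIL import Image

def save_dynamic_image(frames: list[Image.Image], path: str,
                       ms_per_frame: int = 100) -> None:
    """Write the fused second images as an animated GIF (path ends in .gif)."""
    frames[0].save(path, save_all=True, append_images=frames[1:],
                   duration=ms_per_frame, loop=0)

def save_video(frames: list[Image.Image], path: str, fps: float = 24.0) -> None:
    """Write the fused second images as an MP4 video."""
    w, h = frames[0].size
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for frame in frames:
        # OpenCV expects BGR channel order; Pillow yields RGB.
        writer.write(cv2.cvtColor(np.array(frame), cv2.COLOR_RGB2BGR))
    writer.release()
```

The two writers correspond to the two forms of the target file named above: a dynamic image and a video.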
In some embodiments of the present application, before the first image is acquired, the method may further include: receiving an eighth input of the user; and in response to the eighth input, obtaining a target filter effect. Acquiring the first image then includes: adjusting display parameters according to the target parameter values corresponding to the target filter effect to obtain a first image having the target filter effect.
In this embodiment, the eighth input may be an input that selects the target filter effect. Illustratively, the eighth input may be a click input, which may be a single-click input, a double-click input, or a click input of any number of times, and may also be a long-press input or a short-press input.
Please refer to fig. 8, which is a schematic diagram of obtaining a target filter effect according to an embodiment of the present application. Specifically, after the first shooting mode is entered, the shooting preview interface includes a filter option 801. When the user clicks the filter option 801, various filter effects, for example filter a, filter b, and filter c, are displayed on the shooting preview interface, and filter c is determined as the target filter effect in response to a click input of the user on filter c. The user then clicks "return", the shooting preview interface displays the shooting control 802 again, and by clicking the shooting control 802 the user captures the first image with the target filter effect, so that a video or a dynamic image is generated from the first image having the target filter effect.
In this embodiment, when shooting a video or a dynamic image, the user can select a filter effect and obtain a first image with the target filter effect; this first image is used as the background image and fused with the first model corresponding to the first object to generate the video or dynamic image. The generated video or dynamic image of the first object thereby carries the target filter effect, further improving its display effect.
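Reading the "display parameters" as simple enhancement values is an assumption; under it, applying the target filter effect can be sketched with Pillow's ImageEnhance as a stand-in filter engine, with the parameter dictionary purely illustrative.

```python
from PIL import Image, ImageEnhance

def apply_target_filter(image: Image.Image,
                        target_params: dict[str, float]) -> Image.Image:
    """Adjust display parameters (brightness, contrast, colour, sharpness)
    to the target values of the selected filter, e.g.
    {"Brightness": 1.1, "Contrast": 1.2, "Color": 0.9}."""
    out = image
    for name, value in target_params.items():
        # Each name selects a Pillow enhancer; value 1.0 leaves it unchanged.
        out = getattr(ImageEnhance, name)(out).enhance(value)
    return out
```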
In the embodiment of the application, a first image comprising a first object is obtained, and a first model corresponding to the first object is fused with a background image to obtain at least two second images, where the display positions of the first model in the at least two second images are different; a video or a dynamic image is then synthesized from the at least two second images and output. In this way, the first object can be presented dynamically rather than as a still picture. Moreover, the user only needs to shoot one image to obtain a video or a dynamic image of the first object; no special video production software is needed, and the operation is simple.
The shooting method provided by the embodiments of the present application may be executed by a shooting apparatus. In the embodiments of the present application, the shooting apparatus executing the shooting method is taken as an example to describe the shooting apparatus provided by the embodiments of the present application.
Corresponding to the above embodiments, referring to fig. 9, an embodiment of the present application further provides a shooting apparatus 900, which includes a first obtaining module 901, a fusion module 902, and a first output module 903.
The first obtaining module 901 is configured to obtain a first image, where the first image includes a first object;
the fusion module 902 is configured to fuse a first model corresponding to the first object with a background image to obtain at least two second images, where display positions of the first model in the at least two second images are different, and the background image is the first image or a third image;
the first output module 903 is configured to output a target file, where the target file is obtained by synthesizing the at least two second images;
wherein the target file comprises at least one of:
a video;
a dynamic image.
Optionally, the fusion module 902 is specifically configured to fuse the first model corresponding to the first object with a target background image, where the target background image is at least a partial image of the background image.
Optionally, the shooting apparatus 900 further includes: an adjusting module for adjusting the display position of the first model corresponding to the first object in the background image.
Optionally, the adjusting module is specifically configured to adjust the display position of the first model corresponding to the first object in the background image according to a first trajectory.
Optionally, the adjusting module includes: a receiving unit for receiving a first input of the user, where the first input is used to determine a first trajectory; and an adjusting unit for adjusting, in response to the first input, the display position of the first model corresponding to the first object in the background image according to the first trajectory.
Optionally, the shooting apparatus 900 further includes: a second obtaining module for obtaining the first model from a model library according to the first object, where the model library comprises a sample object and the sample model corresponding to it.
Optionally, the shooting apparatus 900 further includes: a first storage module for storing a second model corresponding to a sample object in the model library in the case of receiving the second model sent by a communication object; or
a third obtaining module for obtaining a second model corresponding to the sample object according to preset information, and a second storage module for storing the second model in the model library, where the preset information comprises at least one of: link information; an information code.
Optionally, the shooting apparatus 900 further includes: a receiving module for receiving a second input of the user; a fourth obtaining module for acquiring, in response to the second input, at least two fourth images in which the image content of the sample object differs; a second output module for outputting a third model of the sample object according to the at least two fourth images; and a third storage module for storing the third model in the model library.
In the embodiment of the application, a first image comprising a first object is obtained, and a first model corresponding to the first object is fused with a background image to obtain at least two second images, where the display positions of the first model in the at least two second images are different; a video or a dynamic image is then synthesized from the at least two second images and output. In this way, the first object can be presented dynamically rather than as a still picture. Moreover, the user only needs to shoot one image to obtain a video or a dynamic image of the first object; no special video production software is needed, and the operation is simple.
The shooting apparatus in the embodiments of the present application may be an electronic device, or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal or a device other than a terminal. By way of example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), which is not specifically limited in the embodiments of the present application.
The shooting apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The shooting apparatus provided in the embodiments of the present application can implement each process implemented in the method embodiment of fig. 1; to avoid repetition, details are not described here again.
Optionally, as shown in fig. 10, an embodiment of the present application further provides an electronic device 1000, which includes a processor 1001 and a memory 1002, where the memory 1002 stores a program or instructions executable on the processor 1001; when executed by the processor 1001, the program or instructions implement the steps of the foregoing shooting method embodiments and achieve the same technical effects, which are not repeated here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device described above.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1100 includes, but is not limited to: a radio frequency unit 1101, a network module 1102, an audio output unit 1103, an input unit 1104, a sensor 1105, a display unit 1106, a user input unit 1107, an interface unit 1108, a memory 1109, a processor 1110, and the like.
Those skilled in the art will appreciate that the electronic device 1100 may further include a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 1110 via a power management system, so that charging, discharging, and power consumption management functions are managed through the power management system. The electronic device structure shown in fig. 11 does not constitute a limitation of the electronic device, which may include more or fewer components than shown, combine some components, or arrange components differently; details are not repeated here.
The processor 1110 is configured to: acquire a first image, where the first image comprises a first object; fuse a first model corresponding to the first object with a background image to obtain at least two second images, where the display positions of the first model in the at least two second images are different, and the background image is the first image or a third image; and output a target file, where the target file is obtained by synthesizing the at least two second images and comprises at least one of a video and a dynamic image.
Optionally, when fusing the first model corresponding to the first object with the background image, the processor 1110 is configured to: fuse the first model corresponding to the first object with a target background image, where the target background image is at least a partial image of the background image.
Optionally, before fusing the first model corresponding to the first object with the background image, the processor 1110 is further configured to: adjust the display position of the first model corresponding to the first object in the background image.
Optionally, when adjusting the display position of the first model corresponding to the first object in the background image, the processor 1110 is configured to: adjust the display position of the first model corresponding to the first object in the background image according to a first trajectory.
Optionally, the user input unit 1107, when adjusting the display position of the first model corresponding to the first object in the background image according to the first trajectory, is configured to: receiving a first input of a user, wherein the first input is used for determining a first track; processor 1110, when adjusting a display position of the first model corresponding to the first object in the background image according to the first trajectory, is configured to: and responding to the first input, and adjusting the display position of the first model corresponding to the first object in the background image according to the first track.
Optionally, after acquiring the first image, the processor 1110 is further configured to: acquire the first model from a model library according to the first object, wherein the model library comprises a sample object and a sample model corresponding to the sample object.
Optionally, before the first model is acquired from the model library according to the first object, the memory 1109 is configured to: in a case that a second model corresponding to the sample object sent by a communication object is received, store the second model corresponding to the sample object in the model library; or
the processor 1110 is configured to: acquire a second model corresponding to the sample object according to preset information, and the memory 1109 is configured to: store the second model in the model library; wherein the preset information comprises at least one of: link information; an information code.
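A minimal sketch of such a model library follows, assuming models are stored as raw bytes keyed by sample object and that the link information is a URL; the class and method names are illustrative, not taken from the embodiment:

# Illustrative model library: names and the downloading mechanism are
# assumptions, since the embodiment specifies what is stored, not how.
import urllib.request

class ModelLibrary:
    def __init__(self):
        self._models = {}  # sample object -> sample model (raw bytes here)

    def store_received_model(self, sample_object, second_model):
        """Case 1: a second model sent by a communication object is received."""
        self._models[sample_object] = second_model

    def store_from_preset_info(self, sample_object, link_information):
        """Case 2: acquire the second model according to preset information
        (here: link information pointing at the model data)."""
        with urllib.request.urlopen(link_information) as resp:
            self._models[sample_object] = resp.read()

    def get_first_model(self, first_object):
        """Acquire the first model from the library according to the first object."""
        return self._models.get(first_object)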
Optionally, before the first model is acquired from the model library according to the first object, the user input unit 1107 is further configured to: receive a second input of the user; the processor 1110 is further configured to: in response to the second input, acquire at least two fourth images, wherein the image content of the sample object in each fourth image is different, and output a third model of the sample object according to the at least two fourth images; and the memory 1109 is further configured to: store the third model in the model library.
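Since the embodiment leaves open how the third model is reconstructed from the at least two fourth images, the following stand-in only validates the inputs and bundles the differing views; it is a deliberate simplification, not the method of the application:

# Deliberately simplified: the application does not say how the third model is
# built from the fourth images, so this just validates the inputs and bundles
# the views as a stand-in "model".
def build_third_model(fourth_images):
    """Output a third model of the sample object from >= 2 fourth images,
    each showing different image content of the sample object."""
    if len(fourth_images) < 2:
        raise ValueError("at least two fourth images are required")
    return {"views": list(fourth_images)}  # stand-in for a real 3D model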
In the embodiment of the application, a first image comprising a first object is acquired; a first model corresponding to the first object is fused with a background image to obtain at least two second images, wherein the display positions of the first model in the at least two second images are different; and a video or a dynamic image is synthesized from the at least two second images and output. In this way, the user only needs to shoot one image to obtain a video or a dynamic image of the first object, without resorting to special video-making software, and the operation is simple.
It should be understood that, in the embodiment of the present application, the input unit 1104 may include a Graphics Processing Unit (GPU) 11041 and a microphone 11042, and the graphics processor 11041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 1106 may include a display panel 11061, and the display panel 11061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1107 includes a touch panel 11071, also called a touch screen, and other input devices 11072. The touch panel 11071 may include two parts: a touch detection device and a touch controller. The other input devices 11072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here.
The memory 1109 may be used to store software programs as well as various data. The memory 1109 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system, and an application program or instructions required for at least one function (such as a sound playing function and an image playing function), and the like. Further, the memory 1109 may include a volatile memory or a non-volatile memory, or both. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synchlink DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 1109 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 1110 may include one or more processing units. Optionally, the processor 1110 integrates an application processor, which primarily handles operations related to the operating system, the user interface, and applications, and a modem processor (such as a baseband processor), which primarily handles wireless communication signals. It will be appreciated that the modem processor may alternatively not be integrated into the processor 1110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a computer Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above shooting method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing shooting method embodiments, and achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may instead be performed in a substantially simultaneous manner or in a reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and certainly may also be implemented by hardware, although in many cases the former is the better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the methods according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A shooting method, characterized in that the method comprises:
acquiring a first image, wherein the first image comprises a first object;
fusing a first model corresponding to the first object with a background image to obtain at least two second images, wherein the display positions of the first model in the at least two second images are different, and the background image is the first image or a third image;
outputting a target file, wherein the target file is obtained by synthesizing the at least two second images;
wherein the target file comprises at least one of:
a video;
a dynamic image.
2. The method of claim 1, wherein the fusing the first model corresponding to the first object with the background image comprises:
fusing the first model corresponding to the first object with a target background image, wherein the target background image is at least a part of the background image.
3. The method of claim 1, wherein prior to fusing the first model corresponding to the first object with the background image, the method further comprises:
and adjusting the display position of the first model corresponding to the first object in the background image.
4. The method of claim 3, wherein the adjusting the display position of the first model corresponding to the first object in the background image comprises:
and adjusting the display position of the first model corresponding to the first object in the background image according to the first track.
5. The method of claim 4, wherein the adjusting the display position of the first model corresponding to the first object in the background image according to the first trajectory comprises:
receiving a first input of a user, wherein the first input is used for determining the first trajectory;
and responding to the first input, and adjusting the display position of the first model corresponding to the first object in the background image according to the first track.
6. The method of claim 1, wherein after the acquiring the first image, the method further comprises:
and acquiring a first model from a model library according to the first object, wherein the model library comprises a sample object and a sample model corresponding to the sample object.
7. The method of claim 6, wherein prior to obtaining the first model from the model library based on the first object, the method further comprises:
in a case that a second model corresponding to the sample object sent by a communication object is received, storing the second model corresponding to the sample object in the model library; or
acquiring a second model corresponding to the sample object according to preset information, and storing the second model in the model library;
wherein the preset information comprises at least one of the following:
link information;
and (4) information codes.
8. The method of claim 6, wherein prior to obtaining the first model from the model library based on the first object, the method further comprises:
receiving a second input of the user;
in response to the second input, acquiring at least two fourth images, wherein the image content of the sample object in each fourth image is different;
outputting a third model of the sample object according to the at least two fourth images;
storing the third model in the model library.
9. A shooting device, characterized in that the device comprises:
a first acquisition module for acquiring a first image, the first image including a first object;
a fusion module, configured to fuse a first model corresponding to the first object with a background image to obtain at least two second images, where display positions of the first model in the at least two second images are different, and the background image is the first image or a third image;
the first output module is used for outputting a target file, and the target file is obtained by synthesizing the at least two second images;
wherein the target file comprises at least one of:
a video;
a dynamic image.
10. An electronic device, characterized in that it comprises a processor and a memory, said memory storing a program or instructions executable on said processor, said program or instructions, when executed by said processor, implementing the steps of the shooting method according to any one of claims 1-8.
CN202210119654.0A 2022-02-08 2022-02-08 Shooting method and device and electronic equipment Pending CN114584704A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210119654.0A CN114584704A (en) 2022-02-08 2022-02-08 Shooting method and device and electronic equipment
PCT/CN2023/074318 WO2023151510A1 (en) 2022-02-08 2023-02-03 Photographing method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210119654.0A CN114584704A (en) 2022-02-08 2022-02-08 Shooting method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN114584704A (en) 2022-06-03

Family

ID=81775161

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210119654.0A Pending CN114584704A (en) 2022-02-08 2022-02-08 Shooting method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN114584704A (en)
WO (1) WO2023151510A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10187584B2 (en) * 2016-12-20 2019-01-22 Microsoft Technology Licensing, Llc Dynamic range extension to produce high dynamic range images
CN110012226A (en) * 2019-03-27 2019-07-12 联想(北京)有限公司 A kind of electronic equipment and its image processing method
CN111917979B (en) * 2020-07-27 2022-09-23 维沃移动通信有限公司 Multimedia file output method and device, electronic equipment and readable storage medium
CN113794829B (en) * 2021-08-02 2023-11-10 维沃移动通信(杭州)有限公司 Shooting method and device and electronic equipment
CN113763445A (en) * 2021-09-22 2021-12-07 黎川县凡帝科技有限公司 Static image acquisition method and system and electronic equipment
CN114584704A (en) * 2022-02-08 2022-06-03 维沃移动通信有限公司 Shooting method and device and electronic equipment

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106170068A (en) * 2015-05-19 2016-11-30 卡西欧计算机株式会社 Dynamic image generating means and dynamic image generate method
CN106816077A (en) * 2015-12-08 2017-06-09 张涛 Interactive sandbox methods of exhibiting based on Quick Response Code and augmented reality
CN105574914A (en) * 2015-12-18 2016-05-11 深圳市沃优文化有限公司 Manufacturing device and manufacturing method of 3D dynamic scene
CN108111748A (en) * 2017-11-30 2018-06-01 维沃移动通信有限公司 A kind of method and apparatus for generating dynamic image
CN109922252A (en) * 2017-12-12 2019-06-21 北京小米移动软件有限公司 The generation method and device of short-sighted frequency, electronic equipment
CN110827376A (en) * 2018-08-09 2020-02-21 北京微播视界科技有限公司 Augmented reality multi-plane model animation interaction method, device, equipment and storage medium
CN109361880A (en) * 2018-11-30 2019-02-19 三星电子(中国)研发中心 A kind of method and system showing the corresponding dynamic picture of static images or video
CN109702747A (en) * 2019-01-21 2019-05-03 广东康云科技有限公司 A kind of robot dog system and its implementation
CN109859100A (en) * 2019-01-30 2019-06-07 深圳安泰创新科技股份有限公司 Display methods, electronic equipment and the computer readable storage medium of virtual background
CN112511815A (en) * 2019-12-05 2021-03-16 中兴通讯股份有限公司 Image or video generation method and device
CN113038001A (en) * 2021-02-26 2021-06-25 维沃移动通信有限公司 Display method and device and electronic equipment
CN113408484A (en) * 2021-07-14 2021-09-17 广州繁星互娱信息科技有限公司 Picture display method, device, terminal and storage medium
CN113538642A (en) * 2021-07-20 2021-10-22 广州虎牙科技有限公司 Virtual image generation method and device, electronic equipment and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023151510A1 (en) * 2022-02-08 2023-08-17 维沃移动通信有限公司 Photographing method and apparatus, and electronic device

Also Published As

Publication number Publication date
WO2023151510A1 (en) 2023-08-17

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination