CN114143461A - Shooting method and device and electronic equipment - Google Patents

Shooting method and device and electronic equipment

Info

Publication number: CN114143461A (granted as CN114143461B)
Application number: CN202111442093.XA
Authority: CN (China)
Prior art keywords: image, shooting, target, input, exposure
Other languages: Chinese (zh)
Inventor: 柳宇航
Original and current assignee: Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Legal status: Granted; Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/62: Control of parameters via user interfaces
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N 23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951: Computational photography systems by using two or more images to influence resolution, frame rate or aspect ratio

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a shooting method, a shooting device, and an electronic device, belonging to the field of communication technology. The method comprises: receiving a first input of a user in the case that a shooting preview interface displays a first image; in response to the first input, controlling the camera to shoot a second image; performing multiple exposure fusion processing on the first image and the second image based on contour information of a target exposure object in a target image, and outputting a third image; wherein the target image comprises: the first image or the second image.

Description

Shooting method and device and electronic equipment
Technical Field
The application belongs to the field of camera shooting, and particularly relates to a shooting method and device and electronic equipment.
Background
With the growing popularity of photography in recent years, users attach increasing importance to the diversity of shooting effects offered by electronic devices (e.g., smartphones). Double exposure is a skillful photography technique: the film is exposed twice during shooting, the lens is covered with black cloth between the two exposures, and the two captured images are superimposed on the same frame of film, making the picture look richer.
In the related art, double exposure can be performed in a front-and-rear exposure mode: the front and rear cameras of the electronic device are opened simultaneously, and after the shutter is pressed a picture combining the two camera feeds is output. However, because it is difficult for the user to find a suitable composition angle during shooting, and the process is limited by the current shooting environment, the content of the composed picture blends poorly, the image content of the final synthesized image looks stiff, and the shooting effect is poor.
Disclosure of Invention
Embodiments of the present application aim to provide a shooting method, a shooting device, and an electronic device, which can solve the problem of poor shooting effect caused by poor blending of content in the synthesized picture.
In order to solve the technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides a shooting method, the method comprising: receiving a first input of a user in the case that a shooting preview interface displays a first image; in response to the first input, controlling the camera to shoot a second image; performing multiple exposure fusion processing on the first image and the second image based on contour information of a target exposure object in a target image, and outputting a third image; wherein the target image comprises: the first image or the second image.
In a second aspect, an embodiment of the present application provides a shooting device comprising a receiving module, a shooting module and a synthesis module, wherein: the receiving module is configured to receive a first input of a user in the case that a first image is displayed on the shooting preview interface; the shooting module is configured to, in response to the first input received by the receiving module, control the camera to shoot a second image; the synthesis module is configured to perform multiple exposure fusion processing on the first image and the second image based on contour information of a target exposure object in a target image and output a third image; wherein the target image comprises: the first image or the second image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, the present application provides a computer program product stored in a non-volatile storage medium, the program product being executed by at least one processor to implement the method according to the first aspect.
In the embodiment of the present application, when a first image is displayed on the shooting preview interface, the shooting device receives a first input of a user, shoots the preview picture of the shooting preview interface to obtain a second image, and then performs multiple exposure fusion processing on the first image and the second image based on contour information of a target exposure object in a target image to obtain a third image, wherein the target image comprises at least one of the first image or the second image. In this way, the shooting device can recognize the target exposure object in an image and perform multiple exposure fusion processing on the first image and the second image according to the contour information of the target exposure object in the first image or the second image, so that an image with a multiple exposure effect in the image area where the exposure object is located can be generated quickly and conveniently, optimizing the shooting effect.
Drawings
Fig. 1 is a flowchart of a shooting method provided in an embodiment of the present application;
Fig. 2 is a first schematic diagram of an interface to which a shooting method provided in an embodiment of the present application is applied;
Fig. 3 is a second schematic diagram of an interface to which a shooting method provided in an embodiment of the present application is applied;
Fig. 4 is a third schematic diagram of an interface to which a shooting method provided in an embodiment of the present application is applied;
Fig. 5 is a fourth schematic diagram of an interface to which a shooting method provided in an embodiment of the present application is applied;
Fig. 6 is a fifth schematic diagram of an interface to which a shooting method provided in an embodiment of the present application is applied;
Fig. 7 is a schematic structural diagram of a shooting device according to an embodiment of the present application;
Fig. 8 is a first schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application;
Fig. 9 is a second schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances, so that embodiments of the application may be practiced in sequences other than those illustrated or described herein; moreover, the terms "first", "second" and the like are generally used in a generic sense and do not limit the number of objects, e.g., the first object may be one or more than one. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects before and after it.
The following noun explanations are made for terms related to the embodiments of the present application:
double exposure: the double exposure is a skillful photography method in the film era, the exposure is repeated twice during shooting, black cloth is used for shielding the exposure, and images shot twice are overlapped on the same film, so that the picture looks richer. However, the irreversibility of exposure and the non-return feature of the film present significant difficulties in sheeting. In the digital era, people who take pictures often guide the taken pictures into a computer, and then overlap the pictures by using image processing software to achieve the effect of double exposure.
Backlight shooting: backlight is a kind of lighting with artistic charm and strong expressive power, which can make the picture produce an artistic effect completely different from the actual light seen on site by the naked eye. Backlight shooting is one of the means used in photography. Backlighting in the broad sense includes both full backlight and side backlight. Its basic characteristics are: in terms of light position, full backlight is light that shines toward the camera from directly behind the subject, also called "rim light"; side backlight is light cast on the subject from about 135 degrees behind it, to the left or right of the camera, so that the lit side of the subject occupies about 1/3 of its surface and the backlit side occupies about 2/3.
Silhouette portrait: a silhouette portrait is a high-contrast image shot against the light, with the subject severely underexposed; it highlights the subject's outline and delivers a strong visual impact.
The embodiment of the application provides a shooting method, which can be applied to electronic equipment, and fig. 1 shows a flowchart of the shooting method provided by the embodiment of the application. As shown in fig. 1, the shooting method provided in the embodiment of the present application may include the following steps 201 to 203:
step 201: the photographing apparatus receives a first input of a user in a case where the photographing preview interface includes a first image.
In this embodiment of the application, a preview image acquired by a camera of the electronic device is displayed in the shooting preview interface.
Optionally, in this embodiment of the application, the shooting preview interface is a shooting preview interface of the electronic device in a target shooting mode, where the target shooting mode includes at least one of the following: a double exposure mode, a silhouette portrait mode, and a normal mode.
Exemplarily, in the case that the target shooting mode is the double exposure mode, entering a double exposure shooting process to obtain an image with a double exposure effect; and under the condition that the target shooting mode is a silhouette portrait mode, entering a silhouette portrait shooting process to obtain an image with clear outline and strong light-dark contrast.
In this embodiment, the first image may be an image captured by the shooting device while the first preview screen is displayed on the shooting preview interface, or an image selected by the user on the electronic device (e.g., from an album or gallery).
In a specific implementation, the shooting device first enters the double exposure mode by default. It can then detect whether the preview picture in the shooting preview interface meets the shooting conditions corresponding to the silhouette portrait mode and, if so, prompt the user to enter the silhouette portrait mode to shoot a silhouette image (i.e., the first image). If the user declines, shooting subsequently follows the double exposure flow. If the user confirms, the silhouette portrait shooting flow is entered; after the silhouette portrait is shot, the device automatically returns to the double exposure mode, displays the shot silhouette portrait floating in the shooting preview interface, and saves it to the album. If the preview picture does not meet the shooting conditions corresponding to the silhouette portrait mode, the user is prompted to select a silhouette portrait from the album.
Alternatively, in the embodiment of the present application, the first image may be a silhouette image or a double exposure image.
In a specific implementation, the shooting device may detect whether a shooting subject (e.g., a portrait) is included in a first preview screen corresponding to the first image, and prompt the user to select an image (e.g., a silhouette portrait) including the shooting subject from the album as the first image if the shooting subject is not recognized.
It should be noted that, because the silhouette of a silhouette portrait has strongly contrasting edges and double exposure has a unique visual effect, combining the two techniques gives users more choices when taking pictures, makes the process more interesting, lets users obtain photos with a sense of achievement, and improves user satisfaction.
Optionally, in this embodiment of the application, the shooting device may display the first image in a floating manner with a preset transparency in a part of or all of an interface area of the shooting preview interface.
In a specific implementation, in the case of displaying the shooting preview interface, the shooting device may display the acquired preview image on the shooting preview interface, and display the first image in an overlapping manner with a preset transparency on an upper layer of the preview image, so that the user can view the acquired preview image through the first image.
Optionally, in this embodiment of the application, the first input may be a touch input of a user on a shooting preview interface, or other feasible inputs, which is not limited in this embodiment of the application.
Illustratively, the first input may be a click input, a slide input, a press input, or the like, by the user. Further, the click input may consist of any number of clicks, and the slide input may be a slide in any direction, such as an upward, downward, leftward or rightward slide, which is not limited in the embodiments of the present application.
Step 202: the photographing device controls the camera to photograph a second image in response to the first input.
Optionally, in this embodiment of the application, the photographing apparatus may control the camera to photograph the photographic object corresponding to the preview image to obtain the second image when the preview image is displayed on the photographing preview interface.
Alternatively, the second image may be an image captured by the camera in the double exposure mode, or an image captured by the camera in the normal shooting mode. Illustratively, the second image may be a normal image containing a natural landscape.
Alternatively, the photographing apparatus may automatically enter the double exposure mode in advance, or enter the double exposure mode triggered by the user, and then perform double exposure photographing after receiving a first input of the user to trigger photographing of the second image, that is, perform double exposure fusion processing on the second image and the first image, and output a third image with a double exposure effect.
It should be noted that, unlike the conventional shooting flow, when performing double exposure shooting the shooting device may acquire the second image and synthesize it with the first image, using the second image as the background image or the upper-layer image of the double exposure, to obtain a double exposure image containing the image content of both images.
Step 203: the shooting device performs multiple exposure fusion processing on the first image and the second image based on the contour information of the target exposure object in the target image, and outputs a third image.
Wherein the target image comprises a first image or a second image.
In this embodiment of the present application, the target exposure object may be a background image in the target image, or a foreground image. For example, the foreground image may be a photographed subject in a target image, such as a person, a building, a tree, or the like; the background image may be a large area image area of the target image except for the foreground image.
For example, the target exposure object may be selected by a user in the target image, or may be automatically determined by the camera. For example, the target exposure object may be an object in an exposure area in a target image, and for a detailed description of the exposure area, reference is made to the following text, which is not repeated herein.
In this embodiment of the present application, the contour information is a contour of a photographed target in a target image obtained by performing edge extraction on the target image. Illustratively, in the case where the target exposure object is a photographic subject in the target image, the contour information of the target exposure object is contour information of the photographic subject. For example, taking the target image as a silhouette image including a silhouette portrait, the sky and the sea, if the target exposure object is a portrait, the contour information of the target exposure object is the contour information of the silhouette portrait.
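Since a silhouette portrait's subject is severely underexposed against a bright background, its contour can be illustrated with a simple luminance threshold followed by a minimal edge extraction. This is a hedged sketch: the threshold value, the function names, and the erosion-based edge test are assumptions; a real implementation would use a proper edge-extraction algorithm.

```python
import numpy as np

def silhouette_mask(gray: np.ndarray, threshold: int = 60) -> np.ndarray:
    """Boolean mask of the silhouetted subject: dark pixels are treated
    as the subject, bright pixels as background."""
    return gray < threshold

def contour_pixels(mask: np.ndarray) -> np.ndarray:
    """Contour = subject pixels that touch at least one background pixel
    (a minimal 4-neighbour, erosion-based edge extraction)."""
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior

gray = np.array([[200, 200, 200, 200],
                 [200,  10,  10, 200],
                 [200,  10,  10, 200],
                 [200, 200, 200, 200]], dtype=np.uint8)
m = silhouette_mask(gray)
print(int(contour_pixels(m).sum()))  # 4: every subject pixel borders the background
```

For the 2x2 dark block above, all four subject pixels lie on the contour because each touches a bright neighbour.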
Optionally, in this embodiment of the application, the shooting device may perform multiple exposure fusion processing on the first image and the second image according to a multiple exposure algorithm based on the contour information of the target exposure object in the target image, so as to obtain a third image with a multiple exposure effect.
In a specific implementation, the camera may determine an image area where the target exposure object is located and a target image area of another image to be subjected to multiple exposure fusion, and perform multiple exposure fusion processing on the two image areas according to a multiple exposure algorithm to obtain a third image with a local multiple exposure effect.
In one example, the target image is a first image, and the camera may perform multiple exposure fusion processing on the first image and the second image based on contour information of a target exposure object in the first image to obtain a third image.
In a specific implementation, the photographing apparatus may fuse the target image area of the second image according to a multiple exposure algorithm in the image area corresponding to the target exposure object in the first image based on the contour information of the target exposure object in the first image.
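One way to picture this region-restricted fusion step is to blend the second image into the first only where a subject mask is set and leave the rest of the first image untouched. The function name, the blend weight, and the boolean-mask representation of the contour region are illustrative assumptions, not the patent's multiple exposure algorithm.

```python
import numpy as np

def fuse_in_region(first: np.ndarray, second: np.ndarray,
                   mask: np.ndarray, strength: float = 0.7) -> np.ndarray:
    """Multiple-exposure style fusion restricted to a region: inside
    `mask` the second image is blended into the first; outside,
    the first image is kept unchanged."""
    first_f = first.astype(np.float32)
    second_f = second.astype(np.float32)
    fused = (1.0 - strength) * first_f + strength * second_f
    out = np.where(mask, fused, first_f)
    return np.clip(out, 0, 255).astype(np.uint8)

first = np.array([[0, 255]], dtype=np.uint8)     # dark subject pixel, bright background pixel
second = np.array([[100, 100]], dtype=np.uint8)  # e.g. a landscape frame
mask = np.array([[True, False]])                 # fuse only over the subject
print(fuse_in_region(first, second, mask).tolist())  # [[70, 255]]
```

Only the masked subject pixel takes on landscape content; the background pixel is passed through, which is the "local double exposure effect" the text describes.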
Example 1: the first image is a silhouette image containing a silhouette portrait, and the second image is an ordinary image. Suppose the user wants an image in which landscape content is superimposed on the silhouette area. As shown in fig. 2(a), the user may click the capture button 20 to trigger the shooting device to capture the image 21 (i.e., the first image) in the double exposure mode, after which the shooting device displays the image 21 floating in the shooting preview interface. With the image 21 floating on the shooting preview interface, the user may move the electronic device to adjust the shooting angle and click the capture button 20 to trigger the shooting device to capture the image 22 containing the landscape content to be superimposed in the portrait area. The shooting device may then perform double exposure fusion processing on the image 21 and the image 22 according to the contour 21a of the silhouette portrait in the image 21, as shown in fig. 2(b); after the processing is completed, the shooting device outputs an image 23 in which the content of the image 22 (i.e., the clouds and wild geese in the head area of the portrait, and the wild geese, mountains and river in the body area) is superimposed on the image area where the silhouette 22a of the image 21 is located.
Example 2, the first image is taken as a silhouette containing silhouette information. When shooting is performed in the double exposure mode, after the user clicks the shooting button, as shown in fig. 3(a), the shooting device first shoots a person a to obtain an image 31 including a silhouette of the person a, then shoots a person B and a person C in the shooting field to obtain an image 32, then fuses an image area in which a part of the image 32 includes the portrait in an image area 31a corresponding to the head of the silhouette according to contour information of the head of the silhouette in the image 31, and finally outputs a double exposure image 33 in which the head area of the silhouette is fused with image contents including the person B and the person C, as shown in fig. 3 (B).
In another example, the target image is a second image, and the camera may perform multiple exposure fusion processing on the second image and the first image based on contour information of a target exposure object in the second image to obtain a third image.
In the shooting method provided by the embodiment of the application, under the condition that a first image is displayed on a shooting preview interface, a shooting device receives a first input of a user, shoots a preview picture of the shooting preview interface to obtain a second image, and then carries out multiple exposure fusion processing on the first image and the second image based on the outline information of a target exposure object in a target image to obtain a third image; wherein the target image comprises at least one of: a first image, a second image. By the method, the shooting device can identify the target exposure object in the image and carry out multiple exposure fusion processing on the first image and the second image according to the contour information of the target exposure object in the first image or the second image, so that the image with multiple exposure effect in the image area where the exposure object is located can be generated quickly and conveniently, and the shooting effect is optimized.
Optionally, in an embodiment of the present application, the first image is an image obtained by shooting the target exposure object in the shooting preview interface, and the first image contains silhouette portrait information.
Optionally, the step 203 may include the following steps 203a1 and 203b1:
Step 203a1: the shooting device determines a background image from the first image and the second image according to the shooting order of the first image and the second image.
Step 203b1: the shooting device performs multiple exposure fusion processing on the first image and the second image according to the contour information of the target exposure object, taking the background image as the background, and outputs a third image.
For example, the photographing device may determine, as the background image, an image photographed first from among the first image and the second image, or determine an image photographed later as the background image, according to a preset background image determination manner.
Optionally, the shooting device may further determine the background image from the first image and the second image according to the shooting mode of the first image and the second image.
Illustratively, the above photographing mode may include: a double exposure mode, a silhouette portrait mode, and a normal mode. Illustratively, the above-described photographing mode may be fixed, or automatically switched. For example, the first image is obtained by shooting in a silhouette portrait mode for the first time, and when shooting is performed next time, the mode is switched to a double exposure mode to perform shooting.
For example, the photographing device may determine, as the background image, an image photographed in the silhouette portrait mode from among the first image and the second image according to a preset background image determination manner, and determine an image photographed in the normal mode as an image to be superimposed.
Alternatively, the shooting device may use the determined background image as the background, superimpose the other image on an exposure area containing the target exposure object in the background image, and perform multiple exposure fusion processing on the target image and the other image according to the contour information of the target exposure object in the target image.
In one example, if the background image is determined to be the first image, the shooting device uses the first image as the background and superimposes the second image on the exposure area of the first image, according to the contour information of the target exposure object in the first image, for multiple exposure fusion processing, obtaining a third image with a local double exposure effect.
In another example, if the background image is determined to be the second image, the shooting device uses the second image as the background and superimposes the first image on the exposure area of the second image, according to the contour information of the target exposure object in the second image, for multiple exposure fusion processing, obtaining a third image with a local double exposure effect.
In this way, the shooting device can use either the image shot first or the image shot later as the background image and perform multiple exposure fusion processing on its exposure area based on the contour information of the target exposure object in that image, so that any area of the image can be freely chosen as the exposure area on which other image content is superimposed, meeting users' actual shooting needs and improving shooting flexibility.
Further optionally, in a case that the first image is an image obtained by shooting a preview screen of the shooting preview interface, before the first image is obtained, the shooting method provided in the embodiment of the present application further includes the following step 204:
step 204: and the shooting device shoots a preview picture in the shooting preview interface according to the first shooting parameters under the condition that the shooting scene corresponding to the preview image is a backlight scene to obtain a first image.
Wherein the first shooting parameter includes at least one of: exposure, contrast.
Illustratively, the above-described backlighting scene may be a backlighting environment. For example, when shooting, if the camera is shooting towards (facing or facing obliquely) the light source, it is considered to be in a backlight environment.
For example, after entering the double exposure photographing mode, the photographing apparatus may by default detect, in real time or periodically from the preview picture, whether the current shooting environment is a backlight environment, and perform shooting according to the first shooting parameter when the current environment is a backlight environment and the photographic subject is recognized to include a portrait.
For example, when the current environment is not a backlight environment, or the photographic subject is not recognized to include a portrait, the user may select an image from the album as the first image.
Illustratively, the first shooting parameter is a shooting parameter in a silhouette portrait shooting mode.
In a specific implementation, during preview the shooting device may divide the preview picture into blocks according to its resolution, count the number of highlight blocks and the proportions of highlight and low-light pixel values in the preview picture, and, in combination with the preview illuminance information (e.g., a luxindex value) and the sensitivity information (e.g., a gain value), determine whether the current shooting environment matches the backlight environment characteristics. When the backlight (or strong-backlight) environment characteristics are matched and a portrait is recognized, the user is prompted to enter the silhouette portrait shooting mode.
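The block-statistics check described above can be sketched as follows. All thresholds, the direction of the `luxindex`/`gain` comparisons, and the per-block statistics format are illustrative assumptions rather than values from this application:

```python
def is_backlit(blocks, luxindex, gain,
               highlight_thresh=0.25, lowlight_thresh=0.30,
               lux_max=120, gain_max=2.0):
    """Heuristic backlight check over per-block pixel statistics.

    `blocks` is a list of (highlight_ratio, lowlight_ratio) pairs, one
    per preview block. A backlit frame mixes strongly lit and strongly
    dark blocks while the scene itself is bright (modeled here, as an
    assumption, by a low luxindex and low sensor gain).
    """
    n = len(blocks)
    highlight_blocks = sum(1 for h, _ in blocks if h > highlight_thresh)
    lowlight_blocks  = sum(1 for _, l in blocks if l > lowlight_thresh)
    return (highlight_blocks / n > 0.2 and
            lowlight_blocks / n > 0.2 and
            luxindex < lux_max and gain < gain_max)

# 3 bright blocks, 3 dark blocks, 4 mid-tone blocks
blocks = [(0.4, 0.05)] * 3 + [(0.05, 0.5)] * 3 + [(0.1, 0.1)] * 4
print(is_backlit(blocks, luxindex=80, gain=1.2))   # True
print(is_backlit(blocks, luxindex=300, gain=1.2))  # False
```

When this check succeeds and a portrait is detected, the device would prompt the user to enter the silhouette portrait shooting mode.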
Illustratively, after entering the silhouette portrait shooting mode, the overall exposure is automatically decreased, the contrast is automatically enhanced, the subject in the preview picture is underexposed, and the background exposure is normal.
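The automatic exposure decrease and contrast enhancement can be modeled as a simple per-pixel tone transform that stretches contrast around mid-gray and shifts the result down; the offset and contrast factor below are illustrative assumptions, not the actual first shooting parameters:

```python
def silhouette_tone(pixels, exposure_offset=-60, contrast=1.5):
    """Silhouette-portrait style tone curve on grayscale values (0-255):
    stretch contrast around mid-gray, then darken overall exposure.
    Parameter values are illustrative, not the device's real settings."""
    out = []
    for p in pixels:
        v = (p - 128) * contrast + 128 + exposure_offset
        out.append(max(0, min(255, round(v))))   # clamp to valid range
    return out

# A dim subject drops to near-black (silhouette) while a bright
# backlit background remains well exposed.
print(silhouette_tone([60, 220]))   # [0, 206]
```

This is why, in the silhouette portrait mode, the subject ends up underexposed while the background exposure stays usable.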
Therefore, the shooting device can identify the current shooting scene, enter a silhouette portrait shooting mode under the condition that the shooting device is in a backlight scene and the shooting object comprises a portrait, and shoot by adopting the first shooting parameters in the mode so as to obtain the silhouette portrait meeting the requirements of a user and improve the shooting flexibility.
Optionally, in this embodiment of the present application, the target image is a first image; and the first image and the preview picture are displayed on the shooting preview interface in a superposition manner.
Optionally, the performing, in the step 203, multiple exposure fusion processing on the first image and the second image based on the contour information of the target exposure object in the target image to obtain a third image may include the following step 203b:
step 203b: the shooting device performs multiple exposure fusion processing on the first image and the second image based on the contour information of the target exposure object in the first image and the display position of the first image in the shooting preview interface, so as to obtain a third image.
Illustratively, the preview screen includes a preview screen corresponding to the second image.
For example, in the case that the shooting preview interface displays the preview screen, the shooting device may display the first image in a floating manner on the shooting preview interface, and determine the target image area to be fused in the second image according to the display position of the first image in the shooting preview interface.
Illustratively, the target image area in the second image is: the image area, in the second image, corresponding to a target screen area in the preview picture of the second image, where the target screen area is: the picture area, in the preview picture of the second image, corresponding to the picture area where the target exposure object in the first image is located. For example, if the target exposure object in the first image is the head region of a silhouette portrait, and the head region is displayed in an area A of the shooting preview interface, the target screen area of the second image is: the preview picture displayed in the area A of the shooting preview interface, and the target image area in the second image is: the image area corresponding to the preview picture displayed in the area A.
In addition, in the case where the first image is displayed on the shooting preview interface, since the preview screen is hidden by the first image, the user cannot see the preview screen on the shooting preview interface with the naked eye.
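Determining the target image area from the display position amounts to mapping a rectangle in preview-screen coordinates to full-resolution image coordinates. A minimal sketch, assuming the preview shows the whole second image scaled uniformly per axis (the sizes and the rectangle format are hypothetical):

```python
def display_rect_to_image_rect(rect, preview_size, image_size):
    """Map a rectangle (x, y, w, h) in preview-screen coordinates to the
    corresponding region of the full-resolution second image, assuming
    the preview displays the entire image scaled per axis."""
    px, py = preview_size
    ix, iy = image_size
    sx, sy = ix / px, iy / py                      # per-axis scale factors
    x, y, w, h = rect
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))

# The first image's head region occupies (100, 50, 200, 200) on a
# 1080x1920 preview; the second image is captured at 2160x3840.
print(display_rect_to_image_rect((100, 50, 200, 200),
                                 (1080, 1920), (2160, 3840)))  # (200, 100, 400, 400)
```

The returned rectangle is the target image area of the second image to be fused with the target exposure object of the first image.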
Further optionally, in this embodiment of the present application, in a case where the first image and the preview screen are displayed in a superimposed manner on the shooting preview interface, the shooting method provided in this embodiment of the present application further includes steps a1 and a2 as follows:
step A1: the photographing apparatus receives a fourth input from the user.
Step A2: and the shooting device responds to the fourth input and displays the first image in the shooting preview interface with first transparency.
Wherein the first transparency is greater than the transparency before adjustment.
Optionally, the fourth input may be a feasible input manner such as a touch input, a voice input, or a gesture input of the user, which is not limited in this embodiment of the application.
Optionally, when the preview picture of the second image is displayed, the shooting device may add or subtract an exposure value (ev value) and adjust the brightness of the second image, so as to adjust the apparent transparency of the first image, allowing the user to view the local or overall double exposure effect in the preview interface.
Therefore, under the condition that the preview images of the first image and the second image are displayed in a superposed mode on the shooting preview interface, the transparency of the first image can be adjusted, the preview image below the first image can be seen through the first image, a user can conveniently and visually check the effect image of double exposure fusion of the first image and the second image, and the flexibility of user operation is improved.
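The transparency adjustment of step A2 can be modeled as alpha blending of the first image over the preview picture; the grayscale nested-list representation and the parameterization below are illustrative assumptions:

```python
def overlay_with_transparency(preview, first_image, transparency):
    """Display the first image over the preview with a transparency in
    [0, 1]: 0 shows only the first image (preview fully hidden), 1 shows
    only the preview. A toy model of the fourth-input adjustment."""
    h, w = len(preview), len(preview[0])
    return [[round(transparency * preview[y][x]
                   + (1 - transparency) * first_image[y][x])
             for x in range(w)] for y in range(h)]

preview = [[40, 40]]
first   = [[200, 200]]
print(overlay_with_transparency(preview, first, 0.0))  # [[200, 200]] first image opaque
print(overlay_with_transparency(preview, first, 0.5))  # [[120, 120]] preview shows through
```

Raising the transparency above its pre-adjustment value is what lets the user see the underlying preview picture through the first image.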
Optionally, in this embodiment of the present application, before the step 203, the shooting method provided in this embodiment of the present application further includes the following step B1:
step B1: the shooting device determines an exposure area in the target image.
Optionally, the target exposure object is a photographic object in the exposure area.
Further alternatively, the step B1 may include the following steps C1 to C4:
step C1: the photographing device recognizes a background image and a silhouette portrait in a target image in a case where the target image includes silhouette portrait information.
Step C2: the shooting device determines at least one first image area in the target image according to the image information of the background image and the silhouette portrait and displays at least one recommendation mark.
The recommendation mark is used for indicating a first image area.
Step C3: and the shooting device receives a second input of the target recommendation identifier in the at least one recommendation identifier from the user.
Step C4: and the shooting device responds to the second input and determines the first image area indicated by the target recommendation identification as an exposure area.
Alternatively, the camera may recognize the photographic subject and the background in the first image based on an image recognition algorithm, and then determine at least one image area (i.e., the first image area) in the first image in which multiple exposure (e.g., double exposure) is possible, according to the number, position, size, and other display parameters of the photographic subject.
Alternatively, the recommendation mark may be a closed curve outlining each first image area, or the recommendation mark may be a shape superimposed on each first image area, the shape being the same as that of the first image area on which it is superimposed.
Illustratively, take the first image as a silhouette portrait. If the silhouette portrait includes a portrait (the shooting subject) together with sky and sea (i.e., the background), the shooting device can distinguish the silhouette from the background according to the contour information of the silhouette portrait; identify the number of portraits, the head-to-body ratio of each portrait, the head orientation, the position of each portrait in the picture, and the proportions of sky and sea in the background; then determine at least one first image area according to information such as the proportion of the silhouette (i.e., the shooting subject) to the background; and superimpose a recommendation mark on each first image area so that the user can select an exposure area.
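The determination of candidate first image areas from the subject/background proportions can be sketched as follows; the 10% area threshold, the region names, and the binary-mask representation of the silhouette are all illustrative assumptions:

```python
def recommend_regions(mask):
    """Propose candidate first image areas from a silhouette mask
    (nested lists of 0/1, where 1 marks the shooting subject):
    the whole picture, the silhouette part, and the background part,
    each with its pixel count. Regions smaller than 10% of the
    picture are not recommended (assumed threshold)."""
    h, w = len(mask), len(mask[0])
    subject = sum(sum(row) for row in mask)
    total = h * w
    regions = [("whole picture", total)]
    if subject / total >= 0.1:
        regions.append(("silhouette part", subject))
    if (total - subject) / total >= 0.1:
        regions.append(("background part", total - subject))
    return regions

# A 4x4 frame whose top-left quadrant is the silhouette subject
mask = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
print(recommend_regions(mask))
```

Each returned region would then be shown with its own recommendation mark for the user's second input.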
Optionally, the shooting device may generate recommendation information according to the image information of the shooting subject and the background image, so that the user selects a double exposure mode and autonomously selects a proper scene to shoot the second image according to the recommendation information, thereby obtaining an image with a better exposure fusion effect.
Illustratively, the shooting scene recommendation information may include any one of:
1) expose the whole picture: the shot object has complex texture and bright colors;
2) expose the silhouette part: the shot object has complex texture and bright colors, and the background part is a dark scene;
3) expose the background part: the shot object has complex texture and bright colors, and the background part is a dark scene with simple texture.
Illustratively, the second input may be a touch input, a voice input, a gesture input, or the like, which is possible input.
Optionally, the target recommendation identifier may include one or more recommendation identifiers.
In one implementation, after the user clicks a recommendation identifier, the image area corresponding to that identifier is selected; while the user's finger is pressed down, the image area may be highlighted with a predetermined effect (e.g., enlarged or floated), and after the finger is lifted, the image area is displayed normally.
In another implementation, after the user clicks the target recommendation identifier, the image area corresponding to the target recommendation identifier is selected; clicking the area again deselects it.
For example, as shown in fig. 4(b), if the recommended identifier is an identifier d, after the user clicks the identifier d, the shooting device determines an image area where the identifier d is located (i.e., an area where a human image is located) as the exposure area.
Therefore, the user can select the image area needing double exposure fusion according to the actual requirement, so that local double exposure is carried out on the image area in the first image, and the shooting flexibility is improved.
Further alternatively, the step B1 may include the following steps D1 and D2:
step D1: the photographing device receives a slide input of a user on the target image.
Step D2: the photographing device determines a second image area surrounded by an input track of the slide input as an exposure area in response to the slide input.
Alternatively, after receiving a slide input of the user on the target image, the camera may determine other areas outside the area surrounded by the slide trajectory in the target image as the exposure area.
For example, when the user performs a sliding input in the image area of the first image, the shooting device may display a corresponding sliding track in the image area according to the sliding input, so that the user can intuitively view the currently selected exposure area. Specifically, the shooting device may generate and display a line indicating the sliding track, and cancel the display of that line when a predetermined condition is met, where the predetermined condition includes any one of the following: after one sliding input is finished, the line forms a closed shape; the line and the interface edge form a closed figure; or the line and the image edge form a closed figure.
Illustratively, the user can draw a single figure or a plurality of figures through sliding inputs; that is, the second image area may be the area corresponding to a single figure, a plurality of areas corresponding to a plurality of figures, or a closed figure formed by overlapping or subtracting a plurality of figures.
Illustratively, in the case of displaying the above figure, the shooting device may adjust display parameters of the figure, for example, its size, position, angle, and shape. For example, after finishing drawing the figure, the user may select the area of the figure and enlarge, shrink, rotate, or move it to adjust the exposure area in the first image.
For example, after the figure is drawn, the user may choose to determine the inner region of the figure (the closed line) as the exposure area, or to determine the outer region of the figure as the exposure area.
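Deciding which pixels fall inside the closed figure traced by the slide input is a point-in-polygon test; below is a standard ray-casting sketch, where representing the slide trajectory as a list of polygon vertices is an assumption of this illustration:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is point (x, y) inside the closed polygon
    (list of (x, y) vertices) traced by the user's slide input?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge cross the horizontal ray cast rightward from (x, y)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_polygon(5, 5, square))   # True  -> pixel in exposure area
print(point_in_polygon(15, 5, square))  # False -> pixel outside
```

Running this test over every pixel yields the binary mask of the exposure area; inverting it gives the alternative where the region outside the trajectory is selected.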
For example, as shown in fig. 4(a), a first image shot in the silhouette portrait mode, i.e., image 42, is displayed in the shooting preview interface 41. After the user clicks the "double exposure" shooting button, the shooting device shoots another image (i.e., a second image). The user can select, in the image area of the image 42, an exposure area to be secondarily exposure-fused with the other image. As shown in fig. 4(b), the shooting device displays the image 42 on the shooting preview interface; the user performs a slide input in the image area of the image 42, drawing a figure a and a figure b, as well as a figure c formed with the image edge, to select the position of the second exposure in the first image. The exposure area may be the area formed by subtracting the figure b from the figure a, together with the area formed by the figure c (i.e., the shaded area in fig. 4(b)).
With reference to fig. 4(b), the shooting preview interface further displays: a "select" button 43, a "hand-draw" button 44, a "previous step" button 45, a "next step" button 46, and an "eraser" button 47. After the "select" button is clicked, the exposure area can be selected; clicking the "hand-draw" button starts hand-drawing a figure, and clicking the "eraser" button erases any drawn figure.
With reference to fig. 4(a) and 4(b), after determining the exposure area in the image 42, the camera may fuse the other image in the exposure area to perform local secondary exposure, so as to obtain a local secondary exposure image (i.e., a third image).
Therefore, the user can draw the image area which needs to be subjected to double exposure fusion in the first image according to actual requirements, so that local double exposure is carried out on the image area in the first image, and the operation flexibility is improved.
Further optionally, in this embodiment of the application, in a case that the target image is the first image, the step 203 may include the following step 203 c:
step 203c: the shooting device performs, based on the contour information of the target exposure object in the first image, image fusion on the exposure area of the first image and the target image area of the second image, so as to obtain a third image.
For example, the shooting device may fuse the target image area of the second image into the exposure area of the first image through image fusion processing, so as to obtain a third image with a local double exposure effect.
Optionally, in this embodiment of the application, before the first input of the user is received in the step 201, the shooting method provided in this embodiment of the application further includes the following steps E1 to E4:
step E1: the shooting device displays the first identification and the second identification.
The first mark is used for indicating a first shooting mode, and the second mark is used for indicating a second shooting mode.
Step E2: and the shooting device receives a third input of the first identification and the second identification from the user.
Step E3: the photographing apparatus displays a photographing mode control in response to a third input.
The shooting mode control is used for indicating the shooting mode of N times of shooting; the shooting mode control comprises N shooting mode sub-controls.
Step E4: and the shooting device controls the camera to shoot N images according to the display information of the N shooting mode sub-controls.
Wherein the N images include: the first image and the second image.
Alternatively, the third input may be an input of dragging the second identifier to the first identifier, or an input of dragging the first identifier to the second identifier. For example, the third input may be any feasible input such as a touch input, a voice input, or a gesture input.
Illustratively, the first identifier and the second identifier are used for triggering the entering of the corresponding shooting mode. For example, the first photographing mode may be a double exposure mode, and the second photographing mode may be a silhouette portrait mode.
In one implementation, a double exposure shooting mode is entered when a user clicks a first identifier alone, and a silhouette portrait shooting mode is entered when a user clicks a second identifier alone.
In another implementation, the combined shooting mode is entered when the user drags the first identifier to the second identifier, or drags the second identifier to the first identifier. In the combined shooting mode, the silhouette portrait is shot first by default, and then shooting in the normal mode is performed. Further, when the silhouette portrait is shot first, the shooting device may first identify the shooting subject in the preview picture; after the subject is identified, the preview may automatically reduce the preview brightness and enhance the contrast, so that the subject is underexposed and the background exposure is proper. If the subject is not identified, prompt information is output, the prompt information including: "the shooting subject is not identified, please select a silhouette portrait from the album", to prompt the user to select a silhouette portrait from the album.
Optionally, after receiving a third input from the user, the shooting device enters a combined shooting mode, and displays the shooting mode control on the shooting preview interface, and the user can adjust the sequence of shooting in each shooting mode in the combined shooting mode through the shooting mode control.
Optionally, the N shooting mode sub-controls include a first sub-control and a second sub-control, and in a default case, the first sub-control and the second sub-control are displayed in a shooting preview interface in a sequence from left to right, and the corresponding shooting sequence is that the shooting mode is first to enter a silhouette portrait mode to shoot a silhouette portrait, and then to enter a common mode to shoot an image.
Further optionally, after the shooting mode control is displayed in the step E3, the shooting method provided in the embodiment of the present application further includes the following steps F1 and F2:
step F1: and receiving a fourth input of the shooting mode control by the user.
Step F2: and updating the display position of at least one shooting mode sub-control in the N shooting mode sub-controls in response to the fourth input.
Optionally, the fourth input may include: an input to at least one of the N shooting mode sub-controls, or an input to a shooting mode control. The fourth input may be any feasible input such as a click input, a slide input, and a long press input.
Illustratively, the user can adjust the shooting sequence of the shooting modes according to actual needs. For example, the user may click on any one of the first sub-control and the second sub-control to adjust the display order of the first sub-control and the second sub-control, thereby adjusting the shooting order of the corresponding shooting mode. For example, after the user clicks the first sub-control, the display positions of the first sub-control and the second sub-control are exchanged.
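The reordering of the shooting mode sub-controls in steps F1 and F2 can be sketched as a swap of list positions, where the left-to-right order of the list is the shooting order; the mode names and the swap-with-neighbour behavior are illustrative assumptions:

```python
def reorder_modes(modes, clicked_index):
    """Return the shooting-mode list after the user clicks the
    sub-control at `clicked_index`: the clicked sub-control swaps
    positions with its left neighbour (or with the second sub-control
    if the first one is clicked). List order is the shooting order."""
    out = list(modes)
    j = 1 if clicked_index == 0 else clicked_index - 1
    out[clicked_index], out[j] = out[j], out[clicked_index]
    return out

modes = ["silhouette", "normal"]        # default: silhouette portrait first
print(reorder_modes(modes, 0))          # ['normal', 'silhouette']
```

After the swap, the device shoots the N images in the newly displayed order of the sub-controls.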
For example, as shown in fig. 5(a), a "double exposure" icon 51 and a "silhouette" icon 52 are displayed on the shooting preview interface, and after the user drags the icon 52 to the left over the icon 51, as shown in fig. 5(b), the two icons are combined into one icon 53, and after the user clicks the icon, the above-mentioned combination shooting mode is entered; when the user performs two slide inputs in opposite directions (e.g., slides the two fingers apart) on the icon 53, the icon 53 returns to the state in which the two icons shown in fig. 5(a) are displayed separately.
With reference to fig. 5(a) and 5(b), after the user clicks the combination icon 53, the combined shooting mode is entered. As shown in fig. 6(a), the shooting preview interface displays a control 61 and a control 62; in this display state, the shooting device defaults to shooting the silhouette portrait first and the general scene second. After the user clicks the control 61, the display positions of the control 61 and the control 62 are exchanged, as shown in fig. 6(b), and the general scene is shot first, followed by the silhouette portrait.
Therefore, the shooting device can start the combined shooting mode through the combination of the identifiers, so that a user can flexibly select the current shooting mode, and the image shot firstly can be subjected to local double exposure, or the image shot later can be subjected to local double exposure, so that the image required by the user can be obtained, and the flexibility and the interestingness of user operation are improved.
In the shooting method provided by the embodiment of the present application, the execution subject may be a shooting device, or a control module in the shooting device for executing the shooting method. The embodiment of the present application takes an example in which a shooting device executes a shooting method, and the shooting device provided in the embodiment of the present application is described.
The embodiment of the present application provides a shooting device 600, as shown in fig. 7, the device 600 includes: a receiving module 601, a photographing module 602, and a synthesizing module 603, wherein: the receiving module 601 is configured to receive a first input of a user when a first image is displayed on the shooting preview interface; the shooting module 602 is configured to control a camera to shoot a second image in response to the first input received by the receiving module 601; the synthesizing module 603 is configured to perform multiple exposure fusion processing on the first image and the second image based on contour information of a target exposure object in a target image, and output a third image; wherein the target image includes: the first image or the second image.
Optionally, in an embodiment of the present application, the first image is: shooting a target exposure object in a shooting preview interface to obtain an image, wherein the first image comprises silhouette portrait information; the above-mentioned device still includes: a determination module; the determining module is configured to determine a background image from the first image and the second image according to the shooting order of the first image and the second image; the synthesis module is specifically configured to perform multiple exposure fusion processing on the first image and the second image according to the contour information of the target exposure object with the background image determined by the determination module as a background, and output a third image.
Optionally, in this embodiment of the application, the determining module is further configured to determine an exposure area in the target image; wherein, the target exposure object is a shooting object in an exposure area.
Optionally, in an embodiment of the present application, the determining module is specifically configured to, in a case that the target image includes silhouette information, identify a background image and a silhouette in the target image; the determining module is specifically configured to determine at least one first image area in the target image according to the image information of the background image and the silhouette portrait; the display module is configured to display at least one recommended identifier, where the recommended identifier is used to indicate a first image area; the receiving module is further configured to receive a second input of the target recommendation identifier in the at least one recommendation identifier from the user; the determining module is further configured to determine, in response to the second input, the first image area indicated by the target recommendation identifier as an exposure area.
Optionally, in an embodiment of the present application, the receiving module is specifically configured to receive a sliding input of a user on a target image; the determining module is specifically configured to determine, in response to the sliding input received by the receiving module, a second image area surrounded by an input track of the sliding input as the exposure area.
Optionally, in an embodiment of the present application, the apparatus further includes: a display module; the display module is configured to display a first identifier and a second identifier, where the first identifier is used to indicate a first shooting mode, and the second identifier is used to indicate a second shooting mode; the receiving module is further configured to receive a third input for the first identifier and the second identifier; the display module is further configured to respond to a third input received by the receiving module and display a shooting mode control, where the shooting mode control is used to indicate a shooting mode for N times of shooting; the shooting mode control comprises N shooting mode sub-controls; and the shooting module is used for controlling the camera to shoot N images according to the display information of the N shooting mode sub-controls.
Optionally, in an embodiment of the present application, the apparatus further includes: an update module; the receiving module is further configured to receive a fourth input of the shooting mode control from the user; the updating module is configured to update a display position of at least one shooting mode sub-control of the N shooting mode sub-controls in response to the fourth input received by the receiving module.
In the photographing device provided by the embodiment of the application, when a first image is displayed on a photographing preview interface, the photographing device receives a first input of a user, photographs a preview screen of the photographing preview interface to obtain a second image, and then performs multiple exposure fusion processing on the first image and the second image based on contour information of a target exposure object in the target image to obtain a third image; wherein the target image comprises at least one of: a first image, a second image. By the method, the shooting device can identify the target exposure object in the image and carry out multiple exposure fusion processing on the first image and the second image according to the contour information of the target exposure object in the first image or the second image, so that the image with multiple exposure effect in the image area where the exposure object is located can be generated quickly and conveniently, and the shooting effect is optimized.
The shooting device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The photographing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The shooting device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to 5, and is not described here again to avoid repetition.
Optionally, as shown in fig. 8, an electronic device 700 is further provided in this embodiment of the present application, and includes a processor 701, a memory 702, and a program or an instruction stored in the memory 702 and executable on the processor 701, where the program or the instruction is executed by the processor 701 to implement each process of the foregoing shooting method embodiment, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 9 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 9 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, which is not described here again.
The user input unit 107 is configured to receive a first input from a user when a first image is displayed on the shooting preview interface; the input unit 104 is configured to control the camera to capture a second image in response to the first input received by the user input unit 107; the processor 110 is configured to perform multiple exposure fusion processing on the first image and the second image based on contour information of a target exposure object in a target image, and output a third image; wherein the target image includes: the first image or the second image.
Optionally, in an embodiment of the present application, the first image is: shooting a target exposure object in a shooting preview interface to obtain an image, wherein the first image comprises silhouette portrait information; the processor 110 is configured to determine a background image from the first image and the second image according to the shooting order of the first image and the second image; the processor 110 is specifically configured to perform multiple exposure fusion processing on the first image and the second image according to the contour information of the target exposure object with the determined background image as a background, and output a third image.
Optionally, in this embodiment of the present application, the processor 110 is further configured to determine an exposure area in the target image; wherein, the target exposure object is a shooting object in an exposure area.
Optionally, in this embodiment of the present application, the processor 110 is specifically configured to, in a case that the target image includes silhouette portrait information, identify a background image and a silhouette portrait in the target image; the processor 110 is specifically configured to determine at least one first image area in the target image according to the image information of the background image and the silhouette portrait; the display unit 106 is configured to display at least one recommendation identifier, where each recommendation identifier is used to indicate one first image area; the user input unit 107 is further configured to receive a second input of a target recommendation identifier in the at least one recommendation identifier from the user; and the processor 110 is further configured to determine, in response to the second input, the first image area indicated by the target recommendation identifier as an exposure area.
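As an illustration only (not the patented implementation), the step of identifying a silhouette portrait and deriving a candidate first image area to recommend can be sketched in NumPy; the luminance threshold and the bounding-box heuristic are assumptions introduced for this sketch:

```python
import numpy as np

def recommend_silhouette_region(image, thresh=60):
    """Propose an exposure area: the bounding box (top, left, bottom, right)
    of pixels dark enough to be treated as the silhouette portrait."""
    gray = image.mean(axis=2)           # crude luminance proxy (assumption)
    ys, xs = np.nonzero(gray < thresh)  # pixels assumed to belong to the silhouette
    if ys.size == 0:
        return None                     # no silhouette detected in the image
    return int(ys.min()), int(xs.min()), int(ys.max()) + 1, int(xs.max()) + 1
```

A real implementation would segment the portrait rather than threshold luminance, but the shape of the step — detect the silhouette, then derive one or more rectangular candidate areas to display as recommendation identifiers — is the same.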
Optionally, in this embodiment of the application, the user input unit 107 is specifically configured to receive a sliding input of a user on a target image; the processor 110 is specifically configured to determine, in response to the sliding input received by the user input unit 107, a second image area surrounded by an input track of the sliding input as an exposure area.
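A hedged sketch of how a sliding input could yield the second image area: the touch points of the input track form a closed polygon, which is rasterized into a boolean exposure mask with the even-odd (ray casting) point-in-polygon rule. The function name and pure-NumPy approach are illustrative assumptions, not the patent's method:

```python
import numpy as np

def track_to_mask(track, height, width):
    """Rasterize the region enclosed by a slide-input track (a sequence of
    (x, y) touch points) into a boolean exposure-area mask, using the
    even-odd (ray casting) point-in-polygon rule."""
    pts = np.asarray(track, dtype=np.float64)
    ys, xs = np.mgrid[0:height, 0:width]
    inside = np.zeros((height, width), dtype=bool)
    for i in range(len(pts)):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % len(pts)]    # implicitly close the track
        crosses = ((y1 > ys) != (y2 > ys)) & (
            xs < (x2 - x1) * (ys - y1) / (y2 - y1 + 1e-12) + x1)
        inside ^= crosses                   # toggle on each edge crossing
    return inside
```

Closing the polygon with the `% len(pts)` wrap-around matches the behavior described here: the area "surrounded by" the input track becomes the exposure area even if the user's stroke does not end exactly where it began.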
Optionally, in this embodiment of the present application, the display unit 106 is configured to display a first identifier and a second identifier, where the first identifier is used to indicate a first shooting mode, and the second identifier is used to indicate a second shooting mode; the user input unit 107 is further configured to receive a third input for the first identifier and the second identifier; the display unit 106 is further configured to display a shooting mode control in response to the third input received by the user input unit 107, where the shooting mode control is used to indicate a shooting mode for N times of shooting and includes N shooting mode sub-controls; and the input unit 104 is configured to control the camera to shoot N images according to the display information of the N shooting mode sub-controls.
Optionally, in this embodiment of the application, the user input unit 107 is further configured to receive a fourth input of the shooting mode control from the user; the processor 110 is configured to update a display position of at least one shooting mode sub-control of the N shooting mode sub-controls in response to the fourth input received by the user input unit 107.
In the electronic device provided by the embodiment of the present application, when a first image is displayed on the shooting preview interface, the electronic device receives a first input of a user, shoots the preview picture of the shooting preview interface to obtain a second image, and then performs multiple exposure fusion processing on the first image and the second image based on the contour information of a target exposure object in a target image to obtain a third image, where the target image includes the first image or the second image. In this way, the electronic device can identify the target exposure object in an image and perform multiple exposure fusion processing on the first image and the second image according to the contour information of the target exposure object in the first image or the second image, so that an image with a multiple exposure effect in the image area where the exposure object is located can be generated quickly and conveniently, optimizing the shooting effect.
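The contour-limited fusion summarized above can be sketched as follows. This is a minimal NumPy approximation under assumed parameters (a fixed luminance threshold standing in for the detected contour, equal 50/50 blend weights), not the method actually claimed:

```python
import numpy as np

def fuse_double_exposure(first, second, thresh=60):
    """Blend `second` into `first` only inside the dark silhouette of
    `first`, so the multiple exposure effect is confined to the image
    area where the target exposure object is located."""
    gray = first.mean(axis=2)                             # luminance proxy
    mask = (gray < thresh)[..., None].astype(np.float64)  # 1 inside silhouette
    blended = 0.5 * first + 0.5 * second                  # plain double exposure
    out = first * (1.0 - mask) + blended * mask           # restrict to the mask
    return out.astype(np.uint8)
```

The key property this illustrates is the one the embodiment relies on: outside the contour mask the background of the first image is preserved unchanged, while inside it the two captures are exposed together.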
It should be understood that, in the embodiment of the present application, the input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042; the graphics processing unit 1041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 109 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 110 may integrate an application processor, which mainly handles the operating system, user interfaces, applications, and the like, and a modem processor, which mainly handles wireless communication. It can be appreciated that the modem processor may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above shooting method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
The embodiment of the present application provides a computer program product, which is stored in a non-volatile storage medium and executed by at least one processor to implement the processes of the above-mentioned shooting method embodiment, and can achieve the same technical effects.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may be performed in a substantially simultaneous manner or in a reverse order, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (15)

1. A photographing method, characterized in that the method comprises:
receiving a first input of a user in a case that a first image is displayed on a shooting preview interface;
responding to the first input, and controlling a camera to shoot a second image;
performing multiple exposure fusion processing on the first image and the second image based on contour information of a target exposure object in a target image, and outputting a third image;
wherein the target image comprises: the first image or the second image.
2. The method of claim 1, wherein the first image is: shooting a target exposure object in the shooting preview interface to obtain an image, wherein the first image comprises silhouette portrait information;
the multi-exposure fusion processing is performed on the first image and the second image based on the contour information of the target exposure object in the target image, and a third image is output, and the method comprises the following steps:
determining a background image from the first image and the second image according to the shooting sequence of the first image and the second image;
and performing multiple exposure fusion processing on the first image and the second image according to the contour information of the target exposure object by taking the background image as a background, and outputting a third image.
3. The method according to claim 1, wherein before performing multiple exposure fusion processing on the first image and the second image based on contour information of a target exposure object in a target image and outputting a third image, the method further comprises:
determining an exposure area in the target image;
wherein the target exposure object is a photographic object within the exposure area.
4. The method of claim 3, wherein determining an exposure area in the target image comprises:
under the condition that the target image comprises silhouette portrait information, identifying a background image and a silhouette portrait in the target image;
determining at least one first image area in a target image according to the image information of the background image and the silhouette portrait, and displaying at least one recommendation identifier, wherein the recommendation identifier is used for indicating one first image area;
receiving a second input of a target recommendation identifier in the at least one recommendation identifier from the user;
in response to the second input, determining the first image area indicated by the target recommendation identifier as an exposure area.
5. The method of claim 3, wherein determining an exposure area in the target image comprises:
receiving a sliding input of a user on the target image;
in response to the slide input, a second image area surrounded by an input track of the slide input is determined as an exposure area.
6. The method of claim 1, wherein prior to receiving the first input from the user, the method further comprises:
displaying a first identifier and a second identifier, wherein the first identifier is used for indicating a first shooting mode, and the second identifier is used for indicating a second shooting mode;
receiving a third input of the first identifier and the second identifier by a user;
displaying a shooting mode control for indicating a shooting mode for N times of shooting in response to the third input; the shooting mode control comprises N shooting mode sub-controls;
and controlling the camera to shoot N images according to the display information of the N shooting mode sub-controls.
7. The method of claim 6, wherein after displaying the shooting mode control, further comprising:
receiving a fourth input of the shooting mode control by a user;
and updating the display position of at least one shooting mode sub-control in the N shooting mode sub-controls in response to the fourth input.
8. A photographing apparatus, characterized in that the apparatus comprises: a receiving module, a shooting module, and a synthesis module, wherein:
the receiving module is used for receiving a first input of a user under the condition that a first image is displayed on the shooting preview interface;
the shooting module is used for responding to the first input received by the receiving module and controlling a camera to shoot a second image;
the synthesis module is used for carrying out multiple exposure fusion processing on the first image and the second image based on the contour information of a target exposure object in a target image and outputting a third image;
wherein the target image comprises: the first image or the second image.
9. The apparatus of claim 8, wherein the first image is: shooting a target exposure object in the shooting preview interface to obtain an image, wherein the first image comprises silhouette portrait information;
the device further comprises: a determination module;
the determining module is used for determining a background image from the first image and the second image according to the shooting sequence of the first image and the second image;
the synthesis module is specifically configured to perform multiple exposure fusion processing on the first image and the second image according to the contour information of the target exposure object with the background image determined by the determination module as a background, and output a third image.
10. The apparatus of claim 8,
the determining module is further used for determining an exposure area in the target image;
wherein the target exposure object is a photographic object within the exposure area.
11. The apparatus of claim 10, further comprising: a display module;
the determining module is specifically configured to identify a background image and a silhouette portrait in the target image under the condition that the target image includes silhouette portrait information;
the determining module is specifically configured to determine at least one first image area in the target image according to the image information of the background image and the silhouette portrait;
the display module is used for displaying at least one recommendation identifier, and the recommendation identifier is used for indicating one first image area;
the receiving module is further configured to receive a second input of the target recommendation identifier in the at least one recommendation identifier from the user;
the determining module is further configured to determine, in response to the second input, a first image area indicated by the target recommendation identifier as an exposure area.
12. The apparatus of claim 10,
the receiving module is specifically configured to receive a sliding input of a user on the target image;
the determining module is specifically configured to determine, in response to the sliding input received by the receiving module, a second image area surrounded by an input track of the sliding input as an exposure area.
13. The apparatus of claim 8, further comprising: a display module;
the display module is used for displaying a first identifier and a second identifier, wherein the first identifier is used for indicating a first shooting mode, and the second identifier is used for indicating a second shooting mode;
the receiving module is further configured to receive a third input for the first identifier and the second identifier;
the display module is further configured to display a shooting mode control in response to the third input received by the receiving module, where the shooting mode control is used to indicate a shooting mode for N times of shooting; the shooting mode control comprises N shooting mode sub-controls;
and the shooting module is used for controlling the camera to shoot N images according to the display information of the N shooting mode sub-controls.
14. The apparatus of claim 13, further comprising: an update module;
the receiving module is further configured to receive a fourth input of the shooting mode control from the user;
the updating module is configured to update a display position of at least one shooting mode sub-control of the N shooting mode sub-controls in response to the fourth input received by the receiving module.
15. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the photographing method according to any one of claims 1-7.
CN202111442093.XA 2021-11-30 Shooting method and device and electronic equipment Active CN114143461B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111442093.XA CN114143461B (en) 2021-11-30 Shooting method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN114143461A true CN114143461A (en) 2022-03-04
CN114143461B CN114143461B (en) 2024-04-26


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024041394A1 (en) * 2022-08-26 2024-02-29 华为技术有限公司 Photographing method and related apparatus

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103293825A (en) * 2013-06-26 2013-09-11 深圳市中兴移动通信有限公司 Multiple exposure method and device
CN105208288A (en) * 2015-10-21 2015-12-30 维沃移动通信有限公司 Photo taking method and mobile terminal
CN106851125A (en) * 2017-03-31 2017-06-13 努比亚技术有限公司 A kind of mobile terminal and multiple-exposure image pickup method
CN110611768A (en) * 2019-09-27 2019-12-24 北京小米移动软件有限公司 Multiple exposure photographic method and device
CN111866388A (en) * 2020-07-29 2020-10-30 努比亚技术有限公司 Multiple exposure shooting method, equipment and computer readable storage medium
CN113382169A (en) * 2021-06-18 2021-09-10 荣耀终端有限公司 Photographing method and electronic equipment




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant