CN114143461B - Shooting method and device and electronic equipment - Google Patents
- Publication number
- CN114143461B CN114143461B CN202111442093.XA CN202111442093A CN114143461B CN 114143461 B CN114143461 B CN 114143461B CN 202111442093 A CN202111442093 A CN 202111442093A CN 114143461 B CN114143461 B CN 114143461B
- Authority
- CN
- China
- Prior art keywords
- image
- shooting
- target
- input
- exposure
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/951—Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
Abstract
The application discloses a shooting method, a shooting apparatus, and an electronic device, and belongs to the field of communication technology. The method comprises the following steps: receiving a first input from a user in the case that a shooting preview interface includes a first image; controlling a camera to capture a second image in response to the first input; and performing multiple exposure fusion processing on the first image and the second image based on contour information of a target exposure object in a target image, and outputting a third image, wherein the target image includes the first image or the second image.
Description
Technical Field
The application belongs to the field of imaging technology, and in particular relates to a shooting method, a shooting apparatus, and an electronic device.
Background
With the growing popularity of photography in recent years, users pay increasing attention to the variety of shooting effects offered by electronic devices (e.g., smartphones). Double exposure is an ingenious photographic technique: the same frame is exposed twice, with black cloth covering the lens between the two exposures, so that the two captured images overlap on the same film and the resulting picture is richer.
In the related art, double exposure can be performed in a front-and-back mode: the front and rear cameras of the electronic device are started simultaneously, and after shooting, the user taps the front-and-back effect to synthesize and output a photo. However, because it is difficult for the user to find a suitable composition angle during this shooting process, and the process is limited by the current shooting environment, the content of the synthesized picture blends poorly, the image content of the final synthesized image looks stiff, and the shooting effect is poor.
Disclosure of Invention
The embodiment of the application aims to provide a shooting method, a shooting apparatus, and an electronic device, which can solve the problem of poor shooting effect caused by poor blending of content within the synthesized picture.
In order to solve the technical problems, the application is realized as follows:
In a first aspect, an embodiment of the present application provides a photographing method, comprising: receiving a first input from a user in the case that a shooting preview interface includes a first image; controlling a camera to capture a second image in response to the first input; and performing multiple exposure fusion processing on the first image and the second image based on contour information of a target exposure object in a target image, and outputting a third image, wherein the target image includes the first image or the second image.
In a second aspect, an embodiment of the present application provides a photographing apparatus, comprising a receiving module, a shooting module, and a synthesizing module, wherein: the receiving module is configured to receive a first input from a user when a first image is displayed on a shooting preview interface; the shooting module is configured to control a camera to capture a second image in response to the first input received by the receiving module; and the synthesizing module is configured to perform multiple exposure fusion processing on the first image and the second image based on contour information of a target exposure object in a target image and output a third image, wherein the target image includes the first image or the second image.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instruction stored on the memory and executable on the processor, the program or instruction implementing the steps of the method according to the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored on a non-volatile storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, in the case that a first image is displayed on the shooting preview interface, the photographing device receives a first input from the user, shoots the preview picture of the shooting preview interface to obtain a second image, and then performs multiple exposure fusion processing on the first image and the second image based on contour information of a target exposure object in a target image to obtain a third image, wherein the target image includes at least one of the first image and the second image. In this way, the photographing device can recognize the target exposure object in an image and perform multiple exposure fusion processing on the first and second images according to the contour information of the target exposure object in the first or second image, so that an image with a multiple exposure effect in the image area where the exposure object is located can be generated quickly and conveniently, optimizing the shooting effect.
Drawings
Fig. 1 is a flowchart of a photographing method according to an embodiment of the present application;
fig. 2 is a schematic diagram of an interface to which a photographing method according to an embodiment of the present application is applied;
FIG. 3 is a second schematic diagram of an interface applied by a photographing method according to an embodiment of the present application;
FIG. 4 is a third schematic diagram of an interface to which a photographing method according to an embodiment of the present application is applied;
FIG. 5 is a fourth schematic diagram of an interface to which a photographing method according to an embodiment of the present application is applied;
FIG. 6 is a fifth schematic diagram of an interface to which a photographing method according to an embodiment of the present application is applied;
Fig. 7 is a schematic structural diagram of a photographing apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 9 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without creative effort shall fall within the scope of the application.
The terms "first", "second", and the like in the description and claims are used to distinguish between similar objects, and are not necessarily used to describe a particular order or sequence. It should be understood that the terms so used are interchangeable where appropriate, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. Objects distinguished by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The following explains terms involved in the embodiments of the present application:
Double exposure: double exposure is an ingenious photographic technique in the film era, repeated exposure is carried out twice during shooting, black cloth is used for shielding between the two exposures, and images shot twice are overlapped on the same film, so that the pictures are more abundant. However, the irreversible nature of film exposure and the inability to review the film present significant difficulties in sheeting. In the digital era, shooting fans often import the shot pictures into a computer, and then overlap the pictures by using image processing software to achieve the effect of double exposure.
Backlighting photography: the back light is an illumination with artistic appeal and strong expressive force, which can make the picture produce artistic effect completely different from the actual light seen by naked eyes at present. Backlighting is one means for photographing light. Backlight in a broad sense shall include both full backlight and side backlight. The basic characteristics of the device are as follows: from the optical position, the total back light is light irradiated from the back of the subject toward the camera, and is also called "backlight"; the side backlight is light emitted from the left and right 135 deg. back side of the camera to the subject, the light receiving surface of the subject is 1/3, and the backlight surface is 2/3.
Silhouette portraits: the silhouette portrait uses the principle of backlight to shoot the main body into a high contrast image with serious underexposure, and the silhouette of the main body is highlighted, so that a strong visual impact effect is brought to people.
The embodiment of the application provides a shooting method which can be applied to electronic equipment, and fig. 1 shows a flow chart of the shooting method provided by the embodiment of the application. As shown in fig. 1, the photographing method provided by the embodiment of the present application may include the following steps 201 to 203:
Step 201: the photographing device receives a first input of a user in a case where the photographing preview interface includes a first image.
In the embodiment of the application, the shooting preview interface displays a preview image acquired by a camera of the electronic device.
Optionally, in an embodiment of the present application, the shooting preview interface is a shooting preview interface of the electronic device in a target shooting mode, where the target shooting mode includes at least one of the following: a double exposure mode, a silhouette portrait mode, and a normal mode.
For example, if the target shooting mode is the double exposure mode, the device enters the double exposure shooting flow to obtain an image with a double exposure effect; if the target shooting mode is the silhouette portrait mode, the device enters the silhouette portrait shooting flow to obtain an image with a clear outline and strong light-dark contrast.
In the embodiment of the present application, the first image may be an image obtained by shooting by the shooting device when the first preview screen is displayed on the shooting preview interface, or the first image may be an image selected by a user in an electronic device (for example, an album or gallery).
In a specific implementation, the photographing device enters the double exposure mode by default. It can then detect whether the preview picture in the shooting preview interface meets the shooting conditions corresponding to the silhouette portrait mode, and prompt the user to enter the silhouette portrait mode to capture a silhouette image (i.e., the first image). If the user declines, subsequent shooting uses the double exposure shooting flow; if the user confirms, the device enters the silhouette portrait shooting flow, automatically returns to the double exposure mode after the silhouette is captured, displays the captured silhouette image floating over the shooting preview interface, and saves it to the album. If the preview picture does not meet the shooting conditions corresponding to the silhouette portrait mode, the user is prompted to select a silhouette portrait from the album.
Alternatively, in an embodiment of the present application, the first image may be a silhouette image or a double exposure image.
In a specific implementation, the photographing device may detect whether the first preview screen corresponding to the first image includes a photographing subject (e.g., a portrait), and prompt the user to select an image (e.g., a silhouette portrait) including the photographing subject from the album as the first image if the photographing subject is not recognized.
It should be noted that, owing to the strong light-dark contrast characteristic of silhouette portraits, combining them with double exposure produces a unique visual effect. Combining the two techniques therefore offers users more shooting choices, adds interest to the shooting process, lets users obtain a photo effect with a sense of accomplishment, and improves user satisfaction.
Alternatively, in the embodiment of the present application, the photographing device may hover-display the first image with a preset transparency in a part or all of the interface area of the photographing preview interface.
In a specific implementation, when the shooting preview interface is displayed, the shooting device may display the collected preview image on the shooting preview interface, and display the first image in a superimposed manner with a preset transparency on an upper layer of the preview image, so that a user may view the collected preview image through the first image.
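The semi-transparent floating display described above is standard alpha compositing. A minimal sketch follows; the `alpha` value stands in for the "preset transparency", which the text does not specify:

```python
import numpy as np

def overlay_with_transparency(preview: np.ndarray,
                              first_image: np.ndarray,
                              alpha: float = 0.5) -> np.ndarray:
    """Composite the previously captured first image over the live
    preview at a preset transparency, so the user can still see the
    preview 'through' it.

    alpha = 0.0 shows only the preview; alpha = 1.0 shows only the
    first image. Both arrays are uint8 RGB of the same shape.
    """
    p = preview.astype(np.float32)
    f = first_image.astype(np.float32)
    return ((1.0 - alpha) * p + alpha * f).round().astype(np.uint8)
```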
Optionally, in the embodiment of the present application, the first input may be a touch input of the user on the shooting preview interface, or other feasible inputs, which is not limited in the embodiment of the present application.
Illustratively, the first input described above is a click input, a slide input, a press input, or the like. Further, the click operation may comprise any number of clicks, and the slide operation may be a slide in any direction, for example upward, downward, leftward, or rightward, which is not limited in the embodiment of the present application.
Step 202: the photographing device controls the camera to photograph the second image in response to the first input.
Optionally, in the embodiment of the present application, when the preview picture is displayed on the shooting preview interface, the photographing device may control the camera to photograph the shooting object corresponding to the preview picture to obtain the second image.
Alternatively, the second image may be an image obtained by the photographing device in the double exposure mode, or an image obtained by the photographing device in the normal photographing mode. The second image may be a general image including natural landscapes, for example.
Alternatively, the photographing device may automatically enter the double exposure mode in advance, or be triggered by the user to enter the double exposure mode, and then, after receiving the first input of the user to trigger photographing the second image, perform double exposure photographing, that is, perform double exposure fusion processing on the second image and the first image, and output the third image with double exposure effect.
In addition, unlike the conventional photographing flow, in the case of performing double exposure photographing, the photographing device may acquire the second image, and perform image composition with the first image using the second image as a double-exposed background image or an upper image to obtain a double-exposure image including image contents of two images.
Step 203: the photographing device performs multiple exposure fusion processing on the first image and the second image based on contour information of a target exposure object in the target image, and outputs a third image.
Wherein the target image includes a first image or a second image.
In the embodiment of the present application, the target exposure object may be a background image or a foreground image in the target image. Illustratively, the above-described foreground image may be a photographic subject in a target image, such as a person, a building, a tree, or the like; the background image may be a large-area image area other than the foreground image in the target image.
The target exposure object may, for example, be selected by the user in the target image or determined automatically by the photographing device. For example, the target exposure object may be an object in an exposure area of the target image; a detailed description of the exposure area is given below and is not repeated here.
In the embodiment of the application, the contour information is the contour of the shooting target obtained by extracting the edges of the target image. For example, in the case that the target exposure object is the subject of the target image, the contour information of the target exposure object is the contour information of that subject. Taking a silhouette image as an example of the target image: the silhouette image includes a silhouette portrait, sky, and sea; if the target exposure object is the portrait, the contour information of the target exposure object is the contour information of the silhouette portrait.
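The patent does not specify the edge-extraction algorithm. For a silhouette image specifically, the subject is strongly underexposed against a bright background, so a simple luminance threshold is one plausible way to recover the subject region and its boundary. The threshold value below is illustrative only:

```python
import numpy as np

def silhouette_mask(gray: np.ndarray, threshold: int = 60) -> np.ndarray:
    """Boolean mask of the silhouetted subject: in a silhouette
    image the subject is much darker than the background, so a
    luminance threshold recovers the subject region."""
    return gray < threshold

def contour_of(mask: np.ndarray) -> np.ndarray:
    """Boundary pixels of the mask: pixels inside the mask that
    have at least one 4-neighbour outside it."""
    padded = np.pad(mask, 1, constant_values=False)
    up = padded[:-2, 1:-1]
    down = padded[2:, 1:-1]
    left = padded[1:-1, :-2]
    right = padded[1:-1, 2:]
    interior = up & down & left & right   # all 4 neighbours inside
    return mask & ~interior
```

A production pipeline would more likely use a contour tracer or segmentation model; this sketch only illustrates the idea of turning a silhouette into a region mask plus boundary.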
Optionally, in the embodiment of the present application, the photographing device may perform multiple exposure fusion processing on the first image and the second image according to a multiple exposure algorithm based on the profile information of the target exposure object in the target image, so as to obtain a third image with multiple exposure effects.
In a specific implementation, the photographing device may determine an image area where the target exposure object is located and a target image area of another image to be subjected to multiple exposure fusion, and perform multiple exposure fusion processing on the two image areas according to a multiple exposure algorithm, so as to obtain a third image with a local multiple exposure effect.
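The "local multiple exposure effect" described above amounts to blending the two images only inside the region bounded by the target object's contour, leaving the rest of the background image untouched. A minimal sketch, assuming the region is given as a boolean mask and using an illustrative 50/50 blend weight (real pipelines may weight by exposure quality):

```python
import numpy as np

def fuse_in_region(first: np.ndarray, second: np.ndarray,
                   mask: np.ndarray, weight: float = 0.5) -> np.ndarray:
    """Multiple-exposure fusion restricted to `mask`: inside the
    mask the two uint8 images are blended; outside it the first
    image is kept unchanged, giving a local double-exposure look."""
    f = first.astype(np.float32)
    s = second.astype(np.float32)
    blended = (1.0 - weight) * f + weight * s
    # broadcast a 2-D mask over RGB channels if needed
    m = mask[..., None] if first.ndim == 3 else mask
    out = np.where(m, blended, f)
    return out.round().astype(np.uint8)
```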
In an example, the target image is a first image, and the photographing device may perform multiple exposure fusion processing on the first image and the second image based on contour information of a target exposure object in the first image, so as to obtain a third image.
In a specific implementation, the photographing device may fuse the target image area of the second image according to the multiple exposure algorithm in the image area corresponding to the target exposure object in the first image based on the contour information of the target exposure object in the first image.
Example 1: take the first image as a silhouette image containing silhouette portrait information, and the second image as an ordinary scenery image. Suppose the user wants an image in which scenery content is superimposed on the silhouette portrait area. As shown in (a) of fig. 2, the user may first click the photographing button 20 to trigger the photographing device to capture the image 21 (i.e., the first image) in the double exposure mode, after which the photographing device displays the image 21 floating over the preview interface. With the image 21 floating over the shooting preview interface, the user may move the electronic device to adjust the shooting angle and click the photographing button 20 to capture the image 22, which contains the scenery content to be superimposed on the portrait region. The photographing device then performs double exposure fusion processing on the images 21 and 22 according to the contour 21a of the silhouette portrait in the image 21. As shown in (b) of fig. 2, after processing is completed the photographing device outputs the image 23, in which the content of the image 22 (i.e., the clouds and wild geese in the portrait head region, the wild geese in the portrait body region, and the mountain peaks and river) is superimposed on the image region occupied by the silhouette portrait 22a of the image 21.
Example 2: take the first image as a silhouette containing silhouette portrait information. When shooting in the double exposure mode, after the user clicks the shooting button, as shown in (a) of fig. 3, the photographing device shoots person A to obtain an image 31 containing a silhouette portrait of person A, then shoots persons B and C in the shooting field of view to obtain an image 32. According to the contour information of the head in the silhouette portrait, it fuses part of the image 32 into the image area 31a corresponding to the head of the silhouette portrait, and finally outputs a double exposure image 33 in which the head area of the silhouette portrait is fused with image content containing persons B and C, as shown in (b) of fig. 3.
In another example, the target image is a second image, and the photographing device may perform multiple exposure fusion processing on the second image and the first image based on contour information of a target exposure object in the second image, so as to obtain a third image.
In the shooting method provided by the embodiment of the application, in the case that a first image is displayed on the shooting preview interface, the photographing device receives a first input from the user, shoots the preview picture of the shooting preview interface to obtain a second image, and then performs multiple exposure fusion processing on the first image and the second image based on contour information of a target exposure object in a target image to obtain a third image, wherein the target image includes at least one of the first image and the second image. In this way, the photographing device can recognize the target exposure object in an image and perform multiple exposure fusion processing on the first and second images according to the contour information of the target exposure object in the first or second image, so that an image with a multiple exposure effect in the image area where the exposure object is located can be generated quickly and conveniently, optimizing the shooting effect.
Optionally, in an embodiment of the present application, the first image is an image obtained by shooting the target exposure object in the shooting preview interface, and the first image includes silhouette portrait information.
Alternatively, the step 203 may include the following steps 203a1 and 203b1:
step 203a1: the photographing device determines a background image from the first image and the second image according to the photographing sequence of the first image and the second image.
Step 203b1: the imaging device performs multiple exposure fusion processing on the first image and the second image according to contour information of a target exposure object by using the background image as a background, and outputs a third image.
For example, the photographing device may determine, as the background image, an image obtained by photographing first, or determine, as the background image, an image obtained by photographing later, from among the first image and the second image, according to a preset background image determination manner.
Optionally, the photographing device may further determine the background image from the first image and the second image according to a photographing mode of the first image and the second image.
The shooting modes may include, for example: a double exposure mode, a silhouette portrait mode, and a normal mode. The shooting mode may be fixed, or may be automatically switched, for example. For example, when shooting is performed in the silhouette image mode for the first time to obtain a first image, the shooting is performed in the double exposure mode when shooting is performed for the next time.
For example, the photographing device may determine, according to a preset background image determining manner, an image photographed in a silhouette portrait mode as a background image and an image photographed in a normal mode as an image to be superimposed, from among the first image and the second image.
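The two background-selection rules described above (by capture order and by capture mode) might be sketched as follows; the field names and mode strings are illustrative, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class Shot:
    image_id: str
    mode: str      # 'silhouette', 'double_exposure', or 'normal' (assumed labels)
    order: int     # capture order: 0 = shot first

def choose_background(a: Shot, b: Shot, rule: str = "mode") -> Shot:
    """Pick which of the two captured images serves as the fusion
    background. Rule 'mode': the shot taken in silhouette portrait
    mode is the background and the normal-mode shot is superimposed.
    Rule 'order': the earlier shot is the background."""
    if rule == "mode":
        if a.mode == "silhouette":
            return a
        if b.mode == "silhouette":
            return b
    # fall back to (or explicitly use) capture order
    return a if a.order < b.order else b
```

A preset could equally prefer the later shot as background; the point is only that the choice is a fixed, user-visible rule rather than something decided per pixel.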
Alternatively, with the determined background image as the background, the photographing device may superimpose the other image on the exposure area containing the target exposure object in the background image, and perform multiple exposure fusion processing on the two images according to the contour information of the target exposure object in the target image.
In one example, if the background image is determined to be the first image, the photographing device uses the first image as the background image, and superimposes the second image on the exposure area of the first image according to the contour information of the target exposure object in the first image to perform the secondary exposure fusion processing, so as to obtain the third image with the local double exposure effect.
In another example, if the background image is determined to be the second image, the photographing device uses the second image as the background image, and superimposes the first image on the exposure area of the second image according to the contour information of the target exposure object in the second image to perform the secondary exposure fusion processing, so as to obtain the third image with the local double exposure effect.
In this way, the photographing device can use either the earlier-shot or the later-shot image as the background image and, based on the contour information of the target exposure object in that image, perform multiple exposure fusion processing on its exposure area. Any area of the image can thus be freely selected as the exposure area for superimposing other image content, meeting users' actual shooting requirements and improving shooting flexibility.
Further optionally, in the case that the first image is an image obtained after the capturing of the preview image of the capturing preview interface, before the obtaining of the first image, the capturing method provided in the embodiment of the present application further includes the following step 204:
Step 204: and the shooting device shoots the preview picture in the shooting preview interface according to the first shooting parameter to obtain a first image when the shooting scene corresponding to the preview image is a backlight scene.
Wherein the first shooting parameters include at least one of the following: exposure, contrast.
Illustratively, the backlight scene may be a backlight environment. For example, when photographing, if the camera photographs against (facing or diagonally opposite to) the light source, it is considered to be in a backlight environment.
In an exemplary embodiment, when entering the double exposure photographing mode, the photographing device may, by default, detect in real time or periodically, according to the preview picture, whether the current photographing environment is a backlight environment, and perform photographing according to the first shooting parameter when the current environment is a backlight environment and a photographing subject including a portrait is identified.
For example, when the current environment is not a backlight environment, or no photographing subject including a portrait is recognized, the user may select an image from the album as the first image.
The first shooting parameter is a shooting parameter in a silhouette portrait shooting mode.
In a specific implementation, the photographing device may divide the preview image into blocks according to its resolution, count the number of highlight blocks, the proportion of highlight pixel values, and the proportion of low-light pixel values in the preview image, and combine these statistics with the illuminance information (e.g., luxIndex values) and the sensitivity information (e.g., gain values) of the preview image to determine whether the current shooting environment matches the backlight environment characteristics. When the backlight or strong-backlight environment characteristics are matched and a portrait is identified, the user is prompted to enter the silhouette portrait shooting mode.
Illustratively, after entering the silhouette portrait shooting mode, the overall exposure automatically decreases, the contrast automatically increases, the shot subject in the preview screen is underexposed, and the background is normally exposed.
Therefore, the shooting device can identify the current shooting scene, enter a silhouette portrait shooting mode under the condition that the shooting object comprises a portrait in a backlight scene, and shoot by adopting the first shooting parameters in the mode so as to obtain the silhouette portrait meeting the requirements of the user, and improve the shooting flexibility.
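The backlight-scene test described above (block statistics combined with luxIndex/gain metadata) can be sketched roughly as follows. All thresholds, the block count, and the metadata scales are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

# Illustrative thresholds; a real implementation would tune these per sensor.
HI_THRESH, LO_THRESH = 0.85, 0.15

def looks_backlit(gray, lux_index, gain,
                  min_hi_ratio=0.15, min_lo_ratio=0.25, blocks=8):
    """Rough backlight test on a grayscale preview frame in [0, 1].

    Splits the frame into `blocks` x `blocks` tiles, counts highlight
    tiles, then requires both a sizable highlight pixel share and a
    sizable shadow pixel share, plus plausible ambient-light metadata
    (the lux_index / gain scales here are assumed, not from the patent).
    """
    h, w = gray.shape
    bh, bw = h // blocks, w // blocks
    hi_blocks = 0
    for by in range(blocks):
        for bx in range(blocks):
            tile = gray[by*bh:(by+1)*bh, bx*bw:(bx+1)*bw]
            if tile.mean() > HI_THRESH:
                hi_blocks += 1
    hi_ratio = (gray > HI_THRESH).mean()   # share of highlight pixels
    lo_ratio = (gray < LO_THRESH).mean()   # share of shadow pixels
    bright_scene = lux_index < 200 and gain < 4.0  # hypothetical scale
    return (hi_blocks > 0 and hi_ratio >= min_hi_ratio
            and lo_ratio >= min_lo_ratio and bright_scene)
```

A frame that passes this test (and in which a portrait is detected) would then trigger the silhouette portrait mode prompt.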
Optionally, in an embodiment of the present application, the target image is a first image; and the first image and the preview screen are displayed in a superimposed manner on the shooting preview interface.
Optionally, in the step 203, multiple exposure fusion processing is performed on the first image and the second image based on the profile information of the target exposure object in the target image to obtain the target image, which may include the following step 203b:
step 203b: the shooting device carries out multiple exposure fusion processing on the first image and the second image based on the outline information of the target exposure object in the first image and the display position of the first image in the shooting preview interface so as to obtain the target image.
The preview screen includes, for example, a preview screen corresponding to the second image.
For example, in the case where the photographing preview interface displays the preview screen, the photographing device may suspend and display the first image on the photographing preview interface, and determine the target image area to be fused in the second image according to the display position of the first image in the photographing preview interface.
Illustratively, the target image area in the second image is the image area corresponding to a target picture area in the preview picture of the second image, where the target picture area is the area, in the preview picture of the second image, corresponding to the image area in which the target exposure object of the first image is located. For example, if the target exposure object in the first image is the head area of a silhouette portrait, and the head area is displayed in area A of the shooting preview interface, then the target picture area of the second image is the preview picture displayed in area A of the shooting preview interface, and the target image area is the image area corresponding to the preview picture displayed in area A.
When the first image is displayed on the shooting preview interface, the preview picture is covered by the first image, so the user cannot see the preview picture directly on the shooting preview interface.
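A minimal sketch of mapping a display position in the shooting preview interface (e.g., area A above) to the corresponding region of the full-resolution second image might look like the following; the function name and the uniform-scaling assumption are hypothetical.

```python
def preview_rect_to_image_rect(rect, preview_size, image_size):
    """Map a rectangle in preview coordinates to the matching region
    of the full-resolution second image.

    rect: (x, y, w, h) in preview pixels, e.g. where area A is displayed
    preview_size: (preview_w, preview_h)
    image_size: (image_w, image_h)

    Assumes the preview is a uniformly scaled view of the sensor image
    (no crop); a real pipeline would also handle aspect-ratio crops.
    """
    x, y, w, h = rect
    pw, ph = preview_size
    iw, ih = image_size
    sx, sy = iw / pw, ih / ph
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))
```

The returned rectangle is the target image area of the second image to be fused with the exposure area of the first image.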
Further optionally, in the embodiment of the present application, in a case where the first image and the preview screen are displayed in a superimposed manner on the shooting preview interface, the shooting method provided in the embodiment of the present application further includes the following steps A1 and A2:
step A1: the camera receives a fourth input from the user.
Step A2: the photographing device displays the first image in the photographing preview interface with a first transparency in response to the fourth input.
Wherein, the first transparency is larger than the transparency before adjustment.
Optionally, the fourth input may be a touch input, a voice input, or a gesture input of the user, which is not limited in any way.
Optionally, when displaying the preview picture of the second image, the photographing device may increase or decrease the exposure value (EV) to adjust the brightness of the second image, thereby achieving the visual effect of adjusting the transparency of the first image, so that the user can view the double exposure effect of the partial or whole image in the preview interface.
Therefore, under the condition that preview pictures of the first image and the second image are displayed in a superimposed mode on the shooting preview interface, the transparency of the first image can be adjusted, so that the preview picture below the first image is visible through the first image, a user can conveniently and intuitively check the effect picture of double exposure fusion of the first image and the second image, and the flexibility of user operation is improved.
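A simple way to preview the effect described above is a plain alpha composite of the first image over the live preview. This is a hedged stand-in for the EV/brightness adjustment the text mentions, not the patent's exact method; the function name and transparency convention are assumptions.

```python
import numpy as np

def composite_preview(first_image, preview, transparency):
    """Overlay `first_image` on the live `preview` with a given
    transparency: 0 means the first image is fully opaque and hides
    the preview, 1 means the first image is fully transparent.

    Both inputs are HxWx3 float arrays in [0, 1].
    """
    t = float(np.clip(transparency, 0.0, 1.0))
    return first_image * (1.0 - t) + preview * t
```

Raising the transparency (the "first transparency" above being larger than the value before adjustment) lets the preview picture underneath show through the first image.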
Optionally, in the embodiment of the present application, before the step 203, the photographing method provided in the embodiment of the present application further includes the following step B1:
step B1: the photographing device determines an exposure area in the target image.
Optionally, the target exposure object is a photographic object in the exposure area.
Further alternatively, the above step B1 may include the following steps C1 to C4:
step C1: in the case where the target image includes silhouette image information, the photographing device recognizes a background image and a silhouette image in the target image.
Step C2: the shooting device determines at least one first image area in the target image according to the image information of the background image and the silhouette image, and displays at least one recommendation identifier.
Wherein the recommendation identifier is used for indicating a first image area.
Step C3: the photographing device receives a second input of a target recommendation identifier in the at least one recommendation identifier from a user.
Step C4: the photographing device determines a first image area indicated by the target recommendation identification as an exposure area in response to the second input.
Alternatively, the photographing device may identify the photographing subject and the background in the first image based on an image recognition algorithm, and then determine at least one image area (i.e., a first image area) in the first image in which multiple exposure (e.g., double exposure) is possible according to the number, position, size, etc. of the photographing subjects.
Alternatively, the recommendation identifier may be a closed curve outlining each first image area, or a shape superimposed on each first image area, where the superimposed shape matches the shape of the corresponding first image area.
Illustratively, take the first image being a silhouette portrait as an example. If the silhouette image includes a portrait (the photographing subject) and sky and sea (i.e., the background), the photographing device may distinguish the silhouette portrait from the background according to the contour information of the silhouette portrait; identify the number of portraits, the head-to-body proportion of each portrait, the orientation of the head, the position of the portrait in the picture, and the proportions of sky and sea in the background; and then determine at least one first image area according to such information as the proportions of the silhouette portrait (i.e., the photographing subject) and the background, superimposing a recommendation identifier on each first image area so that the user can select an exposure area from them.
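The region-recommendation step can be approximated from a silhouette segmentation mask as below. This mirrors the spirit of the step, not the patent's exact heuristics (head/body ratios, sky/sea proportions, etc.); the labels and the 50% area threshold are assumptions.

```python
import numpy as np

def recommend_regions(silhouette_mask):
    """Propose candidate first image areas from a silhouette segmentation.

    silhouette_mask: HxW bool array, True where the silhouetted subject is.
    Returns (label, (x, y, w, h)) proposals: the full frame, the subject's
    bounding box, and - if the subject occupies a minority of the frame -
    the background as a separate candidate.
    """
    h, w = silhouette_mask.shape
    proposals = [("whole_frame", (0, 0, w, h))]
    ys, xs = np.nonzero(silhouette_mask)
    if len(xs) > 0:
        x0, x1 = xs.min(), xs.max()
        y0, y1 = ys.min(), ys.max()
        proposals.append(("silhouette",
                          (int(x0), int(y0),
                           int(x1 - x0 + 1), int(y1 - y0 + 1))))
        if silhouette_mask.mean() < 0.5:  # subject is a minority of pixels
            proposals.append(("background", (0, 0, w, h)))
    return proposals
```

Each proposal would then be drawn as a recommendation identifier for the user to pick an exposure area from.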
Optionally, the photographing device may generate recommendation information according to image information of the photographing subject and the background image, so that a user selects a double exposure mode and autonomously selects a suitable scene to photograph the second image according to the recommendation information, thereby obtaining an image with a better exposure fusion effect.
Illustratively, the shooting scene recommendation information may include any one of the following:
1) Expose the whole picture: the texture of the photographed object is complex and its colors are vivid;
2) Expose the silhouette part: the texture of the photographed object is complex, its colors are vivid, and the background part is a dark scene;
3) Expose the background part: the texture of the photographed object is complex, its colors are vivid, and the background part is a dark scene with simple texture.
Illustratively, the second input may be a touch input, a voice input, a gesture input, or the like.
Optionally, the target recommendation identifier may include one or more recommendation identifiers.
In one implementation, after the user clicks a recommendation identifier, the image area corresponding to that identifier is selected; while the user's finger remains pressed, the area is highlighted with a predetermined effect (e.g., zoomed in or floating), and after the finger is lifted, the image area is displayed normally.
In another implementation, after the user clicks the target recommendation identifier, an image area corresponding to the target recommendation identifier may be selected, and then the user clicks the area again, so that the image area may be deselected.
For example, as shown in fig. 4 (b), if the recommended mark is the mark d, after the user clicks the mark d, the photographing device determines the image area where the mark d is located (i.e., the area where the portrait is located) as the exposure area.
Therefore, a user can select the image area which needs double exposure fusion in the first image according to actual requirements, so that local double exposure is performed on the image area in the first image, and the shooting flexibility is improved.
Further alternatively, the step B1 may include the following steps D1 and D2:
step D1: the camera receives a user's sliding input on the target image.
Step D2: the photographing device determines a second image area surrounded by an input track of the slide input as an exposure area in response to the slide input.
Alternatively, after receiving the sliding input of the user on the target image, the photographing device may determine the other area than the area surrounded by the sliding track in the target image as the exposure area.
For example, when the user performs a sliding input on the image area of the first image, the photographing device may display a corresponding sliding track on the image area according to the sliding input of the user, so that the user intuitively views the currently selected exposure area. Specifically, the photographing device may generate and display a line indicating the sliding track according to the sliding track, and cancel the display of the line indicating the sliding track when a predetermined condition is satisfied, where the predetermined condition includes any one of the following: after one-time sliding input is finished, after the lines form a closed shape, the lines and the interface edges form a closed graph, and the lines and the image edges form a closed graph.
For example, the user may draw a single graphic or multiple graphics through sliding input; that is, the exposure area may be the area corresponding to a single graphic, multiple areas corresponding to multiple graphics, or a closed graphic formed by overlapping or subtracting multiple graphics.
Illustratively, in the case of displaying the above graphics, the photographing device may adjust display parameters of the graphics, such as size, position, angle, and shape. For example, after drawing is completed, the user may select a graphic region and zoom in, zoom out, rotate, or move it to adjust the exposure area of the first image.
For example, after drawing of the above shape is completed, the user may choose to determine either the area inside the shape (the closed line) or the area outside the shape as the exposure area.
For example, as shown in fig. 4 (a), when the first image captured in the silhouette portrait mode, i.e., the image 42, is displayed in the shooting preview interface 41, after the user clicks the "double exposure" shooting button, the photographing device captures another image (i.e., the second image), and the user may select, in the image area of the image 42, an exposure area to be fused with the other image by double exposure. As shown in fig. 4 (b), the photographing device displays the image 42 in the shooting preview interface; the user performs sliding inputs in the image area of the image 42, drawing the graphic a and the graphic b in the image area of the image 42, as well as the graphic c formed with the image edge, to select the double exposure position in the first image. The exposure area may be the area formed by subtracting the graphic b from the graphic a, together with the area formed by the graphic c (i.e., the hatched areas in fig. 4 (b)).
Referring to fig. 4 (b), the shooting preview interface further includes a "select" button 43, a "hand-drawing" button 44, an "up" button 45, a "down" button 46, and an "eraser" button 47. Clicking the "select" button allows the exposure area to be selected; clicking the "hand-drawing" button starts hand-drawing a graphic; and clicking the "eraser" button erases any drawn graphic.
In combination with fig. 4 (a) and fig. 4 (b), after determining the exposure area in the image 42, the photographing device may fuse the other image in the exposure area and perform local secondary exposure to obtain a local secondary exposure image (i.e., a third image).
Therefore, a user can draw an image area needing double exposure fusion in the first image according to actual requirements, so that local double exposure is carried out on the image area in the first image, and the flexibility of operation is improved.
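The sliding-input selection above (a union of drawn graphics, subtraction of one graphic from another, and the inside/outside choice) can be approximated by rasterizing the closed tracks into a boolean mask. The even-odd ray-casting fill below is an illustrative implementation, not taken from the patent.

```python
import numpy as np

def polygon_mask(shape, polygon):
    """Rasterize a closed polygon (list of (x, y) vertices) into a bool
    mask of the given (h, w) shape using the even-odd rule."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    inside = np.zeros((h, w), dtype=bool)
    n = len(polygon)
    for i in range(n):
        x0, y0 = polygon[i]
        x1, y1 = polygon[(i + 1) % n]
        # Toggle for rays crossing this edge; epsilon guards the
        # (never-selected) horizontal-edge division.
        crosses = ((y0 > ys) != (y1 > ys)) & \
                  (xs < (x1 - x0) * (ys - y0) / (y1 - y0 + 1e-12) + x0)
        inside ^= crosses
    return inside

def combine_masks(shape, add_polys, subtract_polys=(), invert=False):
    """Union the drawn shapes, carve out the subtracted ones, and
    optionally select the outside instead of the inside (cf. the
    graphic-a-minus-graphic-b example in the text)."""
    mask = np.zeros(shape, dtype=bool)
    for poly in add_polys:
        mask |= polygon_mask(shape, poly)
    for poly in subtract_polys:
        mask &= ~polygon_mask(shape, poly)
    return ~mask if invert else mask
```

The resulting mask can then serve directly as the exposure area for the fusion step.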
Further alternatively, in the embodiment of the present application, in the case where the target image is the first image, the step 203 may include the following step 203c:
Step 203c: the photographing device performs image fusion on the exposure area of the first image and the target image area of the second image based on the contour information of the target exposure object in the first image, so as to obtain the third image.
For example, the photographing device may perform image fusion processing on the exposure area of the first image and the target image area of the second image to obtain a third image with a local double exposure effect.
Optionally, in the embodiment of the present application, before receiving the first input of the user in the step 201, the photographing method provided in the embodiment of the present application further includes the following steps E1 to E4:
Step E1: the shooting device displays the first mark and the second mark.
The first identifier is used for indicating a first shooting mode, and the second identifier is used for indicating a second shooting mode.
Step E2: the photographing device receives a third input of a user to the first identifier and the second identifier.
Step E3: the camera displays a shooting mode control in response to a third input.
The shooting mode control is used for indicating shooting modes of N times of shooting; the shooting mode control comprises N shooting mode sub-controls.
Step E4: the photographing device controls the camera to shoot N images according to the display information of the N shooting mode sub-controls.
Wherein the N images include: the first image and the second image.
Alternatively, the third input may be an input that drags the second identifier to the first identifier, or an input that drags the first identifier to the second identifier. The third input may be any feasible input, such as a touch input, a voice input, or a gesture input.
The first identifier and the second identifier are used for triggering the corresponding shooting modes. For example, the first shooting mode may be a double exposure mode, and the second shooting mode may be a silhouette portrait mode.
In one implementation, the double exposure shooting mode is entered when the user clicks the first identifier alone, and the silhouette portrait shooting mode is entered when the user clicks the second identifier alone.
In another implementation, the combined shooting mode is entered when the user drags the first identifier to the second identifier, or drags the second identifier to the first identifier. In the combined shooting mode, silhouette portrait shooting is performed by default, followed by normal-mode shooting. Further, when the silhouette portrait is shot first, the photographing device first identifies the photographing subject in the preview image; after the subject is identified, the preview brightness is automatically reduced and the contrast is enhanced, so that the subject is underexposed while the background is properly exposed. If no subject is identified, prompt information is output, the prompt information including: "No photographing subject identified; please select a silhouette portrait from the album", so as to prompt the user to select a silhouette portrait from the album.
Optionally, after receiving the third input of the user, the photographing device enters a combined photographing mode, and displays the photographing mode control on a photographing preview interface, where the user may adjust, through the photographing mode control, a photographing sequence performed in each photographing mode in the combined photographing mode.
Optionally, the N shooting mode sub-controls include a first sub-control and a second sub-control. By default, the first sub-control and the second sub-control are displayed on the shooting preview interface from left to right, and the corresponding shooting order is to shoot a silhouette portrait in the silhouette portrait mode first, and then shoot an image in the normal mode.
Further optionally, after the shooting mode control is displayed in the step E3, the shooting method provided by the embodiment of the present application further includes the following steps F1 and F2:
step F1: and receiving a fourth input of the shooting mode control by the user.
Step F2: and in response to the fourth input, updating the display position of at least one shooting mode sub-control in the N shooting mode sub-controls.
Optionally, the fourth input may include: input to at least one of the N photography mode sub-controls, or input to a photography mode control. Illustratively, the fourth input may be any one of a click input, a slide input, a long press input, and the like, which has feasibility.
For example, the user may adjust the shooting order of the shooting modes according to the actual requirement. For example, the user may click on any one of the first sub-control and the second sub-control to adjust the display order of the two, thereby adjusting the photographing order of their corresponding photographing modes. For example, after the user clicks the first sub-control, the display positions of the first sub-control and the second sub-control are swapped.
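The swap of the two sub-controls (and hence of the shooting order) can be sketched as a simple list operation; the function name and the two-control simplification are assumptions for illustration.

```python
def swap_shooting_order(modes, clicked):
    """Return a new shooting order after the user taps one mode
    sub-control: the tapped mode and the next one exchange positions
    (with two controls, tapping either swaps the pair)."""
    order = list(modes)
    i = order.index(clicked)
    j = (i + 1) % len(order)
    order[i], order[j] = order[j], order[i]
    return order
```

For the default order ["silhouette_portrait", "normal"], tapping either sub-control yields ["normal", "silhouette_portrait"], matching the position exchange described above.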
For example, as shown in fig. 5 (a), the shooting preview interface displays a "double exposure" icon 51, a "silhouette portrait" icon 52, and after the user drags the icon 52 to the left over the icon 51, as shown in fig. 5 (b), the two icons are combined into one icon 53, and after the user clicks the icon, the above-mentioned combined shooting mode is entered; after the user performs two opposite-direction sliding inputs (e.g., two-finger sliding) on the icon 53, the icon 53 returns to the state in which the two icons shown in fig. 5 (a) are displayed separately.
Referring to fig. 5 (a) and fig. 5 (b), after the user clicks the combined icon 53, the combined shooting mode is entered. As shown in fig. 6 (a), the shooting preview interface displays the control 61 and the control 62; in this display state, the photographing device by default shoots a silhouette portrait first and then shoots a general scene. After the user clicks the control 61, as shown in fig. 6 (b), the display positions of the control 61 and the control 62 are exchanged, and the general scene is shot first, followed by the silhouette portrait.
Therefore, the shooting device can start the combined shooting mode through the combination of the marks, so that a user can flexibly select the current shooting mode, and the local double exposure of the image shot earlier or the local double exposure of the image shot later can be realized, so that the image required by the user can be obtained, and the flexibility and the interestingness of the user operation are improved.
It should be noted that, in the photographing method provided in the embodiment of the present application, the execution subject may be a photographing device, or a control module in the photographing device for executing the photographing method. In the embodiment of the present application, the photographing device executing the photographing method is taken as an example to describe the photographing device provided by the embodiment of the present application.
An embodiment of the present application provides a photographing apparatus 600, as shown in fig. 7, the apparatus 600 includes: a receiving module 601, a shooting module 602, and a synthesizing module 603, wherein: the receiving module 601 is configured to receive a first input from a user when a first image is displayed on a shooting preview interface; the shooting module 602 is configured to control the camera to shoot a second image in response to the first input received by the receiving module 601; the synthesizing module 603 is configured to perform multiple exposure fusion processing on the first image and the second image based on contour information of a target exposure object in the target image, and output a third image; wherein the target image includes: the first image or the second image.
Optionally, in an embodiment of the present application, the first image is: an image obtained after shooting a target exposure object in a shooting preview interface, wherein the first image comprises silhouette image information; the device further comprises: a determining module; the determining module is used for determining a background image from the first image and the second image according to the shooting sequence of the first image and the second image; the synthesizing module is specifically configured to perform multiple exposure fusion processing on the first image and the second image according to the contour information of the target exposure object by using the background image determined by the determining module as a background, and output a third image.
Optionally, in an embodiment of the present application, the determining module is further configured to determine an exposure area in the target image; wherein the target exposure object is a shooting object in an exposure area.
Optionally, in an embodiment of the present application, the determining module is specifically configured to identify a background image and a silhouette image in the target image when the target image includes silhouette image information; the determining module is specifically configured to determine at least one first image area in the target image according to the image information of the background image and the silhouette portrait; the display module is used for displaying at least one recommendation identifier, and the recommendation identifier is used for indicating a first image area; the receiving module is further configured to receive a second input of a target recommendation identifier in the at least one recommendation identifier from a user; the determining module is further configured to determine, in response to the second input, a first image area indicated by the target recommendation identifier as an exposure area.
Optionally, in an embodiment of the present application, the receiving module is specifically configured to receive a sliding input of a user on a target image; the determining module is specifically configured to determine, as the exposure area, a second image area surrounded by an input track of the sliding input in response to the sliding input received by the receiving module.
Optionally, in an embodiment of the present application, the apparatus further includes: a display module; the display module is used for displaying a first mark and a second mark, wherein the first mark is used for indicating a first shooting mode, and the second mark is used for indicating a second shooting mode; the receiving module is further used for receiving a third input for the first identifier and the second identifier; the display module is further configured to display a shooting mode control in response to the third input received by the receiving module, where the shooting mode control is used to indicate a shooting mode of N shots; the shooting mode controls comprise N shooting mode sub-controls; and the shooting module is used for controlling the camera to shoot N images according to the display information of the N shooting mode sub-controls.
Optionally, in an embodiment of the present application, the apparatus further includes: an updating module; the receiving module is further configured to receive a fourth input of a shooting mode control by a user; and the updating module is configured to update, in response to the fourth input received by the receiving module, the display position of at least one shooting mode sub-control among the N shooting mode sub-controls.
In the photographing device provided by the embodiment of the application, under the condition that the photographing preview interface displays the first image, the photographing device receives the first input of a user, photographs the preview picture of the photographing preview interface to obtain the second image, and then performs multiple exposure fusion processing on the first image and the second image based on the outline information of the target exposure object in the target image to obtain the third image; wherein the target image includes at least one of: a first image, a second image. According to the method, the shooting device can identify the target exposure object in the image, and the first image and the second image are subjected to multiple exposure fusion processing according to the outline information of the target exposure object in the first image or the second image, so that the image with multiple exposure effects in the image area where the exposure object is located can be quickly and conveniently generated, and the shooting effect is optimized.
The shooting device in the embodiment of the application can be a device, and can also be a component, an integrated circuit or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), etc., and the non-mobile electronic device may be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (personal computer, PC), a Television (TV), a teller machine, a self-service machine, etc., and the embodiments of the present application are not limited in particular.
The photographing device in the embodiment of the application may be a device having an operating system. The operating system may be an Android operating system, an ios operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The photographing device provided by the embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to 5, and in order to avoid repetition, a detailed description is omitted here.
Optionally, as shown in fig. 8, the embodiment of the present application further provides an electronic device 700, including a processor 701, a memory 702, and a program or an instruction stored in the memory 702 and capable of running on the processor 701, where the program or the instruction implements each process of the above-mentioned shooting method embodiment when executed by the processor 701, and the process can achieve the same technical effect, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 9 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, and processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further include a power source (e.g., a battery) for powering the various components, and the power source may be logically coupled to the processor 110 via a power management system, so that functions such as managing charging, discharging, and power consumption are performed via the power management system. The electronic device structure shown in fig. 9 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange components differently, which is not described in detail herein.
Wherein, the user input unit 107 is configured to receive a first input of a user when the first image is displayed on the shooting preview interface; the input unit 104 is configured to control the camera to capture a second image in response to the first input received by the user input unit 107; the processor 110 is configured to perform multiple exposure fusion processing on the first image and the second image based on contour information of a target exposure object in the target image, and output a third image; wherein the target image includes: the first image or the second image.
Optionally, in an embodiment of the present application, the first image is: an image obtained after shooting the target exposure object in the shooting preview interface, wherein the first image includes silhouette image information; the processor 110 is configured to determine a background image from the first image and the second image according to the shooting order of the first image and the second image; and the processor 110 is specifically configured to take the determined background image as the background, perform multiple exposure fusion processing on the first image and the second image according to the contour information of the target exposure object, and output a third image.
Optionally, in an embodiment of the present application, the processor 110 is further configured to determine an exposure area in the target image; wherein the target exposure object is a shooting object in an exposure area.
Optionally, in an embodiment of the present application, the processor 110 is specifically configured to identify a background image and a silhouette portrait in the target image when the target image includes silhouette image information; the processor 110 is specifically configured to determine at least one first image area in the target image according to the image information of the background image and the silhouette portrait; the display unit 106 is configured to display at least one recommendation identifier, where each recommendation identifier indicates one first image area; the user input unit 107 is further configured to receive a second input of a user on a target recommendation identifier in the at least one recommendation identifier; and the processor 110 is further configured to determine, in response to the second input, the first image area indicated by the target recommendation identifier as the exposure area.
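One way to picture the recommendation step is to derive candidate first image areas from a binary silhouette mask. The heuristic below (the bounding box of the silhouette pixels) is purely illustrative: the patent does not specify how candidate areas are computed, and the function name and mask representation are assumptions.

```python
def recommend_first_image_areas(silhouette_mask):
    """Propose candidate first image areas from a binary silhouette mask.

    silhouette_mask: list of rows; truthy entries mark silhouette pixels.
    Returns a list of (x_min, y_min, x_max, y_max) boxes; here just the
    single bounding box of all silhouette pixels, as a stand-in for the
    unspecified recommendation logic.
    """
    coords = [(x, y)
              for y, row in enumerate(silhouette_mask)
              for x, v in enumerate(row) if v]
    if not coords:
        return []  # no silhouette pixels: nothing to recommend
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return [(min(xs), min(ys), max(xs), max(ys))]
```

Each returned box would then be rendered as one recommendation identifier on the display unit, and the user's second input selects which box becomes the exposure area.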
Optionally, in the embodiment of the present application, the user input unit 107 is specifically configured to receive a sliding input of a user on a target image; the processor 110 is specifically configured to determine, as the exposure area, a second image area surrounded by an input track of the sliding input in response to the sliding input received by the user input unit 107.
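Determining the second image area enclosed by the sliding input's track amounts to rasterising a closed polygon into a mask. Below is a minimal sketch using the even-odd rule, assuming (this is not stated in the patent) that the track is sampled as a list of (x, y) vertices and treated as a closed polygon.

```python
def point_in_track(x, y, track):
    """Even-odd rule: is the point (x, y) inside the closed input track?

    track: list of (x, y) vertices sampled from the user's sliding input,
    treated as a closed polygon (a hypothetical representation).
    """
    inside = False
    n = len(track)
    for i in range(n):
        x1, y1 = track[i]
        x2, y2 = track[(i + 1) % n]
        # Count crossings of a horizontal ray cast to the right of (x, y).
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def exposure_mask(width, height, track):
    """Rasterise the area enclosed by the track into a binary mask:
    1 inside the exposure area, 0 outside (pixel centres are sampled)."""
    return [[1 if point_in_track(x + 0.5, y + 0.5, track) else 0
             for x in range(width)]
            for y in range(height)]
```

The resulting mask plays the role of the contour information used by the fusion step: pixels inside the track receive the multiple exposure effect, pixels outside keep the background.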
Optionally, in an embodiment of the present application, the display unit 106 is configured to display a first identifier and a second identifier, where the first identifier indicates a first shooting mode and the second identifier indicates a second shooting mode; the user input unit 107 is further configured to receive a third input for the first identifier and the second identifier; the display unit 106 is further configured to display a shooting mode control in response to the third input received by the user input unit 107, where the shooting mode control indicates a shooting mode of N shots and includes N shooting mode sub-controls; and the processor 110 is configured to control the camera to shoot N images according to the display information of the N shooting mode sub-controls.
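The correspondence between the sub-controls' display order and the shooting order can be sketched as a simple sort. The data structure and mode names below are hypothetical; the patent only states that the display order of the N sub-controls corresponds to the capture order.

```python
from dataclasses import dataclass

@dataclass
class ShootingModeSubControl:
    display_position: int   # slot in the shooting mode control, left to right
    mode: str               # e.g. "double_exposure" or "silhouette" (illustrative names)

def capture_order(sub_controls):
    """Derive the shooting order of the N captures from the display order
    of the N shooting mode sub-controls, since the two correspond."""
    ordered = sorted(sub_controls, key=lambda c: c.display_position)
    return [c.mode for c in ordered]
```

Updating a sub-control's display position (the fourth input, below) then changes the capture order on the next shot without any other state.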
Optionally, in an embodiment of the present application, the user input unit 107 is further configured to receive a fourth input of a user on the shooting mode control; and the processor 110 is configured to update the display position of at least one of the N shooting mode sub-controls in response to the fourth input received by the user input unit 107.
In the electronic device provided by the embodiment of the application, when a first image is displayed on the shooting preview interface, the device receives a first input of a user, shoots the preview picture of the shooting preview interface to obtain a second image, and then performs multiple exposure fusion processing on the first image and the second image based on the contour information of a target exposure object in a target image to obtain a third image, where the target image includes the first image or the second image. In this way, the electronic device can identify the target exposure object in an image and fuse the first image and the second image according to the contour information of that object, so that an image with a multiple exposure effect in the image area where the exposure object is located can be generated quickly and conveniently, thereby optimizing the shooting effect.
It should be appreciated that, in embodiments of the present application, the input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen, and may include two parts: a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 109 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 110 may integrate an application processor, which primarily handles the operating system, user interface, and applications, with a modem processor, which primarily handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 110.
The embodiment of the application also provides a readable storage medium on which a program or an instruction is stored; when executed by a processor, the program or instruction implements each process of the above shooting method embodiment and can achieve the same technical effects. To avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip, including a processor and a communication interface, where the communication interface is coupled with the processor and the processor is configured to run programs or instructions to implement the processes of the above shooting method embodiment and achieve the same technical effects. To avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-level chip, a chip system, or a system-on-chip.
Embodiments of the present application provide a computer program product stored in a non-volatile storage medium, where the program product is executed by at least one processor to implement the processes of the above shooting method embodiment and achieve the same technical effects.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises that element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general hardware platform, and of course may also be implemented by hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application, or the part of it contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, or optical disk), including instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods of the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above embodiments, which are merely illustrative and not restrictive. Enlightened by the present application, those of ordinary skill in the art may make many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.
Claims (13)
1. A photographing method, the method comprising:
displaying a first identifier and a second identifier, wherein the first identifier is used for indicating a first shooting mode, and the second identifier is used for indicating a second shooting mode; the first shooting mode is a double exposure mode, and the second shooting mode is a silhouette image mode;
receiving a third input of a user to the first identifier and the second identifier;
in response to the third input, entering a combined shooting mode in the case that the third input is an input of dragging the second identifier to the first identifier or an input of dragging the first identifier to the second identifier, and displaying a shooting mode control on a shooting preview interface, wherein the shooting mode control is used for indicating the shooting mode of N times of shooting; the shooting mode control comprises N shooting mode sub-controls;
controlling a camera to shoot N images according to the display sequence of the N shooting mode sub-controls, wherein the display sequence of the N shooting mode sub-controls corresponds to the shooting sequence of the shooting mode;
receiving a first input of a user in the case that the shooting preview interface includes a first image;
controlling the camera to shoot a second image in response to the first input;
performing multiple exposure fusion processing on the first image and the second image based on contour information of a target exposure object in the target image, and outputting a third image;
wherein the target image includes: the first image or the second image.
2. The method of claim 1, wherein the first image is: an image obtained after shooting the target exposure object in the shooting preview interface, wherein the first image comprises silhouette image information;
the multi-exposure fusion processing is performed on the first image and the second image based on the outline information of the target exposure object in the target image, and a third image is output, including:
determining a background image from the first image and the second image according to the shooting sequence of the first image and the second image;
And taking the background image as a background, performing multiple exposure fusion processing on the first image and the second image according to the contour information of the target exposure object, and outputting a third image.
3. The method according to claim 1, wherein the multiple exposure fusion processing is performed on the first image and the second image based on contour information of a target exposure object in the target image, and before outputting a third image, the method further comprises:
determining an exposure area in the target image;
the target exposure object is a shooting object in the exposure area.
4. A method according to claim 3, wherein said determining an exposure area in said target image comprises:
identifying a background image and a silhouette portrait in the target image in the case that the target image includes silhouette image information;
determining at least one first image area in the target image according to the image information of the background image and the silhouette portrait, and displaying at least one recommendation identifier, wherein the recommendation identifier is used for indicating one first image area;
receiving a second input of a user on a target recommendation identifier in the at least one recommendation identifier;
in response to the second input, determining the first image area indicated by the target recommendation identifier as an exposure area.
5. A method according to claim 3, wherein said determining an exposure area in said target image comprises:
receiving a sliding input of a user on the target image;
in response to the sliding input, a second image area surrounded by an input track of the sliding input is determined as an exposure area.
6. The method of claim 1, wherein after displaying the capture mode control, further comprising:
receiving a fourth input of a user to the shooting mode control;
in response to the fourth input, updating the display position of at least one shooting mode sub-control in the N shooting mode sub-controls.
7. A photographing apparatus, the apparatus comprising: a receiving module, a shooting module, a synthesis module, and a display module, wherein:
The receiving module is used for receiving a first input of a user under the condition that a first image is displayed on the shooting preview interface;
The shooting module is used for responding to the first input received by the receiving module and controlling a camera to shoot a second image;
the synthesis module is used for carrying out multiple exposure fusion processing on the first image and the second image based on the contour information of the target exposure object in the target image and outputting a third image;
Wherein the target image includes: the first image or the second image;
The display module is used for displaying a first identifier and a second identifier, wherein the first identifier is used for indicating a first shooting mode, and the second identifier is used for indicating a second shooting mode; the first shooting mode is a double exposure mode, and the second shooting mode is a silhouette image mode;
The receiving module is further configured to receive a third input for the first identifier and the second identifier;
The display module is further configured to, in response to the third input received by the receiving module, enter a combined shooting mode when the third input is an input for dragging the second identifier to the first identifier or an input for dragging the first identifier to the second identifier, and display a shooting mode control on a shooting preview interface, where the shooting mode control is used to indicate a shooting mode of N shots; the shooting mode controls comprise N shooting mode sub-controls;
The shooting module is further used for controlling the camera to shoot N images according to the display information of the N shooting mode sub-controls, wherein the display sequence of the N shooting mode sub-controls corresponds to the shooting sequence of the shooting modes.
8. The apparatus of claim 7, wherein the first image is: an image obtained after shooting the target exposure object in the shooting preview interface, wherein the first image comprises silhouette image information;
The apparatus further comprises: a determining module;
the determining module is used for determining a background image from the first image and the second image according to the shooting sequence of the first image and the second image;
The synthesis module is specifically configured to take the background image determined by the determining module as the background, perform multiple exposure fusion processing on the first image and the second image according to the contour information of the target exposure object, and output a third image.
9. The apparatus of claim 7, wherein the apparatus further comprises: a determining module;
the determining module is used for determining an exposure area in the target image;
the target exposure object is a shooting object in the exposure area.
10. The apparatus of claim 9, wherein:
The determining module is specifically configured to identify a background image and a silhouette image in a target image when the target image includes silhouette image information;
The determining module is specifically configured to determine at least one first image area in the target image according to the image information of the background image and the silhouette portrait;
The display module is further used for displaying at least one recommendation identifier, wherein the recommendation identifier is used for indicating one first image area;
The receiving module is further used for receiving a second input of a target recommendation identifier in the at least one recommendation identifier by a user;
The determining module is further configured to determine, in response to the second input, a first image area indicated by the target recommendation identifier as an exposure area.
11. The apparatus of claim 9, wherein:
The receiving module is specifically used for receiving sliding input of a user on the target image;
the determining module is specifically configured to determine, as an exposure area, a second image area surrounded by an input track of the sliding input in response to the sliding input received by the receiving module.
12. The apparatus of claim 7, wherein the apparatus further comprises: an updating module;
The receiving module is further used for receiving a fourth input of the shooting mode control by a user;
The updating module is used for updating the display position of at least one shooting mode sub-control in the N shooting mode sub-controls in response to the fourth input received by the receiving module.
13. An electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the shooting method as claimed in any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111442093.XA CN114143461B (en) | 2021-11-30 | 2021-11-30 | Shooting method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114143461A CN114143461A (en) | 2022-03-04 |
CN114143461B true CN114143461B (en) | 2024-04-26 |
Family
ID=80389778
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111442093.XA Active CN114143461B (en) | 2021-11-30 | 2021-11-30 | Shooting method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114143461B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115988312A (en) * | 2022-08-25 | 2023-04-18 | 维沃移动通信有限公司 | Shooting method and device, electronic equipment and storage medium |
CN115529413A (en) * | 2022-08-26 | 2022-12-27 | 华为技术有限公司 | Shooting method and related device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103293825A (en) * | 2013-06-26 | 2013-09-11 | 深圳市中兴移动通信有限公司 | Multiple exposure method and device |
CN105208288A (en) * | 2015-10-21 | 2015-12-30 | 维沃移动通信有限公司 | Photo taking method and mobile terminal |
CN106851125A (en) * | 2017-03-31 | 2017-06-13 | 努比亚技术有限公司 | A kind of mobile terminal and multiple-exposure image pickup method |
CN110611768A (en) * | 2019-09-27 | 2019-12-24 | 北京小米移动软件有限公司 | Multiple exposure photographic method and device |
CN111866388A (en) * | 2020-07-29 | 2020-10-30 | 努比亚技术有限公司 | Multiple exposure shooting method, equipment and computer readable storage medium |
CN113382169A (en) * | 2021-06-18 | 2021-09-10 | 荣耀终端有限公司 | Photographing method and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN114143461A (en) | 2022-03-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112492212B (en) | Photographing method and device, electronic equipment and storage medium | |
CN112135046B (en) | Video shooting method, video shooting device and electronic equipment | |
CN112135049B (en) | Image processing method and device and electronic equipment | |
JP2021145209A (en) | Electronic apparatus | |
CN114143461B (en) | Shooting method and device and electronic equipment | |
CN112738402B (en) | Shooting method, shooting device, electronic equipment and medium | |
CN112887617B (en) | Shooting method and device and electronic equipment | |
CN113794829B (en) | Shooting method and device and electronic equipment | |
CN112437232A (en) | Shooting method, shooting device, electronic equipment and readable storage medium | |
CN112532881B (en) | Image processing method and device and electronic equipment | |
CN112702531B (en) | Shooting method and device and electronic equipment | |
CN113329172B (en) | Shooting method and device and electronic equipment | |
CN112637515B (en) | Shooting method and device and electronic equipment | |
CN112333386A (en) | Shooting method and device and electronic equipment | |
CN112738403A (en) | Photographing method, photographing apparatus, electronic device, and medium | |
CN113794831B (en) | Video shooting method, device, electronic equipment and medium | |
CN113923368B (en) | Shooting method and device | |
CN112653841B (en) | Shooting method and device and electronic equipment | |
CN112291471B (en) | Shooting control method, shooting control device, electronic device and readable storage medium | |
CN114070998B (en) | Moon shooting method and device, electronic equipment and medium | |
CN111654623A (en) | Photographing method and device and electronic equipment | |
CN114040099B (en) | Image processing method and device and electronic equipment | |
CN112887621B (en) | Control method and electronic device | |
CN113923367B (en) | Shooting method and shooting device | |
CN116782022A (en) | Shooting method, shooting device, electronic equipment and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||