CN113905182A - Shooting method and equipment - Google Patents

Shooting method and equipment Download PDF

Info

Publication number
CN113905182A
Authority
CN
China
Prior art keywords
image
target
synthesized
user
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010576174.8A
Other languages
Chinese (zh)
Other versions
CN113905182B (en)
Inventor
丁匡正
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202010576174.8A priority Critical patent/CN113905182B/en
Publication of CN113905182A publication Critical patent/CN113905182A/en
Application granted granted Critical
Publication of CN113905182B publication Critical patent/CN113905182B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/667Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the application provides a shooting method and equipment, relates to the field of electronic technologies, and can set appropriate exposure parameters for the subjects a user is interested in, according to the user's intention or requirement, during the shooting and view-finding stage, so that the exposure of a target image synthesized from images shot with these exposure parameters better matches the user's intention or expectation, meeting the user's personalized shooting requirements. The scheme comprises the following steps: the electronic equipment enters a target shooting mode of a camera application; after operations of a user on a first target object and a second target object on a preview image are respectively detected, a first target exposure parameter and a second target exposure parameter are respectively determined; after a shooting operation of the user is detected, a target image is displayed; the target image is generated according to a first image to be synthesized and a second image to be synthesized; the first image to be synthesized is shot according to the first target exposure parameter, and the second image to be synthesized is shot according to the second target exposure parameter. The embodiment of the application is used in the shooting process.

Description

Shooting method and equipment
Technical Field
The embodiment of the application relates to the technical field of electronics, in particular to a shooting method and equipment.
Background
In a scene with a large light ratio (i.e., strong light-dark contrast), the latitude of a camera sensor is not as high as that of the human eye. As a result, if electronic equipment such as a mobile phone or a tablet computer exposes correctly for a bright object to be shot, a dark object to be shot will be seriously underexposed. Similarly, if the electronic device exposes correctly for a dark subject, a bright subject will be overexposed. Both underexposed and overexposed images severely lose details of the object.
In the prior art, in a scene with a large light ratio, a high-dynamic-range (HDR) mode is adopted for exposure adjustment and image synthesis. However, this approach easily leads to distortion of the synthesized image and is difficult to adjust afterwards. In addition, this mode can hardly meet users' increasingly diversified and personalized shooting requirements, resulting in poor user experience.
Disclosure of Invention
The embodiment of the application provides a shooting method and equipment, which can set appropriate exposure parameters for the subjects a user is interested in, according to the user's intention or requirement, during the shooting and view-finding stage, so that the exposure of a target image synthesized from images shot with these exposure parameters better matches the user's intention or expectation, meets the user's diversified and personalized shooting requirements, improves the image quality of the synthesized image, makes the shot image more natural, and improves user experience.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
in one aspect, an embodiment of the present application provides a shooting method, including: the electronic equipment enters a target shooting mode of the camera application and displays a preview interface; after detecting an operation of a user on a first target object on a preview image, the electronic equipment determines a first target exposure parameter according to the first target object; after detecting an operation of the user on a second target object on the preview image, the electronic equipment determines a second target exposure parameter according to the second target object; after the electronic equipment detects a shooting operation of the user, it displays a target image, wherein the target image is generated according to a first image to be synthesized and a second image to be synthesized; the first image to be synthesized is shot according to the first target exposure parameter, and the second image to be synthesized is shot according to the second target exposure parameter.
In this scheme, the electronic equipment can set appropriate exposure parameters for the target objects the user is interested in, according to the user's intention or requirement, on the preview interface of the target shooting mode, so that the exposure of the target image synthesized from images shot with these exposure parameters better matches the user's intention or expectation, meets the user's diversified and personalized shooting requirements, improves the image quality of the synthesized image, makes the shot image more natural, and improves user experience.
In one possible design, after the electronic device enters the target shooting mode of the camera application, the method further includes: the electronic device prompts the user to select a plurality of target objects.
In this way, the user can select the target object of which the exposure condition needs to be adjusted according to the prompt of the electronic equipment.
On the other hand, an embodiment of the present application provides a shooting method, including: the electronic equipment opens a camera application and displays a photographed preview interface; after detecting the operation of a user on a first target object on a preview image, the electronic equipment determines a first target exposure parameter according to the first target object; after detecting the operation of the user on a second target object on the preview image, the electronic equipment determines a second target exposure parameter according to the second target object; the electronic equipment enters a target shooting mode; after the electronic equipment detects the shooting operation of a user, displaying a target image, wherein the target image is generated according to a first image to be synthesized and a second image to be synthesized; the first image to be synthesized is obtained by shooting according to the first target exposure parameter, and the second image to be synthesized is obtained by shooting according to the second target exposure parameter.
According to the scheme, the electronic equipment can automatically enter the target shooting mode after detecting the user's operations on a plurality of target objects, and set appropriate exposure parameters for the target objects the user is interested in, according to the user's intention or requirement, so that the exposure of the target image synthesized from images shot with these exposure parameters better matches the user's intention or expectation, meets the user's diversified and personalized shooting requirements, improves the image quality of the synthesized image, makes the shot image more natural, and improves user experience.
In one possible design, entering the object capture mode by the electronic device includes: the electronic equipment prompts a user whether to enter a target shooting mode; the electronic device enters a target shooting mode in response to an instruction operation by a user.
In this way, the electronic equipment can determine whether to enter the target shooting mode according to the user's instruction, which avoids entering the target shooting mode automatically as a result of an accidental touch.
In one possible design, the method further includes: before the electronic equipment displays the target image, the first image to be synthesized and the second image to be synthesized are respectively displayed.
Therefore, by displaying the first image to be synthesized and the second image to be synthesized, the user can intuitively compare the difference between the target image and the image to be synthesized, and better user experience is obtained.
In one possible design, the method further includes: after determining a first target exposure parameter according to a first target object, the electronic equipment obtains one or more frames of first preview images according to the first target exposure parameter, and displays the first preview images on a preview interface; and after determining a second target exposure parameter according to a second target object, the electronic equipment obtains one or more frames of second preview images according to the second target exposure parameter, and displays the second preview images on a preview interface.
In this scheme, the first preview image or the second preview image is displayed on the preview interface, so that the user can intuitively determine whether the current first and second target exposure parameters meet his or her own intention or expectation, which makes it convenient for the user to adjust the exposure parameters as needed.
In one possible design, the electronic device determines a first target exposure parameter from a first target object, including: the electronic equipment automatically measures light according to the reflected light of the shot object corresponding to the first target object; the electronic device determines a first target exposure parameter based on the result of the automatic photometry.
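The patent does not disclose the metering algorithm itself. As a minimal sketch, region-based automatic photometry can be read as driving the mean luminance of the tapped region toward 18% middle gray; the function name, region format, and gray target below are illustrative assumptions, not the patent's.

```python
import numpy as np

def meter_region(luma, region, target_gray=0.18):
    """Estimate an exposure compensation (in EV stops) for a tapped region.

    luma:   2-D array of linear-light luminance values in [0, 1]
    region: (x, y, w, h) box around the tapped target object
    """
    x, y, w, h = region
    patch = luma[y:y + h, x:x + w]
    mean = float(patch.mean())
    # Stops of compensation needed to bring the region mean to middle gray.
    return float(np.log2(target_gray / max(mean, 1e-6)))
```

For example, a dark region with mean luminance 0.045 yields +2 EV of compensation, which would brighten the first subject by two stops.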
In one possible design, the method further includes: after the electronic equipment detects that a user operates a first target object on a preview image, displaying an exposure adjusting control on a preview interface; the electronic device adjusts the first target exposure parameter in response to user operation of the exposure adjustment control.
In one possible design, after detecting an operation of a user on a first target object on a preview image, the electronic device determines a first target exposure parameter according to the first target object, including: after the electronic equipment detects that a user operates a first target object on a preview image, displaying an exposure setting control corresponding to the first target object on a preview interface; the electronic device determines a first target exposure parameter in response to a user operation with respect to the exposure setting control. By displaying the exposure setting control, the user can conveniently adjust the exposure parameters.
In one possible design, the pixel value of a first subject on the target image is the same as the pixel value of a first subject on the first image to be synthesized, where the first subject is a subject corresponding to the first target object; the pixel value of a second main body on the target image is the same as the pixel value of a second main body on a second image to be synthesized, and the second main body is a main body corresponding to a second target object; and the pixel values of other pixel points except the first main body and the second main body on the target image are weighted average values of the pixel values of corresponding pixel points on the first image to be synthesized and the second image to be synthesized after registration.
In the scheme, the pixel values of the first main body and the second main body are set according to the will or the requirement of the user, so that the exposure condition of the synthesized target image can better meet the will or the requirement of the user, and the diversified and personalized shooting requirement of the user is met.
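As a rough illustration of this composition rule (subject pixels copied unchanged from their respective exposures, a weighted average elsewhere), the sketch below assumes already-registered float images and precomputed boolean subject masks; the patent specifies none of these programmatic details.

```python
import numpy as np

def compose_target(img1, img2, mask1, mask2, w1=0.5, w2=0.5):
    """Blend two registered exposures into the target image.

    img1, img2:   registered float arrays (H, W, C), shot with the first and
                  second target exposure parameters
    mask1, mask2: boolean (H, W) masks covering the first and second subjects
    w1, w2:       blend weights for pixels outside both subjects
    """
    out = w1 * img1 + w2 * img2       # weighted average of corresponding pixels
    out[mask1] = img1[mask1]          # first subject kept from the first image
    out[mask2] = img2[mask2]          # second subject kept from the second image
    return out
```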
In one possible design, the method further includes: the electronic equipment obtains a third target exposure parameter according to the first target exposure parameter and the second target exposure parameter; and the target image is generated according to the first image to be synthesized, the second image to be synthesized and the third image to be synthesized, and the third image to be synthesized is obtained by shooting according to the third target exposure parameter.
In this scheme, the electronic apparatus may synthesize the final target image from other images to be synthesized in addition to the first image to be synthesized and the second image to be synthesized.
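The patent does not state how the third target exposure parameter is obtained from the first two. One plausible reading, purely an assumption here, is an intermediate exposure halfway between the two selected exposure values:

```python
def third_exposure_ev(ev1, ev2):
    """Hypothetical derivation: the midpoint, in stops, of the two
    user-selected target exposure values."""
    return (ev1 + ev2) / 2.0
```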
In one possible design, the method further includes: and after the electronic equipment detects the shooting operation of the user, displaying a third image to be synthesized.
Therefore, the user can conveniently and intuitively compare the difference between the third image to be synthesized and other images to be synthesized and the target image.
In one possible design, the pixel value of a first subject on the target image is the same as the pixel value of the first subject on the first image to be synthesized, and the first subject is a subject corresponding to the first target object; the pixel value of a second main body on the target image is the same as the pixel value of a second main body on a second image to be synthesized, and the second main body is a main body corresponding to a second target object; the pixel values of other pixel points on the target image except the first main body and the second main body are the same as the pixel values of corresponding pixel points on the third image to be synthesized.
According to the scheme, a third target exposure parameter is obtained according to the first target exposure parameter and the second target exposure parameter, a third image to be synthesized is obtained according to the third target exposure parameter, and the pixel values of other pixel points on the target image except for the first main body and the second main body are the same as the pixel values of corresponding pixel points on the third image to be synthesized, so that more details in the image can be reserved, and the image quality of the target image is better. The pixel values of the first main body and the second main body are set according to the will or the requirement of the user, so that the exposure condition of the synthesized target image can better meet the will or the requirement of the user, and the diversified and personalized shooting requirement of the user is met.
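Under this three-image variant, the non-subject pixels come from the third image rather than from a weighted average. A sketch, again assuming registered float images and boolean subject masks that the patent does not specify programmatically:

```python
import numpy as np

def compose_with_base(img1, img2, img3, mask1, mask2):
    """Subjects keep their selected exposures; every other pixel is taken
    from the third image, shot with the third target exposure parameter."""
    out = img3.copy()
    out[mask1] = img1[mask1]   # first subject from the first image
    out[mask2] = img2[mask2]   # second subject from the second image
    return out
```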
In one possible design, the first target object is a first target subject, the first subject being the first target subject; or the first target object is a first target area, and the first main body is a main body where the first target area is located, or the first main body is a main body with the largest area ratio among a plurality of main bodies included in the first target area.
That is, the first target object may be a subject or may be a region; the first subject is a subject to which the first target object corresponds.
In one possible design, the operation on the first target object includes a single click operation, a double click operation, a pressure press operation, a long press operation, or an operation that circumscribes the first target object.
That is, the embodiment of the present application may select the first target object in various ways.
In one possible design, the operation on the first target object includes a double-click operation, a pressure press operation, or a long press operation on the first target object, or an operation of delineating the first target object.
That is, the embodiment of the present application may select the first target object in various ways.
In one possible design, the first target exposure parameter includes one or more of sensitivity ISO, aperture, shutter time, or exposure value EV.
That is, the first target exposure parameter may be characterized by parameters in multiple dimensions.
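For reference, the exposure value EV named here ties together the other listed parameters. In standard photographic practice (this formula is common knowledge, not stated in the patent), EV referenced to ISO 100 combines aperture, shutter time, and sensitivity:

```python
import math

def exposure_value(f_number, shutter_s, iso=100):
    """Standard EV referenced to ISO 100: EV = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)
```

For example, f/8 at 1/125 s and ISO 100 gives roughly EV 13; doubling the sensitivity to ISO 200 lowers the EV needed for the same scene by one stop.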
In another aspect, an embodiment of the present application provides an electronic device, including: a screen for displaying an interface; one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs including instructions that, when executed by the electronic device, cause the electronic device to perform the photographing method provided by the embodiment of the present application.
In still another aspect, the present application provides a computer-readable storage medium including computer instructions, which, when executed on a computer, cause the computer to execute the shooting method provided by the present application.
In still another aspect, the present application provides a computer program product, which when run on a computer, causes the computer to execute the shooting method provided by the present application.
Drawings
Fig. 1 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a shooting method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a set of display interfaces provided by an embodiment of the present application;
FIG. 4 is a schematic view of another display interface provided in an embodiment of the present application;
FIG. 5 is a schematic view of another display interface provided in an embodiment of the present application;
FIG. 6 is a schematic view of another set of display interfaces provided by embodiments of the present application;
FIG. 7 is a schematic view of another set of display interfaces provided by embodiments of the present application;
FIG. 8 is a schematic view of another display interface provided in an embodiment of the present application;
FIG. 9 is a schematic view of another set of display interfaces provided by embodiments of the present application;
FIG. 10 is a schematic diagram of a set of images provided by an embodiment of the present application;
FIG. 11 is a schematic view of another image provided by an embodiment of the present application;
FIG. 12 is a schematic view of another set of images provided by an embodiment of the present application;
FIG. 13 is a schematic view of another set of images provided in accordance with an embodiment of the present application;
fig. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments herein, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" herein merely describes an association between associated objects, and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more.
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present embodiment, "a plurality" means two or more unless otherwise specified.
The embodiment of the application provides a shooting method, which can be applied to electronic equipment, and can set a proper exposure parameter for a subject interested by a user according to the intention or the requirement of the user in a shooting preview state, so that the exposure condition of a target image synthesized by an image shot according to the exposure parameter can better meet the intention or the intention of the user, the diversified and personalized shooting requirement of the user can be met, the image quality of the synthesized image can be improved, the shot image is more natural, and the user experience is improved.
In the prior art, the HDR mode is usually adopted for exposure adjustment and image synthesis. In the HDR mode, multiple images are automatically shot with different exposure parameters and automatically synthesized, and the exposure parameters cannot be selected manually. As a result, the synthesized image is easily distorted, post-adjustment is complex, and it is difficult to meet users' diversified and personalized shooting requirements.
For example, the electronic device in the embodiment of the present application may be a mobile phone, a tablet computer, a wearable device, an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), or another mobile terminal, or may be a professional camera or another device.
Fig. 1 shows a schematic structural diagram of an electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, with N being a positive integer greater than 1. For example, the display screen 194 may be used to display a preview interface, a shooting interface, and the like in the professional HDR mode.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The camera 193 may also include various types. For example, the camera 193 may include a telephoto camera, a wide-angle camera, or an ultra-wide-angle camera, etc., having different focal lengths. The telephoto camera has a small field angle and is suitable for shooting a small-range scene at a distance; the wide-angle camera has a larger field angle; and the ultra-wide-angle camera has a field angle larger than that of the wide-angle camera and can be used for shooting a large-range picture such as a panorama. In some embodiments, the telephoto camera with the smaller field angle may be rotated so that scenes in different ranges can be photographed.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
In the embodiment of the present application, the photosensitive element may detect the magnitude of the light entering amount, and the ISP may perform photometry according to the magnitude of the light entering amount, thereby determining the target exposure parameter. The camera 193 may acquire an image according to target exposure parameters, thereby generating a preview image or an image to be synthesized.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
In an embodiment of the application, the processor 110 may execute instructions stored in the internal memory 121 to combine the images to be combined into a final target image in the professional HDR mode.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
In the embodiment of the present application, in the preview state, detection means such as the touch sensor 180K may be used to detect the operation of the user selecting a plurality of target objects. The photosensitive element can detect the amount of incoming light, and the ISP can perform automatic photometry according to the light amount corresponding to each target object, so as to determine the target exposure parameters. The camera 193 may be used to acquire images according to the target exposure parameters to generate images to be synthesized. The processor 110 may execute instructions stored in the internal memory 121 to synthesize the images to be synthesized into a final target image. The display screen 194 may be used to display interfaces such as the preview interface and the capture interface.
The shooting method provided by the embodiment of the application will be described below by taking the electronic device as a mobile phone and taking the screen of the mobile phone as the touch screen.
Referring to fig. 2, an embodiment of the present application provides a shooting method, including:
200. the handset enters the professional HDR mode of the camera application.
In some embodiments, the mobile phone may preset a professional HDR mode in the camera application and enter the professional HDR mode for shooting in response to the user's operation to enter the professional HDR mode.
For example, after detecting an operation of clicking the camera icon 301 shown in (a) of fig. 3 by the user, the mobile phone starts a camera application and enters a photographing mode shown in (b) of fig. 3. Illustratively, after detecting that the user clicks the professional HDR control 302 shown in (b) of fig. 3, the mobile phone enters the professional HDR mode, and displays a preview interface shown in (c) of fig. 3. And the mobile phone displays the preview image on the preview interface. As another example, after detecting that the user clicks the control 303 shown in (b) in fig. 3, the mobile phone displays an interface shown in (d) in fig. 3; after detecting the operation of clicking the control 304 by the user, the mobile phone enters the professional HDR mode, and displays a preview interface of the professional HDR mode as shown in (c) in fig. 3.
For another example, the mobile phone displays an interface of a desktop or non-camera application, enters the professional HDR mode after detecting a voice instruction of the user to enter the professional HDR mode, and displays a preview interface of the professional HDR mode as shown in (c) in fig. 3.
It should be noted that the mobile phone may also enter the professional HDR mode in response to other operations such as a user's touch operation, a voice instruction, or a shortcut gesture, and the embodiment of the present application does not limit the operation of triggering the mobile phone to enter the professional HDR mode.
It should be noted that the professional HDR mode may also have other names, for example, a manual HDR mode or an enhanced HDR mode, and the name is not limited in the embodiments of the present application.
In some embodiments, after the mobile phone enters the professional HDR mode, the user may be prompted with the functions of the professional HDR mode so that the user can better use the mode to take pictures. Illustratively, the mobile phone may prompt the user that, in the professional HDR mode, the user can select a plurality of objects of interest; the mobile phone determines appropriate exposure parameters according to each object selected by the user, captures an image to be synthesized according to each set of exposure parameters, and then synthesizes, from the images to be synthesized, a target image whose exposure meets the user's intent.
201. The mobile phone generates target exposure parameters according to a target object, wherein the target object comprises a target main body or a target area.
After the mobile phone enters the professional HDR mode, a preview interface of the professional HDR mode is displayed in a preview state, and a preview image is presented in the preview interface. The mobile phone can perform photometry according to the light incoming quantity of light rays reflected to the photosensitive sensor of the camera by an object in the field angle range of the camera in the current shooting scene by adopting a default photometry mode, so as to calculate exposure parameters, acquire an image based on the exposure parameters, generate a preview image according to the acquired image and display the preview image on a preview interface.
For example, the default metering mode may be global metering, central partial metering, spot metering, or the like; the default metering mode is not limited in the embodiment of the present application. The default metering mode may be set by the user; if the user does not set one, the default metering mode is global metering.
The amount of light entering refers to the total amount of light that is reflected by the subject (or subject) and enters the photosensor through the lens. For example, when the default metering mode is the global metering mode, the mobile phone may automatically calculate the global exposure parameter according to the total light entering amount of all the photographed objects within the field angle range in the current photographing scene, so as to acquire an image based on the global exposure parameter, generate a preview image according to the acquired image, and display the preview image on a preview interface; for another example, when the default metering mode is the central partial metering mode, the mobile phone may automatically calculate an exposure parameter according to the light entering amount of the photographed object in the shooting scene corresponding to the preset middle area on the preview image, acquire an image based on the exposure parameter, generate a preview image according to the acquired image, and display the preview image on the preview interface. In the following embodiments of the present application, a default metering method is described as an example of global metering.
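As a rough illustration of how these metering modes differ, the following sketch averages scene luminance over different pixel regions. The window sizes and the function itself are illustrative assumptions, not the handset's actual metering algorithm:

```python
def meter(luma, mode="global"):
    """Average scene luminance under a given metering mode.

    luma: 2-D list of per-pixel luminance values (0-255).
    mode: "global" averages every pixel; "center" averages only a
          centered window covering the middle half of the frame;
          "spot" samples a small patch around the frame center.
    The window sizes are illustrative assumptions.
    """
    h, w = len(luma), len(luma[0])
    if mode == "global":
        rows, cols = range(h), range(w)
    elif mode == "center":
        rows = range(h // 4, 3 * h // 4)
        cols = range(w // 4, 3 * w // 4)
    elif mode == "spot":
        rows = range(h // 2 - 1, h // 2 + 1)
        cols = range(w // 2 - 1, w // 2 + 1)
    else:
        raise ValueError(mode)
    vals = [luma[r][c] for r in rows for c in cols]
    return sum(vals) / len(vals)
```

Spot metering on a backlit subject in the frame center would thus return a very different value than global metering of the same frame, which is why the resulting exposure parameters differ.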
After the mobile phone enters the professional HDR mode, the user can select a target object on the preview interface. In some embodiments, the mobile phone may prompt the user to select a plurality of target objects by displaying information or by voice broadcast. After the mobile phone detects the operation of the user selecting a target object, it determines target exposure parameters according to the target object and stores them. The target exposure parameters are parameters that enable the target object on an image to be correctly exposed, and may include any parameters capable of reflecting the degree of image exposure, for example aperture, sensitivity (ISO), shutter time, and the like. The embodiment of the present application does not limit the specific parameters included in the target exposure parameters. After entering the professional HDR mode, the user may select target objects multiple times on the preview interface, where each target object may correspond to a set of target exposure parameters.
It should be noted that, a user may select any number of target objects greater than 1 according to a requirement, and the number of target objects selected by the user is not limited in the embodiment of the present application. The following description will be given taking an example in which the user selects two target objects in the preview state.
For example, after entering the professional HDR mode, the user selects a first target object in the preview image, and the mobile phone performs automatic photometry according to the first target object selected by the user. That is, the mobile phone automatically meters according to the amount of light reflected to the photosensitive sensor by the object, in the shooting scene, within the field angle range corresponding to the first target object on the preview image. The mobile phone generates and stores a first target exposure parameter according to the automatic photometry result. Then, the user selects a second target object on the image displayed on the preview interface; the mobile phone performs automatic photometry according to the second target object, and generates and stores a second target exposure parameter according to the automatic photometry result.
In the embodiment of the present application, if the target object is bright (for example, the sky, white wrapping paper, etc.), the amount of light reflected by the target object into the camera is large, and in order to prevent overexposure of the target object, the light-incoming amount needs to be reduced. Therefore, when the mobile phone determines through automatic photometry that the light-incoming amount corresponding to the target object is large, indicating that the target object is bright, the exposure value corresponding to the target exposure parameter set by the mobile phone for the target object may be small; for example, the corresponding aperture F value is larger, the shutter time is shorter, or the ISO value is lower. In this way, the target object is unlikely to be overexposed in an image shot by the mobile phone according to the target exposure parameter. Conversely, if the target object is relatively dark (for example, a human face in a backlight, a gray mask, etc.), the amount of light reflected by the target object into the camera is small, and the light-incoming amount needs to be increased in order to prevent the target object from being underexposed. Therefore, when the mobile phone determines through automatic photometry that the light-incoming amount corresponding to the target object is small, indicating that the target object is dark, the exposure value corresponding to the target exposure parameter set by the mobile phone for the target object may be large; for example, the corresponding aperture F value is smaller, the shutter time is longer, or the ISO value is higher. In this way, the target object is unlikely to be underexposed in an image shot by the mobile phone according to the target exposure parameter.
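The inverse relationship described above, in which a larger metered light amount yields a smaller exposure value, can be sketched as follows. The thresholds and the three preset parameter sets are illustrative assumptions, not values used by the mobile phone:

```python
def pick_exposure(light_amount, lo=80, hi=170):
    """Map a metered light amount (0-255) to a coarse exposure setting.

    A bright target (large light_amount) gets a smaller exposure value:
    larger aperture F-number, shorter shutter, lower ISO.  A dark
    target gets the opposite.  The thresholds and the three presets
    below are illustrative assumptions.
    """
    if light_amount > hi:      # bright target: avoid overexposure
        return {"aperture_f": 2.8, "iso": 100, "shutter_s": 1 / 120}
    if light_amount < lo:      # dark target: avoid underexposure
        return {"aperture_f": 1.4, "iso": 400, "shutter_s": 1 / 30}
    return {"aperture_f": 1.8, "iso": 200, "shutter_s": 1 / 60}
```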
For example, in the backlit portrait shooting scene shown in fig. 4, if the first target object selected by the user is the sky 401 in fig. 4, the sky 401 includes white clouds and the sun and is relatively bright, so the exposure value corresponding to the first target exposure parameter generated by the mobile phone for this target object is small, for example, an aperture F value of 1.8, an ISO value of 200, and a shutter time of 1/50 seconds. If the second target object selected by the user is the person 402 in fig. 4, since the sun is within the current field angle, the person 402 is backlit and dark, and therefore the exposure value corresponding to the second target exposure parameter generated by the mobile phone for this target object is larger, for example, an aperture F value of 1.4, an ISO value of 300, and a shutter time of 1/30 seconds.
In an embodiment of the present application, the target object may be a target subject or a target area. For example, when a user wants to adjust or control the exposure degree, exposure condition, or exposure parameter of a certain subject, the subject may be selected as a target object, or an area may be selected as a target object on an image of the subject. For another example, when the user wants to adjust the exposure degree, the exposure condition, or the exposure parameter of a certain area, the area may be selected as the target object. The mobile phone automatically photometers according to the target object to obtain matched target exposure parameters. Therefore, the target exposure parameter matches the target object selected by the user, and is an exposure parameter that meets the user's exposure will.
For example, the target object may be a target subject, such as person 402 in FIG. 4. When the target object is a target body, the mobile phone performs panoramic segmentation on the preview image in a preview state, and the mobile phone determines the target body of which the exposure parameter needs to be adjusted according to the selection operation of the user on the preview image and the panoramic segmentation result. In this case, the mobile phone performs automatic photometry according to the area where the target main body is located, so as to obtain a corresponding target exposure parameter according to an automatic photometry result.
As another example, the target object may be a target area. For example, the target area may include only a part of an object; in this case, the mobile phone may perform automatic photometry according to the whole object, so as to obtain the corresponding target exposure parameter from the automatic photometry result, or the mobile phone may perform automatic photometry according to the part of the object within the target area. For another example, the target area may include a plurality of objects; in this case, the mobile phone performs automatic photometry according to the main object among the plurality of objects in the target area, so as to obtain the corresponding target exposure parameter from the automatic photometry result. Specifically, the mobile phone performs automatic photometry according to the portion of the main object within the target area. For example, the main object may be the object with the largest area ratio in the target region, the most central object in the target region, or a completely displayed object in the target region.
After the mobile phone generates the target exposure parameters according to the target object, images can be collected according to the target exposure parameters, and preview images are generated and displayed according to the collected images.
For example, after the user selects the first target object, the mobile phone generates a first target exposure parameter according to the first target object, and acquires an image according to the first target exposure parameter to generate and display a first preview image. If the shooting scene is not switched, the mobile phone can continuously acquire images according to the first target exposure parameter so as to generate and display the first preview image. When the user selects a second target object, the mobile phone generates a second target exposure parameter according to the second target object and acquires an image according to the second target exposure parameter, so as to generate and display a second preview image.
In some embodiments, when the target object is selected in different manners, the display manner of the preview image obtained by the mobile phone according to the target exposure parameter also differs. For example, after the user selects a first target object in a first preset manner (e.g., clicks the first target object), the mobile phone generates a first target exposure parameter according to the first target object, and acquires one frame of image according to the first target exposure parameter to generate and display one frame of first preview image. Thereafter, the mobile phone acquires images according to the global exposure parameters, so as to generate and display a preview image. After the user selects a second target object in the first preset manner (e.g., clicks the second target object), the mobile phone generates a second target exposure parameter according to the second target object, and acquires one frame of image according to the second target exposure parameter so as to generate and display one frame of second preview image. Thereafter, the mobile phone again acquires images according to the global exposure parameters, so as to generate and display a preview image.
For another example, after the user selects the first target object, the mobile phone generates the first target exposure parameter according to the first target object. The mobile phone continuously acquires images according to the first target exposure parameter within a preset time (for example, 1 s) after the user selects the first target object, so as to generate and display a first preview image. If the operation of the user selecting a second target object is not detected within the preset time, the mobile phone acquires images according to the global exposure parameters after the preset time, so as to generate and display a preview image, until the operation of the user selecting the second target object is detected, whereupon the mobile phone generates and displays a second preview image according to the second target exposure parameter. If the mobile phone detects that the user selects the second target object within the preset time after the first target object is selected, the mobile phone generates a second target exposure parameter according to the second target object, and generates and displays a second preview image according to the second target exposure parameter.
For another example, after the user selects the first target object in the second preset mode (for example, long-time pressing of the first target object), the mobile phone enters the lock mode. In this way, after the user selects the first target object, a corresponding first preview image is presented on the preview interface; the corresponding second preview image is not presented on the preview interface until after the user selects the second target object.
In addition, the user may select the target object in the preview image by a manner of circling the target object by double-clicking, pressing, drawing a circle, drawing a box, or the like, by a manner of voice indication (for example, voice indication selects the middle-most character as the target object), or by another manner.
As shown in fig. 5, in some embodiments, when the user selects a certain position of the preview image by clicking or double-clicking, the mobile phone expands a preset number of pixels outwards with the clicked position as the center, and the formed area is the target area. For example, a region that expands 5 pixels outward, centered on the click, is determined as the target region 502 selected by the user. In other embodiments, when the user selects the target area in the preview image by circling the area, such as by drawing a circle or a box, the selected target area is the area circled by the user or circled within the box. For example, when the user draws a box, the box is the target area 501 selected by the user. In other embodiments, when the user clicks a certain position, the subject to which the position belongs is the target object. For example, when the user clicks on the position of the deer, the deer is the target object 503.
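The tap-to-region rule described above (expanding a preset number of pixels outward from the tapped point) can be sketched as follows; clamping at the image edges is added here as an assumption, since the text does not address taps near the border:

```python
def region_from_tap(x, y, width, height, radius=5):
    """Target region formed by expanding a tapped point outward.

    Expands `radius` pixels in each direction from the tapped pixel
    (the example in the text uses 5) and clamps the result to the
    image bounds.  Returns (left, top, right, bottom), with right
    and bottom exclusive.
    """
    left = max(0, x - radius)
    top = max(0, y - radius)
    right = min(width, x + radius + 1)
    bottom = min(height, y + radius + 1)
    return left, top, right, bottom
```

A box drawn by the user would bypass this expansion entirely: the drawn rectangle itself is taken as the target area.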
In other embodiments, the mobile phone may further prompt the first target exposure parameter and the second target exposure parameter to the user through displaying a prompt message, voice broadcasting, or other manners, so that the user can more intuitively correspond each target exposure parameter to the preview image, and the user can conveniently know the numerical condition of the target exposure parameter generated after selecting the target object. Illustratively, as shown in fig. 6 (a), after the user selects the target area 501, the mobile phone displays a first target exposure parameter 601 on the first preview image. As shown in fig. 6 (b), after the user selects the target area 502, the mobile phone displays the second target exposure parameter 602 on the second preview image. The method for prompting the target exposure parameter for the user is not limited in the embodiment of the application.
In other embodiments, the user may also manually adjust the target exposure parameters according to personal needs, will, experience, or preferences, so as to generate target exposure parameters that better meet the user's will, and improve the user experience.
For example, in the preview interface, when the user selects a first target object in the preview image, the mobile phone may automatically calculate the first target exposure parameter according to the light-incoming amount of the first target object in the manner described above. The mobile phone acquires a first preview image according to the first target exposure parameter and presents the first preview image on the preview interface. Moreover, an exposure adjustment control is displayed on the preview interface, and the user can adjust the first target exposure parameter currently calculated by the mobile phone according to the user's own needs, will, experience, or preferences, so as to obtain an adjusted new first target exposure parameter. The first target exposure parameters include aperture, ISO, and shutter time.
For example, as shown in (a) of fig. 7, the exposure adjustment control includes adjustment controls corresponding to parameters of aperture, ISO, and shutter time, respectively. The user may directly input the parameter value in the adjustment control corresponding to each parameter to adjust the first target exposure parameter, or may select the parameter value in the pull-down menu of the adjustment control corresponding to each parameter to adjust the first target exposure parameter. In addition, the user can adjust the values of the parameters through the sliding scroll bars corresponding to the parameters of the aperture, the ISO and the shutter time, so as to adjust the first target exposure parameter. For another example, the first target exposure parameter may further include an exposure value EV, as shown in (b) of fig. 7, the exposure adjustment control is an adjustment rod corresponding to the exposure value, the user may adjust a gear of the exposure value by sliding the adjustment rod, and the mobile phone determines the size of the corresponding group of exposure parameters according to the adjusted gear of the exposure value. The method for adjusting the first target exposure parameter is not limited in the embodiments of the present application.
In other embodiments, after the user selects the target object in the preview image, the user can directly set the target exposure parameter according to personal needs, will, experience, or preference, without the need of the mobile phone to automatically photometry according to the target object so as to calculate the target exposure parameter.
For example, in the preview interface, after the user selects the first target object in the preview image, the mobile phone may display an exposure setting control on the preview interface, and the user may set the first target exposure parameter through the exposure setting control. Illustratively, as shown in fig. 8, the exposure setting control includes a slide scroll bar corresponding to each of the parameters of aperture, ISO, and shutter time. The user can set each parameter value by sliding the sliding scroll bar corresponding to each parameter, thereby setting the first target exposure parameter. In addition, the user may directly input the parameter value in the setting control corresponding to each parameter to set the first target exposure parameter, or may select the parameter value in the pull-down menu of the setting control corresponding to each parameter to set the first target exposure parameter. The embodiment of the present application does not limit the manner of setting the first target exposure parameter.
When the user selects the second target object in the preview image, the user may adjust or set the second target exposure parameter for the second target area according to his own needs, experience, will, or taste. The process of adjusting or setting the second target exposure parameter by the user is similar to the process of adjusting or setting the first target exposure parameter, and is not described herein again.
In the embodiment of the application, in response to a selection operation of a user on different target objects, the mobile phone may automatically generate target exposure parameters corresponding to each target object for each target object, or the user may manually set the target exposure parameters corresponding to each target object for each target object. The mobile phone can display the preview images generated according to the exposure parameters of the targets in the preview interface, so that the user can visually see the effect of each preview image, and the user can conveniently compare the exposure effects of the preview images before and after the exposure parameters are adjusted according to the target objects.
In some other embodiments, the first preview image is not a preview image directly generated from an image acquired according to the first target exposure parameter, but is a synthesized image of images respectively obtained according to the global exposure parameter and the first target exposure parameter. For example, the mobile phone acquires the image 1 by using the first target exposure parameter, acquires the image 2 by using the global exposure parameter, and replaces the pixels of the first target object in the image 2 with the pixels of the first target object in the image 1, thereby generating the first preview image. That is, the image of the first target object on the first preview image is obtained according to the first target exposure parameter, and the image other than the first target object is obtained according to the global exposure parameter. The first preview image generated and displayed in the mode can enable the user to visually compare the change situation of the exposure degree in the first target object before and after the user selects the first target object so as to set the target exposure parameter, and is convenient for the user to determine whether the first target object shot by adopting the first target exposure parameter meets the desire and intention of the user. Similar to the first preview image, the second preview image may also be an image obtained from the second target exposure parameter and the global exposure parameter.
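The pixel-replacement synthesis described in this paragraph can be sketched as follows. The boolean-mask representation of the first target object is an assumption for illustration; the text only states that the pixels of the first target object in image 2 are replaced with the corresponding pixels from image 1:

```python
def composite_preview(global_img, target_img, mask):
    """Preview composed by per-pixel replacement.

    global_img: 2-D list of pixels captured with the global exposure
                parameter (image 2 in the text).
    target_img: 2-D list of pixels captured with the target exposure
                parameter (image 1 in the text), same shape.
    mask:       2-D list of booleans, True where the target object lies
                (an assumed representation).
    Pixels inside the mask come from the target-exposure capture; all
    other pixels keep the global-exposure capture.
    """
    return [
        [t if m else g for g, t, m in zip(grow, trow, mrow)]
        for grow, trow, mrow in zip(global_img, target_img, mask)
    ]
```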
In other embodiments of the present application, after the mobile phone enters the photographing mode, the professional HDR mode may be automatically started according to an operation of a user selecting a plurality of target objects on a preview image of the photographing mode. For example, in a photographing mode as shown in (a) in fig. 9, the mobile phone displays a preview image of a photographic subject on a preview interface; the mobile phone detects that a user selects a first target object 901 in a preview image; when the mobile phone detects that the user selects the second target object 902 in the preview image within the preset time again, as shown in (b) in fig. 9, the mobile phone automatically enters the professional HDR mode, and displays a preview interface of the professional HDR mode as shown in (d) in fig. 9.
In this case, after detecting the operation of the user selecting a plurality of target objects, the mobile phone generates and stores corresponding target exposure parameters according to the selected target objects. After detecting the shooting operation of the user, the mobile phone captures images using the target exposure parameters and synthesizes a target image that meets the will and intention of the user.
In the embodiment of the application, the mobile phone can automatically recognize that the user wants to take a picture in the professional HDR mode by detecting that the user continuously selects the target object for multiple times. Therefore, the mobile phone generates respective corresponding target exposure parameters according to a plurality of target objects selected by the user, so as to shoot and synthesize a target image which meets the desire and intention of the user.
In some embodiments, the handset may also prompt the user on a preview interface of the photo mode whether to initiate the professional HDR mode before automatically initiating the professional HDR mode. For example, the handset may display a prompt box as shown in (c) of fig. 9 to prompt the user whether to enter the professional HDR mode. After the user selects "yes", the handset automatically starts the professional HDR mode and displays a preview interface of the professional HDR mode as shown in (d) of fig. 9.
It should be noted that, in the embodiment of the present application, a user may select any number of target objects greater than 1 according to a requirement, and the number of target objects selected by the user is not limited in the embodiment of the present application. The user may select the region in the preview image by a method of circling the target object by double-click, pressure press, long press, circle or square drawing, or other methods.
202. And after detecting the shooting operation of the user, the mobile phone shoots the image to be synthesized according to the target exposure parameters.
For example, after detecting that the user clicks the shooting control 1001 shown in fig. 10 (a), the mobile phone shoots a first image according to the first target exposure parameter for storage, and shoots a second image according to the second target exposure parameter for storage. That is, the image to be synthesized includes the first image and the second image. It can be understood that the mobile phone may also perform shooting in response to other operations of the user, such as a touch operation, a voice instruction, or a shortcut gesture, and the specific form of the shooting operation is not limited in the embodiment of the present application.
For example, when the first target object is the target area 501, the first target object is a part of the sky; to prevent overexposure of the first target object, the light-incoming amount needs to be reduced, so the exposure value corresponding to the first target exposure parameter determined by the mobile phone through automatic photometry is low. Thus, as shown in (b) of fig. 10, the first image captured with the first target exposure parameter 601 is dark. When the second target object is the target area 502, since the second target object is a part of a person, the light-incoming amount needs to be increased to prevent the second target object from being underexposed, so the exposure value corresponding to the second target exposure parameter determined by the mobile phone through automatic photometry is high. Thus, as shown in (c) of fig. 10, the second image captured with the second target exposure parameter 602 is brighter. Subsequently, the mobile phone can synthesize a final target image from the first image and the second image.
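The text does not specify how the dark first image and the bright second image are merged; one common family of approaches weights each pixel by how well-exposed it is in its source frame. A minimal sketch of that idea, with a weighting function that is an assumption rather than the handset's algorithm:

```python
def fuse(dark, bright, mid=128.0):
    """Blend a darker and a brighter capture per pixel.

    dark, bright: 2-D lists of pixel values (0-255), same shape.
    Each pixel is weighted by how close it sits to mid-gray in its
    source frame, a simple "well-exposedness" heuristic.  This is an
    illustrative assumption; the actual synthesis is unspecified.
    """
    def well_exposed(v):
        return 1.0 / (1.0 + abs(v - mid))

    out = []
    for drow, brow in zip(dark, bright):
        row = []
        for d, b in zip(drow, brow):
            wd, wb = well_exposed(d), well_exposed(b)
            row.append((wd * d + wb * b) / (wd + wb))
        out.append(row)
    return out
```

Under this heuristic, a region that is crushed to black in the dark frame but readable in the bright frame is dominated by the bright frame's pixels, and vice versa.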
In the embodiment of the application, the number of images shot after the mobile phone detects the shooting operation of the user is consistent with the number of target objects selected in the previous step, that is, with the number of target exposure parameters generated in the previous step. The number of shot images is not limited in the embodiment of the application.
During shooting, the mobile phone continuously shoots the plurality of images. It should be noted that continuous shooting here means that the mobile phone shoots at its fastest burst speed, which is determined by hardware and may reach, for example, several tens of shots per second. Taking two target objects as an example, the mobile phone continuously captures the first image and the second image, and the interval between the two captures is so short that it can be ignored. Therefore, the objects in the first image and the second image can be considered to be substantially consistent, meaning that the number, content, and positions of the objects included in the two images substantially match. This avoids, as much as possible, ghosting, blurring, and similar phenomena caused by positional shifts of objects when the first image and the second image are later synthesized.
In some embodiments, the mobile phone may further obtain a third target exposure parameter by computing a weighted average of the first target exposure parameter and the second target exposure parameter. The mobile phone shoots a third image according to the third target exposure parameter and stores it. That is, the images to be synthesized include the first image, the second image, and the third image, and the mobile phone can subsequently synthesize a final target image from these three images. It should be noted that the mobile phone may generate the third target exposure parameter either before or after detecting the shooting operation of the user.
For example, the first target exposure parameter is: aperture F value of 1.8, ISO of 200, and shutter time of 1/50 second; the second target exposure parameter is: aperture F value of 1.4, ISO of 300, and shutter time of 1/30 second. When the weights corresponding to the first target exposure parameter and the second target exposure parameter are equal, the mobile phone obtains the third target exposure parameter by calculating the weighted average of the two: aperture F value of 1.6, ISO of 250, and shutter time of 1/40 second. The mobile phone shoots the third image according to the third target exposure parameter and stores it. The weights in the weighted average may be set according to actual requirements and are not limited in the embodiment of the present application; for example, the weights corresponding to the first target exposure parameter and the second target exposure parameter may be 0.5 and 0.5, 0.2 and 0.8, 0.4 and 0.6, and so on.
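The weighted average above can be sketched as follows. This is an illustrative sketch, not the actual device implementation: it averages each parameter independently, treating the shutter time by its denominator (so 1/50 and 1/30 blend to 1/40, reproducing the worked example); a real camera pipeline might instead interpolate in EV space.

```python
def blend_exposure(p1, p2, w1=0.5, w2=0.5):
    # Element-wise weighted average of two exposure parameter triples.
    # Each triple is (f_number, iso, shutter_denominator), e.g.
    # (1.8, 200, 50) for F1.8, ISO 200, 1/50 s.
    return tuple(w1 * a + w2 * b for a, b in zip(p1, p2))

first = (1.8, 200, 50)    # first target exposure parameter
second = (1.4, 300, 30)   # second target exposure parameter
third = blend_exposure(first, second)
print(third)  # F1.6, ISO 250, shutter 1/40 s
```

With unequal weights such as 0.2 and 0.8, the same function yields a third parameter biased toward the second target exposure parameter.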
Illustratively, when the mobile phone obtains the third target exposure parameter and captures the third image, it continuously captures the first image, the second image, and the third image during shooting, and the intervals between the three captures can be ignored. Therefore, the objects in the first image, the second image, and the third image can be considered to be substantially consistent, meaning that the number, content, and positions of the objects included in the three images substantially match. This avoids, as much as possible, ghosting, blurring, and similar phenomena caused by positional shifts of objects when the images are later synthesized.
203. And the mobile phone synthesizes a target image according to the image to be synthesized.
After the image to be synthesized is shot according to the target exposure parameters, the mobile phone can synthesize the shot image to be synthesized to generate a final target image.
In the image synthesis process, the mobile phone first performs panoramic segmentation on each image to be synthesized to obtain a segmentation map of that image. In panoramic segmentation, all objects in an image, including the background, are segmented so that the regions where different objects are located can be distinguished. Illustratively, as shown in fig. 11, a segmentation map of an image to be synthesized includes segmented object 1 (sky), object 2 (mountain), object 3 (tree), object 4 (person), object 5 (animal), and the like. For example, the segmentation map may be a mask map, in which the regions where different objects are located are distinguished by different colors.
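A mask map of this kind can be represented as an integer label image. The sketch below uses a hypothetical hand-written 3×3 label map; in practice the map would come from a panoptic segmentation network, and the object ids here merely echo the objects named for fig. 11.

```python
import numpy as np

# Hypothetical stand-in for a panoramic segmentation result: each pixel
# of the mask map holds an integer object id (1 = sky, 2 = mountain,
# 4 = person, matching the objects listed for fig. 11).
seg_map = np.array([[1, 1, 2],
                    [1, 4, 2],
                    [4, 4, 2]])

# The region occupied by one object is recovered as a boolean mask by
# comparing the label map against that object's id.
person_mask = (seg_map == 4)
print(int(person_mask.sum()))  # number of pixels belonging to the person
```

Such boolean masks are what the later composition steps use to copy a subject's pixels from one image to be synthesized into another.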
There are various schemes for obtaining the target image by the mobile phone according to the stored image to be synthesized, which will be described below.
The first scheme is as follows:
the images to be synthesized include the first image, the second image, and the third image, and the mobile phone synthesizes the stored first image, second image, and third image to obtain the target image. After performing panoramic segmentation on the first image, the second image, and the third image respectively, the mobile phone obtains a corresponding first segmentation map, second segmentation map, and third segmentation map. The mobile phone determines a first subject according to the first target object and a second subject according to the second target object.
The first subject is determined based on the first target object. If the first target object is a target subject, the first subject is the first target object; if the first target object is a target area, the first subject is the subject corresponding to that target area. In some embodiments, in the preview state, after determining the first target area, the mobile phone may determine the subject corresponding to the target area, and that subject is the first subject. The subject corresponding to the target area is the subject in which the target area is located, or the subject occupying the largest proportion of the target area.
In other embodiments, in the preview state, the mobile phone may save the coordinates of the target area; after obtaining the images to be synthesized, the mobile phone may determine the subject corresponding to the target area according to those coordinates, and that subject is the first subject. The subject corresponding to the target area is the subject in which the target area is located, or the subject occupying the largest proportion of the target area.
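Picking the subject that occupies the largest proportion of a selected area can be sketched with the label-map representation. The label map and area mask below are hypothetical toy data; a real implementation would use the segmentation map and the saved target-area coordinates.

```python
import numpy as np

def subject_for_area(seg_map, area_mask):
    # Count how many pixels of each object id fall inside the selected
    # area, and return the id with the largest count: the subject that
    # occupies the largest proportion of the target area.
    ids, counts = np.unique(seg_map[area_mask], return_counts=True)
    return int(ids[np.argmax(counts)])

seg_map = np.array([[1, 1, 2],
                    [1, 4, 2],
                    [4, 4, 2]])       # toy label map (4 = person)
area = np.array([[False, False, False],
                 [False, True,  False],
                 [True,  True,  False]])  # user-selected target area
print(subject_for_area(seg_map, area))   # 4: the person fills the area
```

The returned id then identifies the whole subject whose pixels are replaced during synthesis, even when the selected area covers only part of it.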
The pixel points of the first subject on the first image, the pixel points of the second subject on the second image, and the other pixel points on the third image except those of the first subject and the second subject can be synthesized into the target image. Illustratively, (a) in fig. 12 is the first image captured with the first target exposure parameter 601, (b) in fig. 12 is the second image captured with the second target exposure parameter 602, and (c) in fig. 12 is the third image captured with the third target exposure parameter. According to the position of the first subject in the first segmentation map, the mobile phone extracts the pixel points of the first subject 1201 at that position on the first image as shown in (a) of fig. 12. According to the position of the second subject in the second segmentation map, the mobile phone extracts the pixel points of the second subject 1202 at that position on the second image as shown in (b) of fig. 12. According to the position of the first subject in the third segmentation map, the mobile phone replaces the pixel points of the first subject at that position on the third image shown in (c) of fig. 12 with the pixel points of the first subject 1201 from the first image. According to the position of the second subject in the third segmentation map, the mobile phone replaces the pixel points of the second subject at that position on the third image with the pixel points of the second subject 1202 from the second image. The mobile phone retains the pixel points of the third image shown in (c) of fig. 12 other than those of the first subject and the second subject, thereby synthesizing the target image.
That is, the target image is formed by replacing, in the third image, the pixel points of the first subject with the pixel points of the first subject from the first image, and the pixel points of the second subject with the pixel points of the second subject from the second image.
It should be noted that if the target object selected by the user is a target area and the target area is only a part of a certain subject, the pixel points of the whole subject are replaced, rather than only the pixel points in the target area. This avoids image tearing or blurring caused by markedly different exposure of different areas within the same subject, so that the finally synthesized target image is more natural. Moreover, the reflection and exposure conditions in different areas of the same subject are generally similar, so the target exposure parameter determined for the previously selected target area can be applied to the whole subject, and the whole subject can be replaced during image synthesis.
That is, the pixel value of the first subject corresponding to the first target object on the target image is the same as the pixel value of the first subject on the first image. The pixel value of the second subject corresponding to the second target object on the target image is the same as the pixel value of the second subject on the second image. The pixel values of other pixel points on the target image except the first main body and the second main body are the same as the pixel values of corresponding pixel points on the third image.
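The composition rule of scheme one can be sketched with boolean subject masks. The tiny one-dimensional "images" below are illustrative stand-ins; the function itself is just the mask-based paste the text describes.

```python
import numpy as np

def compose_scheme_one(first, second, third, mask1, mask2):
    # Scheme one: start from the third image (shot with the averaged
    # exposure), then paste the first subject's pixels from the first
    # image and the second subject's pixels from the second image.
    target = third.copy()
    target[mask1] = first[mask1]
    target[mask2] = second[mask2]
    return target

# Toy 1-D images: the first is dark overall (correct for subject 1),
# the second bright overall (correct for subject 2).
first = np.array([10, 10, 10, 10])
second = np.array([200, 200, 200, 200])
third = np.array([100, 100, 100, 100])
mask1 = np.array([True, False, False, False])   # where subject 1 lies
mask2 = np.array([False, False, True, False])   # where subject 2 lies
result = compose_scheme_one(first, second, third, mask1, mask2)
# each subject keeps its own exposure; the rest keeps the third image
```

Here `result` holds 10 at the first subject, 200 at the second subject, and the third image's 100 everywhere else, matching the pixel-value statement above.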
To make the synthesized image more natural, the mobile phone performs smooth transition processing on the edges of the replaced first subject and second subject during image synthesis. Illustratively, when the mobile phone replaces the pixel points of the first subject on the third image with the pixel points of the first subject from the first image, it may smooth 9 pixel points outward from the edge of the replaced first subject on the third image, adjusting the pixel values of those 9 pixel points in sequence according to the pixel value of the replaced first subject from the first image at the edge position and the original pixel value at that position on the third image. For example, for a certain edge position A of the first subject on the target image, the pixel values of the 9 adjacent pixel points radiating from position A toward the outside of the first subject in a certain direction on the first image are a1, a2, a3, a4, a5, a6, a7, a8, and a9 in sequence; the pixel values of the 9 adjacent pixel points radiating in the same direction on the third image are b1, b2, b3, b4, b5, b6, b7, b8, and b9 in sequence. The fused pixel values of the 9 adjacent pixel points radiating from position A in the same direction outside the first subject on the target image are then, in sequence: 0.9·a1 + 0.1·b1, 0.8·a2 + 0.2·b2, 0.7·a3 + 0.3·b3, 0.6·a4 + 0.4·b4, 0.5·a5 + 0.5·b5, 0.4·a6 + 0.6·b6, 0.3·a7 + 0.7·b7, 0.2·a8 + 0.8·b8, and 0.1·a9 + 0.9·b9.
In this way, the brightness near the first subject on the target image is close to the brightness of the first subject on the first image, while the brightness farther from the first subject is close to that of the third image, so that the influence of the replaced part from the first image on the brightness of the un-replaced part of the third image fades naturally from inside to outside.
It should be noted that the number of pixel points smoothed outward from the edge of the replaced subject is not limited in the smooth transition processing. Generally, the larger the area of the replaced subject, the larger the number of outward-smoothed pixel points, so that the transition is more natural. In addition, the pixel values of the outward-smoothed pixel points change gradually, and the specific manner in which they change is not limited in the embodiment of the present application.
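The 9-pixel linear crossfade from the example above can be sketched directly; the pixel values used in the demonstration call are made up for illustration.

```python
def blend_edge(a, b):
    # a[i]: value of the i-th pixel outward from the edge on the image
    #       the subject was taken from (the first image in the example);
    # b[i]: original value at the same pixel on the third image.
    # The weights match the text: 0.9/0.1 at the edge, sliding to
    # 0.1/0.9 nine pixels out.
    assert len(a) == len(b) == 9
    return [((9 - i) * a[i] + (1 + i) * b[i]) / 10 for i in range(9)]

# A bright replaced subject (value 100) meeting a dark background
# (value 0): the fused values ramp smoothly from 90 down to 10.
print(blend_edge([100] * 9, [0] * 9))
```

The ramp guarantees that the fused value at the edge is dominated by the subject's source image and the ninth pixel is dominated by the third image, which is exactly the inside-to-outside fading described above.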
Scheme II:
the image to be synthesized comprises a first image and a second image, and the mobile phone synthesizes the stored first image and the second image to obtain a target image. After the mobile phone respectively performs panoramic segmentation on the first image and the second image, a corresponding first segmentation image and a corresponding second segmentation image are obtained. The mobile phone determines a first subject according to the first target object and determines a second subject according to the second target object.
Illustratively, (a) in fig. 13 is the first image captured with the first target exposure parameter 601, and (b) in fig. 13 is the second image captured with the second target exposure parameter 602. In some embodiments, according to the position of the first subject in the first segmentation map, the mobile phone extracts the pixel points of the first subject 1301 on the first image as shown in (a) of fig. 13. According to the position of the first subject in the second segmentation map, the mobile phone replaces the pixel points of the first subject on the second image shown in (b) of fig. 13 with the pixel points of the first subject 1301 from the first image. The mobile phone retains the pixel points at the position of the second subject 1302 on the second image. The mobile phone registers the parts of the first image and the second image other than the first subject and the second subject. Then, the mobile phone sets the pixel values of the pixel points on the second image other than those of the first subject and the second subject to the weighted average of the pixel values of the corresponding pixel points on the registered first image and second image, so that the resulting second image is the target image. For example, the target image may be as shown in the schematic diagram in (c) of fig. 13. It should be noted that the weighting coefficients may be defined as required and are not limited in the embodiment of the present application.
That is to say, the pixel value of the first subject on the target image is the same as the pixel value of the first subject on the first image, the pixel value of the second subject on the target image is the same as the pixel value of the second subject on the second image, and the pixel values of other pixel points on the target image except the first subject and the second subject are the weighted average of the pixel values of the corresponding pixel points on the registered first image and the registered second image.
It can be understood that the mobile phone may alternatively extract the pixel points of the second subject on the second image according to the position of the second subject in the second segmentation map, and use them to replace the pixel points of the second subject on the first image according to the position of the second subject in the first segmentation map. The mobile phone retains the pixel points at the position of the first subject on the first image, and registers the parts of the first image and the second image other than the first subject and the second subject. Then, the mobile phone sets the pixel values of the pixel points on the first image other than those of the first subject and the second subject to the weighted average of the pixel values of the corresponding pixel points on the registered first image and second image, so that the resulting first image is the target image. The weighting coefficients may again be defined as required and are not limited in the embodiment of the present application. That is, the pixel points of the first subject on the target image come from the first subject on the first image, the pixel points of the second subject come from the second subject on the second image, and the pixel values of the other pixel points are the weighted average of the pixel values of the corresponding pixel points on the registered first image and second image.
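Scheme two can be sketched the same way as scheme one, with the background now blended instead of taken from a third image. The toy arrays are illustrative, and the sketch assumes the two images are already registered.

```python
import numpy as np

def compose_scheme_two(first, second, mask1, mask2, w=0.5):
    # Scheme two: first subject from the first image, second subject
    # from the second image, remaining (registered) background pixels
    # blended by a weighted average with weight w for the first image.
    target = w * first + (1 - w) * second
    target[mask1] = first[mask1]
    target[mask2] = second[mask2]
    return target

first = np.array([10.0, 10.0, 10.0, 10.0])    # darker exposure
second = np.array([200.0, 200.0, 200.0, 200.0])  # brighter exposure
mask1 = np.array([True, False, False, False])
mask2 = np.array([False, False, True, False])
result = compose_scheme_two(first, second, mask1, mask2)
# background pixels become (10 + 200) / 2 = 105 with equal weights
```

Changing `w` shifts the background brightness toward either exposure, which is the "weighting coefficients may be defined as required" degree of freedom mentioned above.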
Also, in order to ensure that the synthesized image effect is more natural, the mobile phone may perform smooth transition processing on the edge of the replaced first body in the image synthesis process. The specific manner of the smooth transition processing is similar to that in the first scheme, and is not described herein again.
Comparing the two schemes, the advantage of scheme one over scheme two is as follows. For the parts other than the first subject and the second subject, the target image retains the original pixels of the third image, that is, pixels acquired with the third target exposure parameter obtained from the weighted average of the first and second target exposure parameters. Since the target object for which the user wants to adjust the exposure parameters is usually an excessively bright or dark part of the image, an entire image to be synthesized obtained with the target exposure parameter corresponding to that target object may itself be overexposed or underexposed, causing some image detail to be lost in that image. Consequently, a target image obtained by weighted averaging of pixel values from such partially detail-deficient images may lose even more detail. Therefore, for the parts other than the first subject and the second subject, directly retaining the original pixels of the third image, rather than using the weighted average of the pixel values in the first image and the second image, allows the target image to reflect more details of the objects in the current shooting scene and gives the target image a larger dynamic range.
The advantage of scheme two over scheme one is as follows. Scheme two requires shooting only two images to be synthesized, so the shooting time is shorter, whereas scheme one requires shooting three images, so the shooting time is longer. The shorter the shooting time, the less likely the object to be shot is to move. Therefore, scheme two can better avoid problems such as image ghosting or blurring caused by subject mismatch due to movement of the object being shot.
In some embodiments of the present application, after detecting the shooting operation of the user, the mobile phone may sequentially display the images to be synthesized shot according to the respective target exposure parameters and the final target image. By displaying the first image shot according to the first target exposure parameter, the second image shot according to the second target exposure parameter, and the finally synthesized target image, the user can conveniently and intuitively see through comparison that the first image and the second image suffer from overexposure or underexposure and therefore have a poorer effect, while the finally synthesized target image has a better effect. According to the shooting method provided by the embodiment of the present application, in the viewfinding stage the user sets an appropriate target exposure parameter for a subject of interest in the image according to personal requirements or wishes, and the images to be synthesized are then shot according to those target exposure parameters. The target image synthesized from them can therefore better match the user's wishes, meet the user's personalized shooting requirements, improve the image quality of the synthesized image, make the shot image more natural, and improve the user experience.
In other embodiments of the present application, after detecting the shooting operation of the user, the mobile phone may directly display the final target image without displaying the image to be synthesized that is obtained by shooting according to each target exposure parameter.
In some embodiments, since the mobile phone needs a certain processing time to generate the target image from the images to be synthesized, the user may be shown information such as "processing", "synthesizing", or a rotating ring during the synthesis of the target image, so that the user knows the mobile phone is neither stuck nor malfunctioning.
In summary, compared with the prior art, the shooting method in the embodiment of the present application has the following beneficial effects:
First, in the embodiment of the present application, the pixel values of the parts of the finally synthesized target image other than the first subject and the second subject are the pixel values of the corresponding parts of the image captured with the third target exposure parameter, which is obtained by weighted averaging of the first and second target exposure parameters, rather than by weighted averaging of pixel values alone. Therefore, the target image obtained by the shooting method of the embodiment of the present application can retain more details of the shooting objects.
In one conventional HDR shooting technique, global photometry is performed first during shooting; assuming the metered exposure value is EV, a first image is shot at +1 EV, a second image at -1 EV, and a third image at EV, and the final image is synthesized from these three images. The second image shot at -1 EV may be partially underexposed and thus lack much detail, and the first image shot at +1 EV may be partially overexposed and likewise lack much detail. A target image obtained by weighted averaging of the pixels of images that lack detail may lack even more detail and may even be blurred. In the embodiment of the present application, for one or more subjects of interest to the user, the pixels on one image to be synthesized are directly replaced with the pixels of another image to be synthesized, so that more details are retained, the image is clearer, and the dynamic range is improved.
Also, in the conventional HDR image capturing technology, the pixel values of the final image are obtained by weighting pixel values. Pixels of different regions, including adjacent regions, within the same subject in the target image may come from different images, which may cause the pixels of different regions within the same subject to appear fractured: the brightness difference between different regions (such as adjacent regions) of the same subject is large, so that the synthesized target image is unnatural and the image effect is poor.
In another prior art, the mobile phone obtains a preview image through global photometry and divides the pixel points of the image, according to pixel value, into a first pixel set with higher brightness and a second pixel set with lower brightness. The user can separately adjust the first target exposure parameter and the second target exposure parameter corresponding to the first pixel set and the second pixel set. In the final image, the first pixel set is obtained with the first target exposure parameter and the second pixel set with the second target exposure parameter. However, the pixel values in each pixel set may be scattered rather than fully concentrated. Therefore, in the final image, different pixel values within the same subject or in adjacent regions may belong to different pixel sets and thus be obtained with different target exposure parameters, so that pixel values in different regions of the same subject may differ greatly, pixels in different regions of the same subject appear fractured, and the brightness difference between different regions (such as adjacent regions) of the same subject is large, resulting in an unnatural synthesized target image with a poor image effect. In the embodiment of the present application, pixel replacement is performed on the target subject as a whole, which avoids the image fracturing problem that may exist in the prior art.
Second, in the embodiment of the present application, the target exposure parameters are determined according to the target objects selected by the user and therefore accord with the user's wishes. The target image finally synthesized from the images to be synthesized obtained with these target exposure parameters can thus better match the user's wishes and expectations, meet the user's personalized shooting requirements, and improve the user's shooting experience.
In the existing HDR shooting technology, the exposure of each part of the image is generated entirely automatically by the mobile phone and cannot be selected manually, so it does not always meet the user's expectation and serious distortion may occur. For example, a part that should appear darker under a normal visual effect may be automatically brightened by the mobile phone, which does not conform to the visual effect and distorts the image. In the embodiment of the present application, the exposure parameters can be manually adjusted or set by the user, so that the final target image meets the user's wishes, the image quality is improved, and the image is more natural.
Third, in the embodiment of the present application, after detecting the shooting operation of the user, the mobile phone automatically shoots and synthesizes the target image, which is produced in a single shot without post adjustment. This spares the user the difficulty of post-processing images.
In some prior art, the user is often required to perform post-processing on an image to obtain a target image that matches the user's wishes; such post-processing is complicated and difficult. In contrast, in the embodiment of the present application, the target image is produced in a single shot without any post adjustment by the user.
The above description takes the case where the user selects two target objects as an example; the user may also select N target objects, where N is an integer greater than or equal to 2. In this case, the mobile phone generates N corresponding target exposure parameters according to the N target objects selected by the user, acquires N images to be synthesized using the N target exposure parameters, and generates the final target image from the N images to be synthesized. Alternatively, the mobile phone may determine an (N+1)-th exposure parameter from the N target exposure parameters, obtain N+1 images to be synthesized according to the N+1 exposure parameters, and generate the final target image from the N+1 images to be synthesized.
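The N-object generalization is a straightforward extension of the two-subject composition: one mask and one image to be synthesized per target object, pasted onto a base image. The sketch below uses made-up toy arrays; the base would typically be the image shot with the (N+1)-th, averaged exposure parameter.

```python
import numpy as np

def compose_n(base, images, masks):
    # Paste each of the N subjects' pixels from its own exposure onto
    # the base image; where masks overlap, later masks win.
    target = base.copy()
    for img, mask in zip(images, masks):
        target[mask] = img[mask]
    return target

base = np.array([50, 50, 50])                   # averaged-exposure shot
images = [np.array([1, 1, 1]), np.array([9, 9, 9])]
masks = [np.array([True, False, False]),
         np.array([False, True, False])]
result = compose_n(base, images, masks)
# subject 1 keeps value 1, subject 2 keeps value 9, the rest keeps 50
```

With N = 2 this reduces to scheme one described earlier.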
In the shooting method provided by the embodiment of the application, the mobile phone can determine the target exposure parameters corresponding to different target subjects according to the will of the user, and obtain the image to be synthesized according to the target exposure parameters. When the images are synthesized, the pixels in the images to be synthesized are replaced by taking the main body as a unit, so that the exposure degree of different main bodies on the target images can be flexibly adjusted.
The image synthesis process described in the above embodiments performs pixel replacement in units of subjects. In other embodiments, pixel replacement may also be performed in units of regions. For example, when the user wants to adjust the exposure of a certain target area, the user can select that target area, and the mobile phone generates the exposure parameter corresponding to the target area through automatic photometry. Then, the mobile phone shoots the image to be synthesized according to the generated exposure parameter, and when synthesizing the target image, replaces only the pixel values within the target area on the image to be synthesized, rather than the pixel values of the whole subject corresponding to the target area. In this way, the mobile phone can flexibly adjust the exposure of different areas of the target image according to the user's wishes. When the same subject contains both light and dark parts, the user can replace only part of the pixel values within the subject by selecting an area, meeting the user's requirement for finer-grained exposure adjustment and more refined shooting.
In other embodiments, a subject in the image may include multiple components; for example, a person may include components such as the head, body, arms, hands, legs, and feet. The mobile phone may also perform component segmentation on the acquired image to divide the areas where different components are located, and may perform pixel replacement in units of components. For example, the target object may be a target component. When the user wants to adjust the exposure of a certain component, the user can select the target component or the target area where the component is located, and the mobile phone generates the exposure parameter corresponding to the target component through automatic photometry. Then, the mobile phone shoots the image to be synthesized according to the generated exposure parameter, and replaces only the pixel values within the target component when synthesizing the target image. In this way, the mobile phone can flexibly adjust the exposure of different components of the target image according to the user's wishes.
In other embodiments, the mobile phone may save the images to be synthesized together with the final synthesized target image. For example, the mobile phone may store, as an image sequence, the images to be synthesized and the target image sequentially obtained during shooting in the professional HDR mode, and may play the images in the sequence in order in response to a user operation.
It should also be noted that, in the above description, the electronic device is taken as a mobile phone as an example, when the electronic device is other devices with a shooting function (for example, a tablet computer, a wearable device, an AR/VR device, or a camera device), the shooting method provided in the embodiment of the present application may also be adopted to perform image synthesis so as to generate the target image, and details are not repeated here.
It will be appreciated that, in order to implement the above-described functions, the electronic device includes corresponding hardware and/or software modules for performing the respective functions. In combination with the exemplary algorithm steps described in connection with the embodiments disclosed herein, the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In this embodiment, the electronic device may be divided into functional modules according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware. It should be noted that the division of the modules in this embodiment is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
For example, in one division, referring to fig. 14, the electronic device 1400 includes a mode determination module 1401, a parameter acquisition module 1402, an image acquisition module 1403, an image segmentation module 1404, and an image synthesis module 1405. The mode determination module is used for determining to enter the professional HDR mode according to a user operation indicating to enter the professional HDR mode, or according to a user operation of selecting a target object. The parameter acquisition module is used for acquiring each target exposure parameter according to a target object indicated by the user in the professional HDR mode. The image acquisition module is used for shooting images to be synthesized according to the target exposure parameters acquired by the parameter acquisition module in the professional HDR mode. The image segmentation module is used for performing panoramic segmentation on the images to be synthesized or the preview image so as to distinguish the image areas where different objects are located, for example, determining the image area where the subject corresponding to the target object is located. The image synthesis module is used for performing pixel replacement on the image area where the subject corresponding to the target object is located, and synthesizing the images to be synthesized into the final target image.
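The synthesis step performed by the image synthesis module can be sketched as follows. The equal-weight average used for the non-subject pixels is an assumption for illustration (claim 10 only requires a weighted average of the registered frames), and the function name `synthesize_target` is hypothetical:

```python
import numpy as np

def synthesize_target(images, subject_masks):
    """Fuse registered frames into the target image.

    images: list of H x W x 3 frames, each shot with the exposure
    parameter metered for one user-selected subject.
    subject_masks: list of H x W boolean masks, one per frame, marking
    the subject that frame's exposure was metered for.
    """
    # Non-subject pixels: (equal-)weighted average of all frames.
    target = np.mean(np.stack(images).astype(np.float32), axis=0)
    target = target.astype(images[0].dtype)
    # Subject pixels: copied from the frame exposed for that subject.
    for img, mask in zip(images, subject_masks):
        target[mask] = img[mask]
    return target
```

This mirrors the pixel rules of claim 10: each subject keeps the pixel values of its own correctly exposed frame, while the remaining pixels are a blend of the registered frames.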
Embodiments of the present application also provide an electronic device including one or more processors and one or more memories. The one or more memories are coupled to the one or more processors for storing computer program code comprising computer instructions which, when executed by the one or more processors, cause the electronic device to perform the associated method steps described above to implement the photographing method in the above embodiments.
Embodiments of the present application further provide a computer-readable storage medium, in which computer instructions are stored, and when the computer instructions are run on an electronic device, the electronic device is caused to execute the above related method steps to implement the shooting method in the above embodiments.
Embodiments of the present application further provide a computer program product, which when running on a computer, causes the computer to execute the above related steps to implement the shooting method performed by the electronic device in the above embodiments.
In addition, embodiments of the present application also provide an apparatus, which may specifically be a chip, a component, or a module, and which may include a processor and a memory connected to each other; the memory is used for storing computer-executable instructions, and when the apparatus runs, the processor can execute the computer-executable instructions stored in the memory, so that the chip executes the shooting method performed by the electronic device in the above method embodiments.
The electronic device, the computer-readable storage medium, the computer program product, or the chip provided in this embodiment are all configured to execute the corresponding method provided above, so that the beneficial effects achieved by the electronic device, the computer-readable storage medium, the computer program product, or the chip may refer to the beneficial effects in the corresponding method provided above, and are not described herein again.
Through the description of the above embodiments, those skilled in the art will understand that, for convenience and simplicity of description, only the division of the above functional modules is used as an example, and in practical applications, the above function distribution may be completed by different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical functional division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, that is, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor (processor) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

1. A photographing method, characterized by comprising:
the electronic equipment enters a target shooting mode of the camera application and displays a preview interface;
after the electronic equipment detects that a user operates a first target object on a preview image, determining a first target exposure parameter according to the first target object;
after the electronic equipment detects that a user operates a second target object on the preview image, determining a second target exposure parameter according to the second target object;
after the electronic equipment detects the shooting operation of a user, displaying a target image, wherein the target image is generated according to a first image to be synthesized and a second image to be synthesized; the first image to be synthesized is obtained by shooting according to the first target exposure parameter, and the second image to be synthesized is obtained by shooting according to the second target exposure parameter.
2. The method of claim 1, wherein after the electronic device enters a target capture mode of a camera application, the method further comprises:
the electronic device prompts a user to select a plurality of target objects.
3. A photographing method, characterized by comprising:
the electronic equipment opens a camera application and displays a photographed preview interface;
after the electronic equipment detects that a user operates a first target object on a preview image, determining a first target exposure parameter according to the first target object;
after the electronic equipment detects that a user operates a second target object on the preview image, determining a second target exposure parameter according to the second target object;
the electronic equipment enters a target shooting mode;
after the electronic equipment detects the shooting operation of a user, displaying a target image, wherein the target image is generated according to a first image to be synthesized and a second image to be synthesized; the first image to be synthesized is obtained by shooting according to the first target exposure parameter, and the second image to be synthesized is obtained by shooting according to the second target exposure parameter.
4. The method of claim 3, wherein the electronic device enters a target capture mode comprising:
the electronic equipment prompts a user whether to enter the target shooting mode;
the electronic equipment responds to the instruction operation of a user to enter the target shooting mode.
5. The method according to any one of claims 1-4, further comprising:
before the electronic equipment displays the target image, the first image to be synthesized and the second image to be synthesized are respectively displayed.
6. The method according to any one of claims 1-5, further comprising:
after the electronic equipment determines a first target exposure parameter according to the first target object, acquiring one or more frames of first preview images according to the first target exposure parameter, and displaying the first preview images on a preview interface;
and after determining a second target exposure parameter according to the second target object, the electronic equipment obtains one or more frames of second preview images according to the second target exposure parameter, and displays the second preview images on a preview interface.
7. The method of any of claims 1-6, wherein the electronic device determines a first target exposure parameter from the first target object, comprising:
the electronic equipment automatically measures light according to the reflected light of the shot object corresponding to the first target object;
and the electronic equipment determines the first target exposure parameter according to the result of automatic photometry.
8. The method of claim 7, further comprising:
after the electronic equipment detects that a user operates a first target object on a preview image, displaying an exposure adjusting control on a preview interface;
and the electronic equipment responds to the operation of a user for the exposure adjusting control to adjust the first target exposure parameter.
9. The method according to any one of claims 1 to 6, wherein the electronic device determines the first target exposure parameter according to the first target object after detecting the operation of the user on the first target object on the preview image, and the method comprises:
after the electronic equipment detects that a user operates a first target object on a preview image, displaying an exposure setting control corresponding to the first target object on a preview interface;
the electronic device determines the first target exposure parameter in response to the user operating the exposure setting control.
10. The method according to any one of claims 1 to 9, wherein a pixel value of a first subject on the target image is the same as a pixel value of the first subject on the first image to be synthesized, the first subject being a subject corresponding to the first target object;
the pixel value of a second subject on the target image is the same as the pixel value of the second subject on the second image to be synthesized, and the second subject is a subject corresponding to the second target object;
the pixel values of other pixel points on the target image except the first main body and the second main body are weighted average values of pixel values of corresponding pixel points on the first image to be synthesized and the second image to be synthesized after registration.
11. The method according to any one of claims 1-9, further comprising:
the electronic equipment obtains a third target exposure parameter according to the first target exposure parameter and the second target exposure parameter;
the target image is generated according to the first image to be synthesized, the second image to be synthesized and a third image to be synthesized, and the third image to be synthesized is obtained by shooting according to the third target exposure parameter.
12. The method of claim 11, further comprising:
and after the electronic equipment detects the shooting operation of the user, displaying the third image to be synthesized.
13. The method according to claim 11 or 12, wherein a pixel value of a first subject on the target image is the same as a pixel value of the first subject on the first image to be synthesized, the first subject being a subject corresponding to the first target object;
the pixel value of a second subject on the target image is the same as the pixel value of the second subject on the second image to be synthesized, and the second subject is a subject corresponding to the second target object;
and the pixel values of other pixel points on the target image except the first main body and the second main body are the same as the pixel values of corresponding pixel points on the third image to be synthesized.
14. The method of claim 10 or 13, wherein the first target object is a first target subject, the first subject being the first target subject;
or the first target object is a first target area, and the first main body is a main body where the first target area is located, or the first main body is a main body with the largest area ratio among a plurality of main bodies included in the first target area.
15. The method of claim 1, wherein the operation on the first target object comprises a single click operation, a double click operation, a pressure press operation, a long press operation, or an operation delineating the first target object on the first target object.
16. The method of claim 3, wherein the operation on the first target object comprises a double-click operation, a pressure-on operation, a long-press operation, or an operation delineating the first target object on the first target object.
17. The method according to any one of claims 1 to 16, wherein the first target exposure parameter comprises one or more of sensitivity ISO, aperture, shutter time, or exposure value EV.
18. An electronic device, comprising:
a screen for displaying an interface;
one or more processors;
a memory;
and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions which, when executed by the electronic device, cause the electronic device to perform the photographing method of any of claims 1-17.
19. A computer-readable storage medium, comprising computer instructions which, when run on a computer, cause the computer to perform the photographing method according to any one of claims 1 to 17.
20. A computer program product, characterized in that it causes a computer to carry out the shooting method according to any one of claims 1-17, when said computer program product is run on the computer.
CN202010576174.8A 2020-06-22 2020-06-22 Shooting method and equipment Active CN113905182B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010576174.8A CN113905182B (en) 2020-06-22 2020-06-22 Shooting method and equipment


Publications (2)

Publication Number Publication Date
CN113905182A true CN113905182A (en) 2022-01-07
CN113905182B CN113905182B (en) 2022-12-13

Family

ID=79186713

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010576174.8A Active CN113905182B (en) 2020-06-22 2020-06-22 Shooting method and equipment

Country Status (1)

Country Link
CN (1) CN113905182B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110292242A1 (en) * 2010-05-27 2011-12-01 Canon Kabushiki Kaisha User interface and method for exposure adjustment in an image capturing device
CN107370960A (en) * 2016-05-11 2017-11-21 联发科技股份有限公司 Image processing method
CN108377341A (en) * 2018-05-14 2018-08-07 Oppo广东移动通信有限公司 Photographic method, device, terminal and storage medium
CN108462833A (en) * 2018-03-26 2018-08-28 北京小米移动软件有限公司 Image pickup method, device and computer readable storage medium
CN109495693A (en) * 2017-09-11 2019-03-19 佳能株式会社 Picture pick-up device and method, image processing equipment and method and storage medium
US20190222769A1 (en) * 2018-01-12 2019-07-18 Qualcomm Incorporated Systems and methods for image exposure


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114422682A (en) * 2022-01-28 2022-04-29 安谋科技(中国)有限公司 Photographing method, electronic device, and readable storage medium
CN114422682B (en) * 2022-01-28 2024-02-02 安谋科技(中国)有限公司 Shooting method, electronic device and readable storage medium
CN115242983A (en) * 2022-09-26 2022-10-25 荣耀终端有限公司 Photographing method, electronic device, computer program product, and readable storage medium
CN116389885A (en) * 2023-02-27 2023-07-04 荣耀终端有限公司 Shooting method, electronic equipment and storage medium
CN116389885B (en) * 2023-02-27 2024-03-26 荣耀终端有限公司 Shooting method, electronic equipment and storage medium
CN116347220A (en) * 2023-05-29 2023-06-27 合肥工业大学 Portrait shooting method and related equipment
CN116347220B (en) * 2023-05-29 2023-07-21 合肥工业大学 Portrait shooting method and related equipment

Also Published As

Publication number Publication date
CN113905182B (en) 2022-12-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant