CN114079726A - Shooting method and equipment

Info

Publication number
CN114079726A
Authority
CN
China
Prior art keywords
target
faces
face
camera
depth
Prior art date
Legal status
Granted
Application number
CN202010815068.0A
Other languages
Chinese (zh)
Other versions
CN114079726B (en)
Inventor
张金雷
吴亮
王妙锋
吴杰
徐川善
何永龙
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202010815068.0A priority Critical patent/CN114079726B/en
Publication of CN114079726A publication Critical patent/CN114079726A/en
Application granted granted Critical
Publication of CN114079726B publication Critical patent/CN114079726B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of this application provide a shooting method and device, relating to the field of electronic technology. The method focuses according to the depth information of multiple target faces, so that as many target faces as possible fall within the depth of field of the camera and are clearly imaged on the captured target image, improving the shooting experience of multi-person group photos. The scheme is as follows: after the electronic device opens a camera application, it displays a first preview image that includes n target faces, among them a first target face whose image is clear and a second target face whose image is blurred; the device determines a target focusing position according to the depth information of the n target faces; and after the camera focuses to the target focusing position, a second preview image is displayed on the preview interface, where the second preview image includes the n target faces and the images of both the first and second target faces are clear. The embodiments of this application are used for image shooting.

Description

Shooting method and equipment
Technical Field
The embodiment of the application relates to the technical field of electronics, in particular to a shooting method and equipment.
Background
In current image capture, an electronic device such as a mobile phone or a tablet computer generally uses auto-focusing: according to a focusing strategy, the middle face or the largest face to be photographed is taken as the focusing target, and the motor in the camera is driven according to the depth information of the focusing target, so that the camera focuses to the position of the focusing target.
Consequently, when the objects to be photographed include multiple people at different distances, only one face is imaged clearly on the captured image, for example the face in the middle position or the largest face nearest to the camera, while the other faces are obviously blurred, so the user's multi-person group photo experience is poor. Illustratively, referring to fig. 1, in a multi-person group photo scene, the image of face 1 is clear while the images of faces 2 and 3 are blurred.
Disclosure of Invention
The embodiment of the application provides a shooting method and equipment, which can enable as many target faces as possible to be within the depth of field range of a camera in a multi-person group photo scene, so that as many target faces as possible are imaged clearly on a target image obtained by shooting, and the shooting experience of multi-person group photo is improved.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
in one aspect, an embodiment of the present application provides a shooting method applied to an electronic device that includes a camera. The method includes the following steps. After the electronic device opens the camera application, a first preview image is displayed on the photographing preview interface; the first preview image includes n target faces, where n is an integer greater than 1, and the n target faces include a first target face whose image is clear and a second target face whose image is blurred. The electronic device determines the target focusing position according to the depth information of the n target faces. After the camera focuses to the target focusing position, the electronic device displays a second preview image on the preview interface; the second preview image includes the n target faces, and the images of the first target face and the second target face on the second preview image are clear.
In this scheme, in the photographing preview state of a multi-person group photo, the electronic device can adjust focusing according to the depth information of the target faces, so that after the adjustment as many target faces as possible are clearly imaged on the preview interface, improving the shooting experience of multi-person group photos.
In one possible design, the method further includes: after the electronic equipment detects the shooting operation of a user, a target image is obtained through shooting, the target image comprises n target faces, and the images of the first target face and the second target face on the target image are clear.
According to this scheme, after the electronic device adjusts focusing according to the depth information of the target faces, the target image can be captured at the adjusted focusing position, so that as many target faces as possible are clearly imaged on the target image, improving the shooting experience of multi-person group photos.
In another possible design, face frames of n target faces are displayed on the first preview image, and the area of the face frame of the target face is greater than or equal to a first preset value.
That is to say, the target face may be a face automatically determined by the electronic device, and the face frame of the target face is large, and the target face is close to the camera.
For example, in one approach, a face frame of a non-target face is not displayed on the first preview image; in another scheme, a face frame of a non-target face is also displayed on the first preview image, but the face frame of the non-target face is smaller than a first preset value.
In another possible design, the n target faces are faces designated by the user based on the first preview image.
That is, the target face may not be automatically determined by the electronic device, but rather selected by the user.
In another possible design, the electronic device prompts the user that a focusing adjustment is being performed to make more faces clearly imaged while the camera is focusing to the target focusing position.
In this way, the electronic device can notify the user of the focusing adjustment, so that the user knows that focusing adjustment is in progress and that the device has not frozen, and understands why more faces become clearly imaged.
In another possible design, the electronic device determines the target focusing position according to the depth information of the n target faces, including: and after the electronic equipment responds to the preset operation of the user and enters a group photo mode, determining the target focusing position according to the depth information of the n target faces.
According to the scheme, after the electronic equipment enters the group photo mode, the target focusing position is determined according to the depth information of the n target faces, and therefore focusing adjustment is automatically carried out.
In another possible design, the electronic device sorts the depth information d_1, d_2, ..., d_n of the n target faces in ascending order to obtain the sequence d'_1, d'_2, ..., d'_n. The electronic device determining the target focusing position according to the depth information of the n target faces includes: the electronic device moves the focusing distance backwards through the sequence one element at a time, starting from d'_1, until the m-th target face is found that satisfies formula one:
m = min m
s.t. d'_m - d'_1 ≤ ΔL_1 && d'_n - d'_m ≤ ΔL_2
where m ≤ n, "s.t." denotes "satisfy", "&&" denotes "and", "min" denotes "minimum value", ΔL_1 represents the front depth of field of the camera, and ΔL_2 represents the back depth of field of the camera. The position of the m-th target face corresponding to d'_m is the target focusing position. In this way, when the camera focuses to the target focusing position where the m-th target face corresponding to the target focusing distance is located, the 1st to (m-1)-th target faces are within the front depth of field, the (m+1)-th to n-th target faces are within the back depth of field, and all the target faces are within the depth of field and can be clearly imaged.
If no d'_m satisfying formula one exists, the focusing distance is moved backwards through the sequence one element at a time, starting from d'_1, until the m-th target face is found that satisfies formula two:
m = min m
s.t. d'_m - d'_1 ≤ ΔL_1
where m ≤ n, and the position of the m-th target face corresponding to d'_m is the target focusing position.
In this way, when the camera focuses to the target focusing position corresponding to the target focusing distance, that position is where the m-th target face is located, so the m-th target face is imaged most clearly; at least the first m-1 target faces are within the front depth of field and can therefore be imaged clearly; and one or more of the (m+1)-th to n-th target faces may fall within the back depth of field and can thus also be imaged clearly.
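A minimal sketch of this search, assuming the sorted depths and the depth-of-field bounds are already available; the function and variable names are illustrative, and ΔL_1/ΔL_2 are treated as constants here although in practice they depend on the candidate focusing distance (see formulas 1 and 2 in the detailed description):

```python
def find_target_focus(depths: list[float], dl1: float, dl2: float) -> float:
    """depths: d'_1 ... d'_n sorted ascending; dl1/dl2: front/back depth of field."""
    n = len(depths)
    # Formula one: smallest m such that focusing on face m puts face 1 inside
    # the front depth of field and face n inside the back depth of field.
    for dm in depths:
        if dm - depths[0] <= dl1 and depths[n - 1] - dm <= dl2:
            return dm
    # Formula two (fallback): read literally, m = min m with d'_m - d'_1 <= ΔL_1
    # is always the first face; taking the largest such m instead keeps the most
    # faces inside the front depth of field, which matches the stated effect.
    best = depths[0]
    for dm in depths:
        if dm - depths[0] <= dl1:
            best = dm
    return best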
In another possible design, the electronic device determining the target focusing position according to the depth information of the n target faces includes: when a first preset condition is met, the electronic device determines the target focusing position according to the depth information of the n target faces. In one form, the first preset condition includes: the focusing position corresponding to the first preview image is located at the first target face, which is the target face closest to the camera, and d'_n - d'_1 > ΔL_2. That is, when the first target face is focused, if d'_n - d'_1 > ΔL_2, the n target faces cannot all be clearly imaged within the back depth of field, so the target focusing position can be determined according to the depth information of the n target faces and focusing adjustment performed, so that as many target faces as possible are clearly imaged within the depth of field. Alternatively, the first preset condition includes: the focusing position corresponding to the first preview image is located at the s-th target face corresponding to d'_s, where s ≤ n, and d'_s - d'_1 > ΔL_1 || d'_n - d'_s > ΔL_2, where "||" denotes "or". That is, when the s-th target face is focused, if d'_s - d'_1 > ΔL_1 || d'_n - d'_s > ΔL_2, the n target faces are not all within the depth of field and cannot all be clearly imaged, so the target focusing position can be determined according to the depth information of the n target faces and focusing adjustment performed, so that as many target faces as possible are clearly imaged within the depth of field.
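A sketch of this trigger check under the same assumptions (illustrative names; s is the 1-based index of the currently focused target face):

```python
def needs_refocus(depths: list[float], s: int, dl1: float, dl2: float) -> bool:
    d1, dn, ds = depths[0], depths[-1], depths[s - 1]
    if s == 1:
        # Focused on the nearest face: the farthest face escapes the back depth of field.
        return dn - d1 > dl2
    # Focused on the s-th face: some face escapes the front or back depth of field.
    return ds - d1 > dl1 or dn - ds > dl2
```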
In another possible design, the electronic device determining the target focusing position according to the depth information of the n target faces includes: with the focusing distance set in turn to the depth information corresponding to each of the n target faces, the electronic device counts the number of target faces whose depth information falls within the depth of field. If, when the focusing distance is d_m, the number of target faces whose depth information falls within the depth of field is the largest, then the position of the target face corresponding to d_m is the target focusing position.
In this way, when the camera focuses to the position of the target face corresponding to d_m, the number of target faces within the depth of field of the camera is the largest, so the number of target faces that can be clearly imaged is the largest.
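A sketch of this counting design; dof is an assumed helper returning (ΔL_1, ΔL_2) for a given focusing distance, e.g. built on formulas 1 and 2 in the detailed description:

```python
def best_focus_by_count(depths: list[float], dof) -> float:
    """Return the face depth d_m whose depth of field covers the most faces."""
    def covered(dm: float) -> int:
        dl1, dl2 = dof(dm)
        return sum(1 for d in depths if dm - dl1 <= d <= dm + dl2)
    return max(depths, key=covered)
```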
In another possible design, the electronic device determining the target focusing position according to the depth information of the n target faces includes: the electronic device calculates the sum u of the depth information d'_1 corresponding to the first target face nearest to the camera and the front depth of field. The electronic device calculates the image distance v from 1/f = 1/u + 1/v, where f denotes the focal length of the camera. The electronic device then calculates the drive code value k of the camera motor according to the image distance v; the position corresponding to k codes is the target focusing position.
In this way, the first target face is located at the edge of the front depth of field nearest the camera, so that, while the first target face is still clearly imaged, the number of target faces within the front depth of field is the largest; accordingly more target faces fall within the back depth of field, and more target faces within the whole depth of field can be clearly imaged.
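A sketch of this design. front_dof and code_from_image_distance are hypothetical helpers, not from the patent: the first evaluates ΔL_1 via formula 1 (approximated at d'_1, although strictly ΔL_1 depends on the resulting focusing distance), and the second maps an image distance to a motor drive code using module-specific calibration that the patent does not detail.

```python
def front_dof(L: float, f: float = 6.0, F: float = 2.0, delta: float = 0.008) -> float:
    # Formula 1 with illustrative lens parameters in mm: ΔL_1 = FδL² / (f² + FδL).
    return (F * delta * L ** 2) / (f ** 2 + F * delta * L)

def code_from_image_distance(v: float) -> int:
    # Hypothetical linear motor calibration; a real module uses calibrated data.
    return round(1000 * (v - 6.0))

def focus_code_nearest_at_edge(d1: float, f: float = 6.0) -> int:
    """d1: depth of the nearest target face; f: focal length (same unit, mm)."""
    u = d1 + front_dof(d1)               # object distance u = d'_1 + ΔL_1
    v = f * u / (u - f)                  # image distance from 1/f = 1/u + 1/v
    return code_from_image_distance(v)   # drive code value k for the camera motor
```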
In another possible design, the electronic device determining the target focusing position according to the depth information of the n target faces includes: the electronic device calculates the sums u_1, u_2, ..., u_n of the depth information d_1, d_2, ..., d_n corresponding to each of the n target faces and the front depth of field. Taking u_1, u_2, ..., u_n in turn as the object distance, the electronic device determines the number of target faces within the front depth of field. If, with the object distance u_m calculated from the depth information d_m, the number of target faces within the front depth of field is the largest, the electronic device calculates the corresponding image distance v_m from 1/f = 1/u_m + 1/v_m, where f denotes the focal length of the camera. The electronic device calculates the drive code value k of the camera motor according to the image distance v_m; the position corresponding to k codes is the target focusing position.
In this way, the m-th target face corresponding to d_m is near the edge of the front depth of field closest to the camera, and the number of target faces within the front depth of field is the largest, so more target faces also fall within the back depth of field, and more target faces within the whole depth of field can be clearly imaged.
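A sketch of this per-face variant, reusing the hypothetical helpers from the previous sketch:

```python
def focus_code_best_front_coverage(depths: list[float], f: float = 6.0) -> int:
    """Pick u_m = d_m + ΔL_1 whose front depth of field contains the most faces."""
    def faces_in_front_dof(u: float) -> int:
        dl1 = front_dof(u)
        return sum(1 for d in depths if u - dl1 <= d <= u)
    u_m = max((d + front_dof(d) for d in depths), key=faces_in_front_dof)
    v_m = f * u_m / (u_m - f)            # image distance from 1/f = 1/u_m + 1/v_m
    return code_from_image_distance(v_m)
```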
In another aspect, an embodiment of the present application provides a shooting device, which is included in an electronic device. The device has the function of realizing the behavior of the electronic equipment in any one of the above aspects and possible designs, so that the electronic equipment executes the shooting method executed by the electronic equipment in any one of the possible designs of the above aspects. The function can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes at least one module or unit corresponding to the above functions. For example, the apparatus may comprise a display unit, a determination unit, a processing unit, and the like.
In another aspect, an embodiment of the present application provides an electronic device, including: a camera for capturing images; a screen for displaying an interface; one or more processors; and a memory having code stored therein. When the code is executed by the electronic device, the electronic device is caused to perform the shooting method performed by the electronic device in any of the possible designs of the above aspects.
In another aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a memory having code stored therein. When the code is executed by the electronic device, the electronic device is caused to perform the shooting method performed by the electronic device in any of the possible designs of the above aspects.
In another aspect, an embodiment of the present application provides a computer-readable storage medium, which includes computer instructions, when the computer instructions are executed on an electronic device, cause the electronic device to perform the shooting method in any one of the possible designs of the foregoing aspects.
In still another aspect, the present application provides a computer program product, which when run on a computer, causes the computer to execute the shooting method performed by the electronic device in any one of the possible designs of the above aspect.
In another aspect, an embodiment of the present application provides a chip system, which is applied to an electronic device. The chip system includes one or more interface circuits and one or more processors; the interface circuit and the processor are interconnected through a line; the interface circuit is used for receiving signals from a memory of the electronic equipment and sending the signals to the processor, and the signals comprise computer instructions stored in the memory; the computer instructions, when executed by the processor, cause the electronic device to perform the method of capturing in any of the possible designs of the above aspects.
For the advantageous effects of the other aspects, reference may be made to the description of the advantageous effects of the method aspects, which is not repeated herein.
Drawings
FIG. 1 is a diagram illustrating an image effect obtained by shooting in the prior art;
fig. 2A is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present disclosure;
fig. 2B is a schematic diagram of a software structure of an electronic device according to an embodiment of the present disclosure;
fig. 3A is a schematic structural diagram of a camera provided in the embodiment of the present application;
FIG. 3B is a schematic illustration of an imaging system provided by an embodiment of the present application;
fig. 4 is a flowchart of a shooting method provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of a set of interfaces provided by an embodiment of the present application;
fig. 6A is a schematic interface diagram provided in an embodiment of the present application;
FIG. 6B is a schematic view of another interface provided by an embodiment of the present application;
FIG. 6C is a schematic view of another interface provided by an embodiment of the present application;
FIG. 7A is a schematic view of another interface provided by an embodiment of the present application;
FIG. 7B is a schematic view of another interface provided by an embodiment of the present application;
FIG. 8 is a schematic view of another set of interfaces provided by embodiments of the present application;
FIG. 9 is a schematic view of another set of interfaces provided by embodiments of the present application;
FIG. 10 is a schematic view of another set of interfaces provided by embodiments of the present application;
fig. 11 is a schematic diagram of a ranging method according to an embodiment of the present disclosure;
fig. 12 is a schematic diagram of another distance measuring method according to an embodiment of the present disclosure;
fig. 13A is a schematic view illustrating a focusing effect according to an embodiment of the present disclosure;
fig. 13B is a schematic view illustrating another focusing effect provided in the embodiment of the present application;
fig. 13C is a schematic view of another focusing effect provided in the embodiment of the present application;
fig. 13D is a schematic view illustrating another focusing effect provided in the embodiment of the present application;
fig. 13E is a schematic view of another focusing effect provided in the embodiment of the present application;
fig. 13F is a schematic view of another focusing effect provided in the embodiment of the present application;
fig. 13G is a schematic view illustrating another focusing effect provided in the embodiment of the present application;
fig. 14 is a schematic diagram of a group of shooting effects provided in an embodiment of the present application;
fig. 15 is a schematic view of another group of shooting effects provided in the embodiment of the present application;
fig. 16 is a schematic view of another set of interfaces provided by the embodiments of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments herein, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" herein merely describes an association between associated objects, meaning that three relationships are possible; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more.
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present embodiment, "a plurality" means two or more unless otherwise specified.
The embodiment of the application provides a shooting method, which can be applied to electronic equipment, and can focus according to depth information of a plurality of target objects when a shooting scene comprises the plurality of target objects, so that as many target objects as possible are within the depth of field range of a camera of the electronic equipment, and as many target objects as possible can be clearly imaged on a shot target image, and the shooting experience of multi-target object group photo is improved.
For example, when the target object is a target face, the shooting method provided by the embodiment of the application can enable as many target faces as possible to be within the depth of field range of the camera of the electronic device in a multi-person group photo scene, so that as many images of the target faces as possible on the target image obtained by shooting can be clearly imaged, and the shooting experience of multi-person group photo is improved.
For example, the electronic device may be a mobile phone, a tablet computer, a wearable device, an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), or a professional camera, and the specific type of the electronic device is not limited in this embodiment.
For example, fig. 2A shows a schematic structural diagram of the electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, interfaces, etc. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, with N being a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. Referring to fig. 3A, the camera 193 includes a lens, a photosensitive element, a camera motor, and the like. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
As shown in fig. 3A, the lens assembly may include a plurality of lens groups (or lens sets) and an aperture. The lens is used to converge the light reflected by the object to be photographed onto the focal plane of the photosensitive element for imaging. The aperture is used to control the amount of light that passes through the lens onto the photosensitive surface of the photosensitive element. The camera motor can move one or more lens groups, or move the photosensitive element, so as to change the position of the lenses or of the photosensitive element, thereby changing the focusing distance and the focus position of the lens.
As shown in fig. 3B, the object to be photographed at the focus is imaged most clearly, and the corresponding imaging plane is the focus plane. The depth of field ΔL of the lens is the sum of the front depth of field ΔL_1 and the back depth of field ΔL_2. The front depth of field ΔL_1 corresponds to the distance range ΔL_1 in front of the focus; an object to be photographed within this range is within the range where the lens images clearly. The back depth of field ΔL_2 corresponds to the distance range ΔL_2 behind the focus; an object to be photographed within this range is also within the range where the lens images clearly. According to the imaging principle, the depth of field of the lens is calculated as follows:
ΔL_1 = FδL² / (f² + FδL)    (formula 1)
ΔL_2 = FδL² / (f² - FδL)    (formula 2)
ΔL = ΔL_1 + ΔL_2 = 2f²FδL² / (f⁴ - F²δ²L²)    (formula 3)
In formulas 1 to 3, δ represents the permissible circle-of-confusion diameter; f represents the focal length of the lens, namely the distance from the rear optical principal point of the lens to the focus, which is the distance from the optical center of the lens group to the point where incident parallel light converges; F represents the shooting aperture value of the lens; L represents the focusing distance. Formulas 1 to 3 are the industry-recognized method for calculating the depth of field of a lens, defined according to human-eye sharpness. When the object to be photographed is within the depth of field, it is imaged clearly; when it is outside the depth of field, it is imaged blurrily.
In the embodiments of the present application, the focus of the lens may also be referred to as the in-focus position, the in-focus point of the lens, or the in-focus point of the camera 193, or the like.
In embodiments of the present application, camera 193 may include one or more of the following: a tele camera, a wide camera, a super wide camera, a zoom camera, or a depth camera, etc. The long-focus camera has a small shooting range and is suitable for shooting distant scenes; the wide-angle camera has a large shooting range; the shooting range of the super wide-angle camera is larger than that of the wide-angle camera, and the super wide-angle camera is suitable for shooting scenes with larger pictures such as panorama and the like. The depth camera may be used to measure an object distance of an object to be photographed, that is, depth information of the object to be photographed, and may include, for example, a three-dimensional (3D) depth sensing camera, a time of flight (TOF) depth camera, a binocular depth camera, or the like.
The camera 193 may include a main shot and a sub shot, among others. The main shooting may be used to capture an image, and may include, for example, a telephoto camera, a wide-angle camera, a super wide-angle camera, a zoom camera, or the like. The side shots may be used for ranging or other auxiliary functions, and may include, for example, a depth camera.
The camera 193 may include a front camera and/or a rear camera. The front camera can comprise one or more main shots, and the rear camera can also comprise one or more main shots. When the image is shot, the target main shot used by the electronic equipment for collecting the image can be a default main shot or a main shot selected by a user.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
In the embodiment of the present application, the processor 110 may determine the target focusing distance and the target focusing position by executing the instructions stored in the internal memory 121, in combination with the depth information of the plurality of target objects, so that as many target objects as possible are within the depth of field of the camera of the electronic device to enable clear imaging.
A distance sensor 180F is used for measuring distance. The electronic device 100 may measure distance by infrared or laser. In some embodiments, when photographing a scene, the electronic device 100 may use the distance sensor 180F to measure distance for fast focusing.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor 180K may transmit the detected touch operation to the application processor to determine the type of the touch event, thereby implementing human-computer interaction. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
In the embodiment of the application, in a multi-target object group photo scene, a target main shooting can collect images, and detection components such as an auxiliary camera can measure depth information of a plurality of target objects. The processor 110 may determine the target focusing position according to the depth information of the plurality of target objects by operating the instructions stored in the internal memory 121, so that as many target objects as possible are within the depth of field range of the camera of the electronic device, and thus, as many images of the target objects as possible can be clearly imaged on the target image obtained by shooting, thereby improving the shooting experience of multi-target object group photo.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application takes an Android system with a layered architecture as an example, and exemplarily illustrates a software structure of the electronic device 100.
Fig. 2B is a block diagram of the software structure of the electronic device 100 according to the embodiment of the present application. The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into five layers, which are, from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, the hardware abstraction layer (HAL), and the kernel layer. The application layer may include a series of application packages.
As shown in fig. 2B, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2B, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables the application to display notification information in the status bar, can be used to convey notification-type messages, can disappear automatically after a short dwell, and does not require user interaction. Such as a notification manager used to inform download completion, message alerts, etc. The notification manager may also be a notification that appears in the form of a chart or scroll bar text at the top status bar of the system, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, prompting text information in the status bar, sounding a prompt tone, vibrating the electronic device, flashing an indicator light, etc.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports a variety of commonly used audio, video format playback and recording, and still image files, among others. The media library may support a variety of audio-video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, and the like.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The HAL layer is an interface layer between the operating system kernel and the hardware circuitry, which abstracts the hardware. The HAL layer comprises a focusing module used for determining a target focusing distance and a target focusing position according to depth information of a plurality of target objects measured by hardware, so that as many target objects as possible are within the depth of field range of the camera of the electronic equipment, and images of as many target objects as possible on a shot target image can be clearly imaged.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver. The core layer may also be referred to as a drive layer.
It is understood that in a scene with a plurality of target objects to be photographed, whether the imaging of the plurality of target objects is clear is related to the object distance of the target objects and the in-focus position and depth of field of the lens. In a shooting scene of a plurality of target objects, when the plurality of target objects correspond to different object distances, after the focusing position and the depth of field are determined, the object distance of the target object close to the lens is smaller and easily exceeds the foreground depth range of the lens, and the object distance of the target object far away from the lens is larger and easily exceeds the rear depth of field range of the lens, so that the target object beyond the depth of field forms a fuzzy image.
Furthermore, as can be seen from formulas 1 to 3 above, the depth of field is proportional to the focusing distance. The focusing distance L is the sum of the object distance and the image distance. Since the object distance is much larger than the image distance in actual shooting, the object distance is usually used in place of the focusing distance L. That is, the depth of field is proportional to the object distance: the larger the object distance, the larger the depth of field, i.e., the larger the sum of the front depth of field ΔL_1 and the back depth of field ΔL_2; correspondingly, the smaller the object distance, the smaller the depth of field. Therefore, in a front-camera shooting scene with multiple target objects to be photographed, the object distances of the target objects are small, so the depth of field is small, more target objects easily fall outside the depth of field, and the target objects outside the depth of field are imaged blurrily.
In the shooting method provided by the embodiment of the application, the electronic equipment can determine the target focusing distance and the target focusing position by combining the depth information of a plurality of target objects, so that as many target objects as possible are in the depth of field range of the camera of the electronic equipment, as many target objects as possible can be clearly imaged on a shot target image, and the shooting experience of multi-target object group shooting is improved.
The shooting method provided by the embodiment of the application can be applied to shooting scenes that the distances between a plurality of target objects and a camera are different and the difference of the depth information of the plurality of target objects is large under the condition that the target objects are close to the camera of the electronic equipment. For example, one target object is one meter away from the camera, and another target object is two meters away from the camera.
The shooting method provided by the embodiment of the application can be applied to a front multi-target object shooting scene and can also be applied to a rear multi-target object shooting scene. In a multi-target object shooting scene, the depth information of different target objects is easy to have larger difference, so that more target objects are easy to exceed a depth of field range, and the imaging of the target objects beyond the depth of field range is fuzzy. For example, the shooting method can be used for scenes such as a self-timer group photo of a plurality of persons, or a self-timer group photo of a person and a scene.
The following explains a shooting method provided in an embodiment of the present application, taking an electronic device as a mobile phone having a structure shown in fig. 2A and 2B as an example. The screen of the mobile phone is the touch screen, and can be used for interface display or man-machine interaction and other operations. Referring to fig. 4, a shooting method provided in an embodiment of the present application may include:
401. and starting a photographing function by the mobile phone, and displaying a preview image on a preview interface.
When a user wants to use the mobile phone to take a picture, the photographing function of the mobile phone can be started. For example, the mobile phone may start the camera application, or start another application with a photographing or video recording function (such as Douyin (TikTok) or an AR application such as Huawei Cyberverse), so as to start the photographing function of the mobile phone.
Illustratively, the mobile phone starts a camera application after detecting that the user clicks a camera icon 501 shown in (a) of fig. 5, so as to start a photographing function, and displays a preview interface shown in (b) of fig. 5. As another example, the mobile phone displays an interface of a desktop or a non-camera application, starts a photographing function after detecting a voice instruction of the camera application opened by the user, and displays a preview interface as shown in (b) of fig. 5. It should be noted that the mobile phone may also start the photographing function in response to other operations of the user, such as a touch operation, a voice instruction, or a shortcut gesture, and the operation of triggering the mobile phone to start the photographing function is not limited in the embodiment of the present application.
In a preview state, the mobile phone acquires an image by adopting the target camera, generates a preview image and displays the preview image on a preview interface. The target camera is a target main shot, and may be a main shot used by a default of the mobile phone or a main shot selected by a user, for example.
In some embodiments of the present application, just after the photographing function is started, only the object closest to the target camera of the mobile phone or the middle object on the preview image is imaged clearly, and other objects are imaged blurrily. For example, immediately after the photographing function is started, as shown in fig. 5 (b), the face closest to the target camera of the mobile phone on the preview image is clearly imaged, and other faces are blurred.
402. The mobile phone determines a plurality of target objects to be photographed based on the preview image.
After the mobile phone starts the photographing function, a plurality of target objects to be photographed can be determined through the preview image. The plurality of target objects may include one or more of the following types: a human face (human eye, or human), a landmark (e.g., a sign), an animal, a doll, a sight, a sculpture, an art or plant, and the like. The embodiment of the present application does not limit the specific type of the target object.
The target object may be automatically recognized by the mobile phone or may be specified by the user, and will be described below.
(1) The multiple target objects are automatically identified by the mobile phone
In some embodiments, one or more preset object types are set on the mobile phone, and an object belonging to the preset object type and identified by the mobile phone on the preview image is a target object. For example, the preset object type includes faces, and in a multi-person self-shooting scene, if the mobile phone recognizes that objects such as two faces, a cat, a tree, a building and the like are included on the preview interface, it is determined that the two faces are target objects.
For another example, the preset object types include faces and animals, and referring to fig. 6A, when the mobile phone recognizes that the preview interface includes objects such as two faces, one cat, one tree, a building, and the like, it is determined that the two faces and the one cat are target objects.
For another example, the preset object types include faces and buildings, and referring to fig. 6B, when the mobile phone recognizes that the preview interface includes objects such as two faces, a cat, a tree, a building, and the like, it is determined that the two faces and the building are target objects.
For another example, the preset object type includes a human face or a famous building (such as the Great Wall, the Eiffel Tower, or a pyramid), and when the mobile phone recognizes that the preview interface includes a human face, a pyramid, and a camel, the human face and the pyramid are determined to be the target objects.
For another example, the preset object type includes a face and a sight spot signboard, referring to fig. 6C, in a self-photographing scene, the mobile phone recognizes that the preview interface includes a face, a sight spot signboard, a tree, a building and other objects, and then determines that the face and the sight spot signboard are target objects.
In other embodiments, the mobile phone may measure depth information of each object to be photographed within the field angle of the target camera, and determine the object to be photographed, whose depth information is less than or equal to a preset value 1, as the target object. That is, the target object is closer to the target camera, and the shooting method provided by the embodiment of the application can be used for shooting the shooting scene of the multi-target object closer to the target camera.
On one hand, the target object of interest which the user wants to shoot is usually closer to the target camera; on the other hand, subject to the capability limitations of the distance measuring device (e.g., depth camera), the object closer to the target camera can accurately measure its depth information. Thus, the target object may be closer to the target camera. Moreover, different distance measuring devices can accurately measure different measuring ranges of the depth information of the object to be shot.
In other embodiments, the target object in which the user is interested is usually closer to the target camera, and the area occupied by the image of the target object on the preview image is larger, so that the mobile phone can determine that the object occupying the area of the image larger than the preset value 2 on the preview image is the target object.
In other embodiments, one or more preset object types are set on the mobile phone, and an object that is identified by the mobile phone on the preview image and belongs to the preset object type and meets a preset condition is a target object. For example, the preset object type is a human face, and the preset condition is that the human face frame is greater than or equal to a preset value 3. That is, the mobile phone may detect a face frame on the preview image, and determine that a face with the face frame greater than or equal to a preset value is a target object. For example, a face detection module preset in the mobile phone may down-sample the image of the camera image sensor to reduce the amount of data to be processed. And the face detection module performs face detection on the down-sampled image and outputs a face frame. The face of the user, which is interested in, is usually close to the target camera, and the face frame is large; and the face of the passerby which is not interested by the user in the background is usually far away from the target camera, and the face frame is small. Therefore, in order to avoid that passers-by in the background are mistakenly recognized as target objects, the mobile phone can determine a plurality of faces with face frames larger than or equal to the preset value 3 as the target objects.
For another example, the preset object type is a human face, and the preset condition is that the human face depth information is less than or equal to a preset value 1. That is, the mobile phone may determine a face with depth information less than or equal to a preset value 1 as the target object. That is, the target object is closer to the target camera, and the shooting method provided by the embodiment of the application can be used for shooting a plurality of human face shooting scenes which are closer to the target camera.
When the face is close to the target camera, the face frame is also large. In some cases, when the face frame of a certain face is greater than or equal to the preset value 3, the depth information of the face is also less than or equal to the preset value 1.
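As an illustration only, a minimal Python sketch of this selection logic follows; the Face fields and both thresholds (stand-ins for preset value 3 and preset value 1) are assumed values for the example, not values given in the document:

    from dataclasses import dataclass

    @dataclass
    class Face:
        box_w: int       # face-frame width in pixels
        box_h: int       # face-frame height in pixels
        depth_m: float   # measured distance to the target camera, in meters

    def select_target_faces(faces, min_box_area=4000, max_depth_m=3.0):
        """Keep faces whose face frame is large enough (preset value 3) and
        which are close enough to the camera (preset value 1); small, distant
        background passers-by are filtered out."""
        return [f for f in faces
                if f.box_w * f.box_h >= min_box_area and f.depth_m <= max_depth_m]

    # Example: two nearby subjects are kept, the distant passer-by is dropped.
    faces = [Face(120, 150, 1.2), Face(100, 130, 1.8), Face(20, 25, 7.5)]
    print(len(select_target_faces(faces)))  # -> 2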
In other embodiments, the mobile phone automatically determines a plurality of target objects after entering the group photo mode; and a plurality of target objects are not automatically determined in the ordinary photographing mode. In the group photo mode, the mobile phone may automatically determine a plurality of target objects by using the method described in the above embodiment.
The mobile phone may enter the group photo mode in various ways, and the embodiment of the present application does not limit the specific way. For example, referring to fig. 7A, after the mobile phone opens the camera application, a group photo mode control 701 is included on the preview interface; after detecting that the user clicks the group photo mode control 701, the mobile phone enters the group photo mode. For another example, after the mobile phone opens the camera application, if multiple objects to be photographed belonging to the preset object types are detected on the preview image, the user is prompted whether to enter the group photo mode, as shown in fig. 7B; the mobile phone enters the group photo mode after the user selects "yes". As another example, when the camera application has not been started, the mobile phone may, in response to a voice command from the user to enter the group photo mode, open the camera application and directly enter the group photo mode.
In the preview state, if an object that is a target object moves out of the frame of the preview image, the mobile phone removes it from the target objects; if a new object moves into the frame, the mobile phone identifies whether it is a target object.
In some embodiments of the present application, after determining a plurality of target objects, the mobile phone may prompt the plurality of target objects to the user in a manner of displaying information or voice broadcasting, or the like. For example, the mobile phone may display a mark such as a circle or a box on the target object for prompting, or the mobile phone may prompt through text information. For example, referring to fig. 6A, the mobile phone prompts the target object through a box; for another example, referring to fig. 6B, the mobile phone prompts the target object through text information.
In some prior-art solutions, after the mobile phone recognizes a face, a face frame is displayed on the preview interface. In some technical solutions of the embodiments of the present application, when the target object is a face, the face frame of the target object displayed on the preview interface by the mobile phone is different from the other face frames. For example, the color of the face frame of the target object on the preview interface is different from that of the other face frames, or the face frame of the target object is larger than the other face frames, or the face frame of the target object is a circular frame while the other face frames are square frames. In other technical solutions of the embodiments of the present application, when the target object is a face, the mobile phone displays only the face frame of the target object on the preview interface and does not display face frames of non-target objects.
In some embodiments of the present application, after the mobile phone determines and prompts a plurality of target objects, the target objects may be added or deleted in response to a preset operation of the user. The preset operation may be a touch operation, a voice instruction operation, or a gesture operation, and the embodiment of the present application is not limited. For example, the touch operation may be a single click, a double click, a long press, a pressure press, or an operation of circling an object (e.g., an operation of circling or framing an object), or the like. For example, referring to fig. 8 (a), after the mobile phone detects that the user has pressed the cat 801 on the preview image shown in fig. 8 (a), the cat is added to the target object as shown in fig. 8 (b). As another example, after the mobile phone detects that the user has long pressed the face image 802 shown in fig. 8 (b) on the preview image, the face is deleted from the target object as shown in fig. 8 (c).
In other technical solutions, after determining and prompting a plurality of target objects, the mobile phone may enter a modification mode, and then modify the target objects in response to a preset operation of a user. For example, a control 1 is displayed on the preview interface, the mobile phone enters a modification mode after detecting an operation of clicking the control 1 by a user, and then adds or deletes a target object in response to a preset operation of the user. After the user completes the modification of the target object, the mobile phone can exit the modification mode. For example, in the modification mode, the preview interface includes a determination/completion control, and the mobile phone exits the modification mode after detecting an operation of clicking the determination/completion control by the user.
(2) A plurality of target objects are specified by a user
After the mobile phone starts the photographing function, a plurality of target objects can be determined in response to the preset operation of the user on the preview interface. The preset operation is used for designating some objects as target objects. The preset operation may be a touch operation, a voice instruction operation, or a gesture operation, and the embodiment of the present application is not limited. For example, the touch operation may be a single click, a double click, a long press, a pressure press, an operation of circling an object, or the like.
In some embodiments, the plurality of target objects specified by the user may be objects close to the target camera, and the depth information of different target objects may differ considerably.
For example, in the case shown in (a) in fig. 9, after the mobile phone detects an operation of double-clicking the face image 901 by the user, the corresponding face is determined as the target object as shown in (b) in fig. 9; after the mobile phone detects that the user double-clicks the face image 902 shown in (b) in fig. 9, the corresponding face is also determined as the target object as shown in (c) in fig. 9.
In other embodiments, after the mobile phone starts the photographing function, the user may be prompted whether to designate target objects. For example, referring to fig. 10 (a), the mobile phone may display a prompt message: "Specify multiple target objects so that the target objects are clearly imaged on the captured image?" After the mobile phone detects that the user clicks the "yes" control, it can determine target objects in response to the preset operation of the user on the preview interface. After the mobile phone detects that the user circles the face image 1001 and the face image 1002 shown in (b) of fig. 10, both corresponding faces are determined as target objects, as shown in (c) of fig. 10.
As another example, when the target object is of a preset object type and the preset object type is a human face, the mobile phone may display a prompt message: "Multiple faces detected. Specify multiple target faces so that the target faces are clearly imaged on the captured image?"
In other embodiments, after the mobile phone enters the group photo mode, a plurality of target objects are determined in response to preset operation of a user on a preview interface; the plurality of target objects are not determined in response to the preset operation of the user on the preview interface in the ordinary photographing mode.
In some embodiments, the mobile phone may further modify the specified multiple target objects in response to a preset operation of the user, for example, add or delete the target objects.
In the preview state in the case (2), if an object moves out of the screen of the preview image and belongs to the target object, the mobile phone deletes the object from the target object. And if a certain object moves into the picture of the preview image and the mobile phone detects that the user designates the object as the target object, adding the object into the target object.
403. The mobile phone acquires depth information of each target object in the plurality of target objects.
After the mobile phone determines a plurality of target objects, the spatial depth information of each target object can be acquired. In some embodiments, if the mobile phone has measured and obtained the depth information of each target object in step 402, the mobile phone does not measure and obtain the depth information of each target object in step 403, and the previously obtained depth information may be directly used. In other embodiments, the depth information of the target object in the preview state may change, so the mobile phone may periodically measure the depth information of each target object.
The method for obtaining the depth information of the target object by the mobile phone measurement may be various, for example, a binocular distance measurement method, a standard head model method, or a Phase Detection (PD) method. The embodiment of the present application does not limit the specific method for obtaining the depth information of the target object by measurement.
For example, referring to fig. 11, in the binocular ranging method, the mobile phone uses the distance y between the two cameras and the photographing angles θ1 and θ2 of the target object with respect to each camera, and applies the law of sines to the resulting triangle to obtain calculation formula 4, from which the depth information z of each target object is obtained. Here, θ1 and θ2 can be calculated from the displacement difference (disparity) of the same target object between the images captured by the two cameras.
z = y · sinθ1 · sinθ2 / sin(θ1 + θ2)   (formula 4)
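A minimal Python sketch of formula 4, assuming θ1 and θ2 are measured from the baseline joining the two cameras:

    import math

    def binocular_depth(y, theta1, theta2):
        """Formula 4: depth z of a point seen at angles theta1 and theta2
        (radians, measured from the baseline) by two cameras a baseline y
        apart, via the law of sines."""
        return y * math.sin(theta1) * math.sin(theta2) / math.sin(theta1 + theta2)

    # Example: 5 cm baseline, object seen at 84 degrees from each camera.
    print(round(binocular_depth(0.05, math.radians(84), math.radians(84)), 3))  # -> 0.238 (m)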
For another example, when the target object is a human face, in the standard human head model method, the mobile phone may estimate the distances of different faces from the face height (or the distance between the eyes). For example, as shown in fig. 12, assuming that the height of a face in the standard human head model is A and the height of that face on the image acquired by the camera is a, the mobile phone can use the known focal length EFL of the camera and the proportional relationship between them to obtain calculation formula 5, from which the depth information D of the target object is obtained.
D = (A / a) · EFL   (formula 5)
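A corresponding sketch of formula 5; the 200 mm standard face height used here is an assumed, illustrative value rather than one given in the document:

    def head_model_depth(face_height_on_image_mm, efl_mm, model_face_height_mm=200.0):
        """Formula 5: D = (A / a) * EFL, from the similar triangles of the
        pinhole model. The standard face height A is an assumed value."""
        return (model_face_height_mm / face_height_on_image_mm) * efl_mm

    # Example: a face 0.5 mm tall on the sensor with a 4 mm focal length -> 1.6 m.
    print(head_model_depth(0.5, 4.0))  # -> 1600.0 (mm)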
In some embodiments, if a target object is too far away from the target camera and exceeds the measurement range of the ranging device, and the mobile phone cannot accurately measure the depth information of the target object, the mobile phone may discard the target object.
404. The mobile phone determines a target focusing position according to the depth information of the target objects, the target focusing position enables as many target objects as possible to be in the depth of field range of the target camera, and the target camera is focused to the target focusing position.
After the mobile phone acquires the depth information of the plurality of target objects, the target focusing position can be determined according to the depth information of the plurality of target objects. For example, the mobile phone may determine the target focus position directly according to the depth information of the plurality of target objects. Or, the mobile phone may determine the target focusing position according to the depth information of the plurality of target objects only when it is determined that the depth information of the plurality of target objects satisfies the preset condition.
In some embodiments, the preset condition includes: the depth information of at least two target objects is less than a preset threshold. That is, when at least two target objects are close to the target camera, the mobile phone determines the target focusing position from the depth information of the multiple target objects using the algorithm in the following embodiments, so that as many target objects as possible are imaged clearly. When only one target object is close to the target camera, or all target objects are far from it, it is difficult for the mobile phone to accurately measure the depth information of the distant target objects, and hence difficult to determine the target focusing position from their depth information using the algorithm in the following embodiments.
In other embodiments, the depth information of the object to be measured that the mobile phone's ranging device can measure accurately has a measurement range [a1, a2]. The preset condition includes: dmax is less than or equal to a2, and dmin is greater than or equal to a1. That is, when the depth information of the n target objects is within the range the mobile phone can measure accurately, the mobile phone determines the target focusing position from the depth information of the target objects using the algorithm in the following embodiments, so that as many target objects as possible are imaged clearly. When dmax is greater than a2, or dmin is less than a1, the mobile phone cannot accurately measure the depth information of every target object, so it may abandon the focusing adjustment based on the algorithm in the following embodiments and continue shooting in the current focusing state.
In some other embodiments, the mobile phone retains only the target objects whose depth information is within the measurement range [a1, a2] of its ranging device and discards target objects whose depth information lies outside [a1, a2]; the target focusing position is then determined, using the algorithm in the following embodiments, from the retained target objects whose depth information can be measured accurately, so that as many of the target objects within [a1, a2] as possible are imaged clearly.
In the following, a method for determining a target focusing position of a mobile phone according to depth information of a plurality of target objects is exemplified by taking a plurality of target objects as target faces. The method can comprise the following steps:
(a) The mobile phone calculates the front depth of field ΔL1, the back depth of field ΔL2, and the depth of field ΔL according to formula 1, formula 2, and formula 3, respectively, in combination with the device parameters of the target camera. For example, the device parameters of the target camera may include the allowable circle of confusion δ, the lens focal length f, the shooting aperture value F of the lens, the focusing distance L, and the like.
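The exact expressions of formulas 1 to 3 appear earlier in the document; assuming they are the standard thin-lens depth-of-field relations ΔL1 = F·δ·L²/(f² + F·δ·L) and ΔL2 = F·δ·L²/(f² − F·δ·L), a minimal sketch of step (a) could be:

    def depth_of_field(delta, f, F, L):
        """Front depth dL1, back depth dL2, and total depth of field for a lens
        of focal length f, aperture value F, allowable circle of confusion delta,
        focused at distance L (all lengths in the same unit, e.g. millimeters)."""
        dL1 = F * delta * L**2 / (f**2 + F * delta * L)   # front depth of field
        if f**2 <= F * delta * L:                         # beyond the hyperfocal
            return dL1, float('inf'), float('inf')        # distance, back DOF is infinite
        dL2 = F * delta * L**2 / (f**2 - F * delta * L)   # back depth of field
        return dL1, dL2, dL1 + dL2

    # Example: f = 4 mm, F = 1.8, delta = 0.005 mm, focused at 1.5 m.
    front, back, total = depth_of_field(0.005, 4.0, 1.8, 1500.0)
    print(round(front), round(back), round(total))  # -> 686 8100 8786 (mm)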
(b) If the focusing position in the current focusing state is at the first target face closest to the target camera (that is, the position with the minimum depth information), the mobile phone determines the magnitude relationship between the back depth of field ΔL2 and the difference between the object distance dmax corresponding to the target face with the maximum depth information and the object distance dmin corresponding to the target face with the minimum depth information. Referring to fig. 13A, if dmax - dmin ≤ ΔL2, then with the target face of minimum depth information (that is, the first target face) at the focusing position, the other target faces can all obtain a clear image within the back depth of field. Therefore, the target focusing position is the current focusing position at the target face with the minimum depth information, and the mobile phone continues to capture images with the current focusing position.
If dmax - dmin > ΔL2, the mobile phone sorts d1, d2, ..., dn in ascending order and denotes the sorted sequence d'1, d'2, ..., d'n; that is, d'1 is dmin, d'n is dmax, and d'n - d'1 > ΔL2. The mobile phone then performs step (d) or step (e) below.
(c) If the focusing position in the current focusing state is not at the first target face closest to the target camera (that is, not at the minimum depth information) — for example, the camera is currently focused on the face nearest the middle of the frame, or on the face closest to one of several target faces specified by the user — the mobile phone sorts d1, d2, ..., dn in ascending order, denoted d'1, d'2, ..., d'n after sorting; that is, d'1 is dmin and d'n is dmax. Referring to fig. 13B, if the camera is currently focused at the depth information d's corresponding to the s-th target face, and d's - d'1 ≤ ΔL1 && d'n - d's ≤ ΔL2, then with the s-th target face corresponding to d's at the focusing position, the other target faces can obtain clear images within the front or back depth of field: the 1st to (s−1)-th target faces may lie within the front depth of field, and the (s+1)-th to n-th target faces may lie within the back depth of field. Therefore, the target focusing position is the current focusing position at the s-th target face, and the mobile phone continues to capture images with the current focusing position. Here, && denotes "and".
If d's - d'1 ≤ ΔL1 && d'n - d's ≤ ΔL2 is not satisfied, that is, d's - d'1 > ΔL1 || d'n - d's > ΔL2, where || denotes "or", the mobile phone performs step (d) or step (e) below.
In steps (d) and (e), the mobile phone determines the target focusing distance using the target faces themselves as the movement adjustment precision (or adjustment granularity) of the focusing distance, that is, the focusing distance moves from one target face to the next.
(d) The mobile phone moves the focusing distance backwards through d'1, d'2, ..., d'n, one target face at a time, until it reaches the m-th target face (m ≤ n) for which both of the following hold: the distance from the nearest target face to the focusing position is less than or equal to the front depth of field, and the distance from the farthest target face to the focusing position is less than or equal to the back depth of field. That is:
m = min m  s.t.  d'm - d'1 ≤ ΔL1 && d'n - d'm ≤ ΔL2   (formula 6)
where s.t. denotes "satisfying", && denotes "and", and min denotes the minimum value. The distance d'm corresponding to the smallest m satisfying formula 6 is the target focusing distance. Thus, referring to fig. 13C, when the camera focuses on the target focusing position at the m-th target face corresponding to this distance, the 1st to (m−1)-th target faces lie within the front depth of field and the (m+1)-th to n-th target faces lie within the back depth of field, so every target face is within the depth of field and can be imaged clearly.
Because step (b) was not satisfied, dmax - dmin > ΔL2, that is, d'n - d'1 > ΔL2, so m = 1 does not satisfy formula 6. In this case, the mobile phone can therefore search d'2, ..., d'n to determine whether there is a d'm satisfying formula 6.
If no m satisfying formula 6 can be found, the mobile phone moves the focusing distance backwards from d'1, d'2, ..., one target face at a time, until it reaches the m-th target face (m ≤ n) for which the distance from the nearest target face to the focusing position is less than or equal to the front depth of field. That is:
m = min m  s.t.  d'm - d'1 ≤ ΔL1   (formula 7)
The distance d'm satisfying formula 7 is the target focusing distance. Thus, referring to fig. 13D, when the camera focuses on the target focusing position corresponding to this distance, the focusing position is at the m-th target face, so the image of the m-th target face is sharpest; at least the first m−1 target faces lie within the front depth of field and can therefore be imaged clearly; and one or more of the target faces m+1 to n may lie within the back depth of field and thus also be imaged clearly. If no m satisfying formula 7 can be found, the mobile phone continues to capture images with the current focusing position.
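A minimal sketch of the face-by-face search of step (d) with the formula 7 fallback; for brevity the front and back depths of field are treated here as fixed numbers, although by formulas 1 and 2 they actually vary with the focusing distance:

    def choose_focus_depth(depths_sorted, front_dof, back_dof):
        """Face-by-face search of step (d): first look for the smallest m
        satisfying formula 6 (nearest face within the front DOF and farthest
        face within the back DOF); failing that, fall back to formula 7
        (nearest face within the front DOF only)."""
        d1, dn = depths_sorted[0], depths_sorted[-1]
        for dm in depths_sorted:                      # formula 6
            if dm - d1 <= front_dof and dn - dm <= back_dof:
                return dm
        for dm in depths_sorted:                      # formula 7 fallback
            if dm - d1 <= front_dof:
                return dm
        return None  # keep the current focusing position

    # Example: faces at 1.0, 1.3, 2.0 and 4.5 m; 0.5 m front / 3.3 m back DOF.
    print(choose_focus_depth([1.0, 1.3, 2.0, 4.5], 0.5, 3.3))  # -> 1.3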
(e) The mobile phone sets the focusing distance in turn to the depth information d'1, d'2, ..., d'n corresponding to each target face. For each such focusing distance, the mobile phone counts the number of target faces whose depth information lies within the resulting depth of field. If that number is largest when the focusing distance is d'm, then d'm is the target focusing distance. Thus, referring to fig. 13E, when the target camera focuses on the target focusing position corresponding to this distance — the position of the m-th target face — the number of target faces within the depth of field of the target camera is the largest, and the number of target faces that can be imaged clearly is the largest.
In other embodiments, in step (e) the mobile phone sets the focusing distance in turn to the unsorted depth information d1, d2, ..., dn corresponding to each target face, and for each such focusing distance counts the number of target faces whose depth information lies within the resulting depth of field. If that number is largest when the focusing distance is dm, then dm is the target focusing distance; when the target camera focuses on the corresponding target focusing position, it focuses on the target face corresponding to dm, the number of target faces within the depth of field of the target camera is the largest, and the number of target faces that can be imaged clearly is the largest. For example, in step (b) above, if dmax - dmin > ΔL2, the mobile phone may set the focusing distance to the depth information d1, d2, ..., dn corresponding to each target face and thereby determine the target focusing distance.
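A sketch of the counting strategy of step (e), recomputing the depth of field at each candidate focusing distance; the lens parameters in the example are illustrative values, not ones from the document:

    def depth_of_field(delta, f, F, L):
        """Front and back depth of field at focusing distance L (a two-value
        variant of the sketch after step (a))."""
        front = F * delta * L**2 / (f**2 + F * delta * L)
        if f**2 <= F * delta * L:
            return front, float('inf')
        return front, F * delta * L**2 / (f**2 - F * delta * L)

    def best_focus_by_count(depths, delta, f, F):
        """Step (e): try each target-face depth as the focusing distance and
        keep the candidate whose depth of field contains the most faces."""
        best_d, best_count = None, -1
        for L in depths:
            front, back = depth_of_field(delta, f, F, L)
            count = sum(1 for d in depths if L - front <= d <= L + back)
            if count > best_count:
                best_d, best_count = L, count
        return best_d

    # Example (millimeters): faces at 0.8 m, 1.2 m, 1.5 m and 4 m.
    print(best_focus_by_count([800.0, 1200.0, 1500.0, 4000.0], 0.005, 4.0, 1.8))  # -> 1200.0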
In still other embodiments, the scheme described in step (d) or step (e) need not be conditioned on dmax - dmin > ΔL2 or on d's - d'1 > ΔL1 || d'n - d's > ΔL2, and may be performed independently as a complete solution.
In other embodiments, the mobile phone may determine the target focusing distance using the drive code value of the motor in the target camera as the movement adjustment precision of the focusing distance. A code value of the motor represents a quantized current magnitude; the current can be converted into a corresponding motor thrust that pushes the camera motor to move. Because the camera motor is fixed to the lens assembly, it can push the lens and thereby change the image distance, so the code value of the camera motor corresponds to an image distance. There are various camera motor types, with various principles and mechanisms for converting a code value into a motor position change, which are not limited herein.
In step (b) above, if dmax - dmin > ΔL2, the mobile phone may perform the following step (f).
(f) The mobile phone calculates the sum u of the depth information d'1 corresponding to the first target face closest to the target camera and the front depth of field ΔL1, that is, u = d'1 + ΔL1, and calculates the image distance v according to 1/f = 1/u + 1/v. The mobile phone then calculates the code value k of the camera motor from the image distance v. Here, the front depth of field is the one corresponding to d'1, with the corresponding relation:
ΔL1 = F · δ · d'1² / (f² + F · δ · d'1)
Referring to fig. 13F, the distance corresponding to code value k is the target focusing distance, and the mobile phone can focus to the corresponding target focusing position. In this way, the first target face lies within the front depth of field and can be imaged clearly. In addition, because the first target face sits at the edge of the front depth of field nearest the target camera, the number of target faces within the front depth of field is maximal while the first target face remains sharp, the back depth of field covers more target faces, and the number of target faces that can be imaged clearly over the whole depth of field is large.
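A sketch of step (f); the thin-lens relation and the front-depth-of-field expression follow the text above, while the linear code-value mapping is a made-up stand-in for a real motor's calibrated conversion table:

    def step_f_code_value(d1, delta, f, F, codes_per_mm=1000.0):
        """Step (f): put the nearest face d'1 at the near edge of the front
        depth of field. dL1 is evaluated at d'1 (formula above), u = d'1 + dL1,
        v follows from 1/f = 1/u + 1/v, and codes_per_mm (image-distance shift
        to motor code) is a hypothetical, illustrative mapping."""
        dL1 = F * delta * d1**2 / (f**2 + F * delta * d1)   # front DOF at d'1
        u = d1 + dL1                                        # target focusing distance
        v = 1.0 / (1.0 / f - 1.0 / u)                       # image distance
        return round((v - f) * codes_per_mm)                # shift relative to infinity focus

    # Example (millimeters): nearest face at 1 m, f = 4 mm, F = 1.8, delta = 0.005 mm.
    print(step_f_code_value(1000.0, 0.005, 4.0, 1.8))  # -> 12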
In step (b) above, if dmax - dmin > ΔL2, or, in step (c) above, if d's - d'1 > ΔL1 || d'n - d's > ΔL2, the mobile phone may perform step (f) or step (g).
(g) The mobile phone calculates the sums u1, u2, ..., un of the depth information d'1, d'2, ..., d'n corresponding to each target face and the respective front depths of field ΔL1. Taking each of u1, u2, ..., un in turn as the object distance, the mobile phone determines the number of target faces within the front depth of field. If the number of target faces within the front depth of field is largest when the object distance is um, calculated from the depth information d'm, the mobile phone calculates the corresponding image distance vm according to 1/f = 1/um + 1/vm, and then calculates the code value k of the camera motor from vm. Here, the front depth of field added to each d'j is the one corresponding to d'j, that is:
ΔL1 = F · δ · d'j² / (f² + F · δ · d'j)
where j is an integer from 1 to n. Referring to fig. 13G, the distance corresponding to code value k is the target focusing distance, and the mobile phone can focus to the corresponding target focusing position. The m-th target face corresponding to d'm then sits at the edge of the front depth of field nearest the target camera, the number of target faces within the front depth of field is the largest, and consequently the number of target faces within the back depth of field is large and the number of target faces that can be imaged clearly over the whole depth of field is large.
In other embodiments, in step (g) the mobile phone calculates the sums u1, u2, ..., un of the front depth of field ΔL1 and the unsorted depth information d1, d2, ..., dn corresponding to each target face. Taking each of u1, u2, ..., un in turn as the object distance, it determines the number of target faces within the front depth of field. If the number of target faces within the front depth of field is largest when the object distance is um, calculated from the depth information dm, the mobile phone calculates the corresponding image distance vm according to 1/f = 1/um + 1/vm and then calculates the code value k of the camera motor from vm. The distance corresponding to code value k is the target focusing distance, and the mobile phone can focus to the corresponding target focusing position. The target face corresponding to dm then sits at the edge of the front depth of field nearest the target camera, the number of target faces within the front depth of field is the largest, and consequently the number of target faces within the back depth of field is large and the number of target faces that can be imaged clearly over the whole depth of field is large.
In other embodiments, the scheme described in step (f) or step (g) need not be performed only under the conditions shown in step (b) or step (c), and may be performed independently as a complete scheme.
The mobile phone can move the position of one or more groups of lenses in the lens of the target camera or move the position of a sensing element in the target camera through the camera motor, so that the focusing position of the lens is moved to a target focusing position corresponding to the target focusing distance.
In steps (f) and (g), the mobile phone may move the focus position of the lens to a target focus position corresponding to the k code values through the camera motor.
In the steps (d) and (e), the mobile phone can move the focusing position of the lens to the target face corresponding to the target focusing distance through the camera motor.
It should be noted that, because the motor actually adjusts the focusing position in units of code values, when the target focusing distance is not exactly equal to the distance corresponding to some code value, the focusing position can be moved to the position corresponding to the code value closest to the target face corresponding to the target focusing distance. It can be understood that, because the code granularity is fine, the difference between that position and the position of the target face corresponding to the target focusing distance is small.
In some embodiments of the present application, in a preview state, the mobile phone may periodically measure the depth information of the target object, and when the mobile phone detects that the depth information of the target object changes, the mobile phone may calculate the target focusing distance according to the depth information of the target object again, and focus the target camera to a target focusing position corresponding to the real-time target focusing distance.
In some embodiments, in the preview state, when the mobile phone detects that the number of the target objects changes, the target focusing distance may be calculated again according to the depth information of the new target object, and the target camera is focused to the target focusing position corresponding to the real-time target focusing distance.
That is to say, the mobile phone may determine the real-time target focusing distance according to the number of the real-time target objects and the depth information in the preview state, and focus the target camera to the target focusing position corresponding to the real-time target focusing distance.
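A sketch of this preview-state behavior; all four callables are hypothetical stand-ins for the phone's detection, ranging, focus-selection, and motor-drive paths:

    def preview_refocus_loop(camera, detect_targets, measure_depths, choose_focus):
        """Preview-state sketch: whenever the set of target objects or their
        depth information changes, recompute the target focusing distance and
        drive the camera there."""
        last_depths = None
        while camera.previewing:
            frame = camera.get_frame()
            targets = detect_targets(frame)
            depths = sorted(measure_depths(targets))   # d'1 <= ... <= d'n
            if depths != last_depths:                  # count or depths changed
                focus = choose_focus(depths)
                if focus is not None:
                    camera.focus_to(focus)
                last_depths = depths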
It should be noted that, if the mobile phone detects an operation of the user to focus on an object indicated by the preview image, the mobile phone collects an image by focusing on the position where the object indicated by the user is located, instead of determining the target focusing position by using the method provided in the embodiment of the present application.
405. And after focusing to the target focusing position, the mobile phone acquires a preview image, and displays the preview image on a preview interface, wherein the preview image has as many target objects as possible with clear imaging.
As shown in step 404, the target focus position may be such that as many target objects as possible are within the depth of field of the target camera of the mobile phone, thereby enabling as many target objects as possible to be imaged clearly. The mobile phone collects an image after focusing to a target focusing position, generates a preview image and displays the preview image on a preview interface, and the preview image has as many target objects as possible with clear imaging.
For example, when the target focusing position corresponds to the target focusing distance in equation 6, each target face on the preview image is within the range of the front depth of field and the rear depth of field, and clear imaging is possible. When the target focusing position corresponds to the target focusing distance in the formula 7, the imaging of the mth target face on the preview image is clearest; the front m target faces are in the front depth of field range, so that the imaging on the preview image is clear; one or more target faces in m +1 to n on the preview image may also be imaged sharp.
For example, a preview image displayed after the mobile phone focuses on the target focusing position can be seen in (a) of fig. 14. Compared to fig. 5 (b), on the preview image shown in fig. 14 (a) after the focus adjustment, more target faces can be clearly imaged.
It should be noted that, when the target object is a target face, the target face and other parts of the person are usually located at the same depth, and therefore, when the image of the target face is clear, the image of the entire person is also clear. The target object is a human face, and can also be understood as a human.
Moreover, some technologies can accurately identify human eyes and perform corresponding processing on human faces, so that the target object can be a human face, human eyes or the whole human, and the specific content of the target object is not limited in the embodiment of the application.
406. After the mobile phone detects the shooting operation of the user, the mobile phone shoots and obtains a target image under the condition of focusing to a target focusing position, and as many target objects as possible on the target image form clear images.
As shown in step 404, the target focus position may be such that as many target objects as possible are within the depth of field of the target camera of the mobile phone, thereby enabling as many target objects as possible to be imaged clearly. After the mobile phone detects the shooting operation of the user, the mobile phone shoots and obtains a target image under the condition of focusing to a target focusing position, and as many target objects as possible on the target image form clear images.
For example, after detecting an operation of clicking a shooting control by a user, the mobile phone shoots and obtains a target image under the condition that a target camera focuses on a target focusing position. It can be understood that the mobile phone may also execute the shooting operation in response to other touch operations, voice instruction operations, gesture operations, or the like, and the operation manner of triggering the mobile phone to perform the shooting operation is not limited in the embodiment of the present application.
For example, when the target focusing position corresponds to the target focusing distance in equation 6, each target face on the target image obtained by the mobile phone is within the range of the front depth of field and the rear depth of field, and the image can be clearly imaged. When the target focusing position corresponds to the target focusing distance in the formula 7, the imaging of the mth target face on the target image is clearest; the front m target faces are in the front depth of field range, so that the imaging on the preview image is clear; one or more target faces in m +1 to n on the target image may also be imaged sharply.
For example, the mobile phone captures a target image when focusing on the target focusing position (see (b) in fig. 14). Compared to fig. 5 (b), more faces of the target person can be clearly imaged on the target image obtained by shooting after the focus adjustment.
This is illustrated with another example. The preview image after the mobile phone starts the photographing function can be seen in (a) of fig. 15, in which the image of the face 1 is sharp and the images of the faces 2 and 3 are blurred. The mobile phone determines that the target objects comprise a human face 1, a human face 2 and a human face 3. After the mobile phone determines the target focusing distance according to the depth information of the plurality of target objects and focuses to the target focusing position, the displayed preview image can be shown in (b) of fig. 15, wherein the images of the face 1, the face 2 and the face 3 are all clear. After the mobile phone detects that the user clicks the shooting control shown in (b) of fig. 15, the obtained target image is shot, which can be seen in (c) of fig. 15. The images of the face 1, the face 2 and the face 3 on the target image are all clear.
In the process described in the above embodiment, just after the photographing function is started, only the object closest to the target camera of the mobile phone or the object in the middle of the preview image is usually imaged clearly, and other objects are imaged blurrily, similar to the prior art. In some other embodiments of the present application, immediately after the mobile phone starts the photographing function, a preview image in which only the object closest to the target camera of the mobile phone or the middle object is clearly imaged and other objects are blurred is not displayed (for example, a preview image in which all objects are blurred is displayed), and after focusing to the target focusing position according to the depth information of the target object, as many preview images in which the target object is clearly imaged are displayed on the preview interface as possible. In some embodiments, referring to (a) - (c) of fig. 16, the mobile phone may further prompt the user to "adjust focusing to make more faces clearly imaged …" during focusing to the target focusing position according to the depth information of the target object immediately after the photographing function is started, and then display as many preview images as possible of the target object imaged clearly on the preview interface after focusing to the target focusing position. The prompt can be convenient for a user to know that the user is currently focusing and adjusting without jamming, and helps the user to know more clear reasons for face imaging.
In some embodiments of the application, a target image captured by the mobile phone on which multiple target objects are imaged clearly can be presented to the user in a manner that distinguishes it from other images. For example, the target image may carry a text label such as "multiple people imaged clearly" or some other specific mark.
In the shooting method described in steps 401-406 above, the mobile phone may determine multiple target objects to be photographed and perform focusing adjustment according to the depth information of the multiple target objects, so that as many target objects as possible are within the depth of field of the camera of the electronic device; as many target objects as possible can thus be imaged clearly on the captured target image, improving the shooting experience of multi-target group photos.
In the shooting method, the mobile phone calculates the depth of field by combining device parameters of the target camera, and further determines the target focusing distance according to the depth of field, so that the target objects as many as possible are within the depth of field range of the electronic equipment camera after focusing adjustment is carried out according to the target focusing distance. That is, the mobile phone can simultaneously consider the device specification and the focusing algorithm of the target camera, adjust the depth of field range of the shot while adjusting the focusing distance, and achieve the effect of optimizing that the images of the target object on the target image are as clear as possible.
In one prior-art approach, a mobile phone fuses multiple frames focused at the positions of different objects to obtain an image in which multiple objects are all imaged clearly. That scheme needs to push the motor multiple times, has a long frame-output time, and easily causes motion blur; multi-frame fusion also tends to produce ghosting. Moreover, the multi-frame fusion algorithm is complex and easily conflicts with other multi-frame algorithms (such as a high-dynamic-range algorithm). With the method provided by the embodiment of the application, only one frame of the target image is captured after the motor is pushed to the target focusing position, so these problems do not arise.
In addition, the focus adjustment method described in the above shooting method may also be used in a video recording process. When a video is recorded, because the depth information of a plurality of target objects may be changed, the mobile phone can determine the target focusing distance and the target focusing position in real time by adopting the method, so that an image is acquired and an image in the video is generated after the real-time target focusing position is focused.
The shooting method provided by the embodiment of the application is explained above by taking the electronic device as a mobile phone as an example. When the electronic device is a tablet computer or a watch or other devices, the method can still be used for shooting to obtain images of a plurality of target objects which can be clearly imaged, and details are not repeated here.
It should be noted that, in the above embodiments, the definitions of the depth of field, the foreground depth and the back depth of field may be a currently accepted defining method in the industry, and objects seen by human eyes within the depth of field, the foreground depth and the back depth of field are clear. The embodiments of the present application may also have a stricter or broader definition than the currently accepted definition method in the industry, and the specific definition manner is not limited in the embodiments of the present application.
For example, as shown in the above equations 1-3, the ranges of the depth of field, the front depth of field, and the back depth of field are related to the allowable circle of confusion δ, and the embodiments of the present application may define the ranges of the depth of field, the front depth of field, and the back depth of field more strictly or more broadly by defining different δ. The circle of confusion is that when an object point is imaged, due to aberration, an imaging light beam cannot converge at one point, and a diffused circular projection is formed on an image plane to be a circle of confusion. When the point light source is imaged on the image plane through the lens, if the distance between the lens and the image plane is kept unchanged at the moment, the point light source is moved back and forth along the optical axis direction, and the image imaged on the image plane becomes a circle with a certain diameter, namely a diffusion circle. When a point seen by the human eye corresponds to the diameter of a certain circle of confusion, the human eye considers the point to be clear, and the size limit of the circle of confusion considered to be clear by the human eye is defined as the allowable circle of confusion diameter delta. The allowable circle diameter δ is defined differently, and the ranges of the depth of field, the front depth of field, and the rear depth of field are also different.
It will be appreciated that, in order to implement the above-described functions, the electronic device comprises corresponding hardware and/or software modules for performing the respective functions. The present application is capable of being implemented in hardware or a combination of hardware and computer software in conjunction with the exemplary algorithm steps described in connection with the embodiments disclosed herein. Whether a function is performed as hardware or as computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In this embodiment, the electronic device may be divided into functional modules according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware. It should be noted that the division of the modules in this embodiment is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Embodiments of the present application also provide an electronic device including one or more processors and one or more memories. The one or more memories are coupled to the one or more processors for storing computer program code comprising computer instructions which, when executed by the one or more processors, cause the electronic device to perform the associated method steps described above to implement the photographing method in the above embodiments.
Embodiments of the present application further provide a computer-readable storage medium, in which computer instructions are stored, and when the computer instructions are run on an electronic device, the electronic device is caused to execute the above related method steps to implement the shooting method in the above embodiments.
Embodiments of the present application further provide a computer program product, which when running on a computer, causes the computer to execute the above related steps to implement the shooting method performed by the electronic device in the above embodiments.
In addition, embodiments of the present application also provide an apparatus, which may be specifically a chip, a component or a module, and may include a processor and a memory connected to each other; the memory is used for storing computer execution instructions, and when the device runs, the processor can execute the computer execution instructions stored in the memory, so that the chip can execute the shooting method executed by the electronic equipment in the above-mentioned method embodiments.
The electronic device, the computer-readable storage medium, the computer program product, or the chip provided in this embodiment are all configured to execute the corresponding method provided above, so that the beneficial effects achieved by the electronic device, the computer-readable storage medium, the computer program product, or the chip may refer to the beneficial effects in the corresponding method provided above, and are not described herein again.
Through the description of the above embodiments, those skilled in the art will understand that, for convenience and simplicity of description, only the division of the above functional modules is used as an example, and in practical applications, the above function distribution may be completed by different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical functional division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, that is, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially or partially contributed to by the prior art, or all or part of the technical solutions may be embodied in the form of a software product, where the software product is stored in a storage medium and includes several instructions to enable a device (which may be a single chip, a chip, or the like) or a processor (processor) to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A shooting method is applied to electronic equipment, the electronic equipment comprises a camera, and the method is characterized by comprising the following steps:
after a camera application is opened, displaying a first preview image on a photographing preview interface, wherein the first preview image comprises n target faces, n is an integer larger than 1, the n target faces comprise a first target face and a second target face, the first target face image is clear, and the second target face image is fuzzy;
determining target focusing positions according to the depth information of the n target faces;
and after the camera focuses to the target focusing position, displaying a second preview image on a preview interface, wherein the second preview image comprises the n target faces, and the images of the first target face and the second target face on the second preview image are clear.
2. The method of claim 1, further comprising:
after the shooting operation of the user is detected, shooting to obtain a target image, wherein the target image comprises the n target faces, and the images of the first target face and the second target face on the target image are clear.
3. The method according to claim 1 or 2, wherein the face frames of the n target faces are displayed on the first preview image, and the area of the face frame of the target face is greater than or equal to a first preset value.
4. The method according to claim 1 or 2, wherein the n target faces are faces specified by a user based on the first preview image.
5. The method of any one of claims 1-4, wherein the user is prompted during focusing of the camera to the target focus position that focus adjustment is being performed to sharpen more human faces.
6. The method according to any one of claims 1 to 5, wherein the determining the target focusing position according to the depth information of the n target faces comprises:
and after the preset operation of the user is responded and the group photo mode is entered, determining the target focusing position according to the depth information of the n target faces.
7. The method according to any one of claims 1 to 6, wherein the sequence obtained by sorting the depth information of the n target faces in ascending order is d'1, d'2, ..., d'n, and the determining a target focusing position according to the depth information of the n target faces comprises:
moving the focusing distance backwards through the sequence, starting from d'1 and one target face at a time, until the m-th target face is found that satisfies formula I:
m = min m  s.t.  d'm - d'1 ≤ ΔL1 && d'n - d'm ≤ ΔL2   (formula I)
wherein m ≤ n, s.t. denotes "satisfying", && denotes "and", min denotes the minimum value, ΔL1 denotes the front depth of field of the camera, ΔL2 denotes the back depth of field of the camera, and the position of the m-th target face corresponding to d'm is the target focusing position;
if there is no d'm satisfying formula I, moving the focusing distance backwards through the sequence, starting from d'1 and one target face at a time, until the m-th target face is found that satisfies formula II:
m = min m  s.t.  d'm - d'1 ≤ ΔL1   (formula II)
wherein m ≤ n, and the position of the m-th target face corresponding to d'm is the target focusing position.
8. The method of claim 7, wherein the determining the target focusing position according to the depth information of the n target faces comprises:
when a first preset condition is met, determining a target focusing position according to the depth information of the n target faces; the first preset condition includes:
the first target face on the first preview image is the first target face closest to the camera, the focusing position corresponding to the first preview image is located at the position of the first target face, and d'n - d'1 > ΔL2; or,
the focusing position corresponding to the first preview image is located at the position of the s-th target face corresponding to d's, wherein s ≤ n and d's - d'1 > ΔL1 || d'n - d's > ΔL2, wherein || denotes "or".
9. The method according to any one of claims 1 to 6, wherein the determining the target focusing position according to the depth information of the n target faces comprises:
counting, when the focusing distance is set in turn to the depth information corresponding to each of the n target faces, the number of target faces whose depth information is within the depth of field;
if the number of target faces whose depth information is within the depth of field is largest when the focusing distance is dm, the position of the target face corresponding to dm is the target focusing position.
10. The method according to any one of claims 1 to 6, wherein the determining the target focusing position according to the depth information of the n target faces comprises:
calculating the sum u of the depth information d'1 corresponding to the first target face closest to the camera and the front depth of field;
calculating an image distance v according to 1/f = 1/u + 1/v, wherein f denotes the focal length of the camera;
and calculating a drive code value k of a camera motor according to the image distance v, wherein the position corresponding to code value k is the target focusing position.
11. The method according to any one of claims 1 to 6, wherein the determining the target focusing position according to the depth information of the n target faces comprises:
calculating the sums u1, u2, ..., un of the depth information d1, d2, ..., dn corresponding to each of the n target faces and the respective front depths of field;
taking each of u1, u2, ..., un as the object distance, determining the number of target faces within the front depth of field;
if the number of target faces within the front depth of field is largest when the object distance is um, calculated from the depth information dm, calculating the corresponding image distance vm according to 1/f = 1/um + 1/vm, wherein f denotes the focal length of the camera;
and calculating a drive code value k of a camera motor according to the image distance vm, wherein the position corresponding to code value k is the target focusing position.
12. An electronic device, comprising:
the camera is used for collecting images;
a screen for displaying an interface;
one or more processors;
a memory;
and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions which, when executed by the electronic device, cause the electronic device to perform the photographing method of any of claims 1-11.
13. An electronic device, comprising:
one or more processors;
a memory;
and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions which, when executed by the electronic device, cause the electronic device to perform the photographing method according to any one of claims 1 to 11.
14. A computer-readable storage medium, comprising computer instructions which, when run on a computer, cause the computer to perform the photographing method according to any one of claims 1 to 11.
15. A computer program product which, when run on a computer, causes the computer to carry out the shooting method according to any one of claims 1 to 11.
CN202010815068.0A 2020-08-13 2020-08-13 Shooting method and equipment Active CN114079726B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010815068.0A CN114079726B (en) 2020-08-13 2020-08-13 Shooting method and equipment

Publications (2)

Publication Number Publication Date
CN114079726A (en) 2022-02-22
CN114079726B (en) 2023-05-02

Family

ID=80280480

Country Status (1)

Country Link
CN (1) CN114079726B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1713016A * 2004-06-15 2005-12-28 Canon Inc. Image-taking apparatus
JP2007214845A * 2006-02-09 2007-08-23 Casio Computer Co., Ltd. Electronic camera, multiple-point simultaneous focus frame displaying method and program
CN101149462A * 2006-09-22 2008-03-26 Sony Corporation Imaging apparatus, control method of imaging apparatus, and computer program
US20110292276A1 * 2010-05-28 2011-12-01 Sony Corporation Imaging apparatus, imaging system, control method of imaging apparatus, and program
CN102338972A * 2010-07-21 2012-02-01 Altek Corporation Assistant focusing method using multiple face blocks
CN102377945A * 2010-08-20 2012-03-14 Sanyo Electric Co., Ltd. Image pickup apparatus
US20170374269A1 * 2014-12-30 2017-12-28 Nokia Corporation Improving focus in image and video capture using depth maps
CN108076268A * 2016-11-15 2018-05-25 Google LLC Devices, systems and methods for providing auto-focus capability based on object distance information
CN109561255A * 2018-12-20 2019-04-02 Huizhou TCL Mobile Communication Co., Ltd. Terminal photographing method, device and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114666497A * 2022-02-28 2022-06-24 Qingdao Hisense Mobile Communication Technology Co., Ltd. Imaging method, terminal device, storage medium, and program product
CN114666497B * 2022-02-28 2024-03-15 Qingdao Hisense Mobile Communication Technology Co., Ltd. Imaging method, terminal device and storage medium
CN116074624A * 2022-07-22 2023-05-05 Honor Device Co., Ltd. Focusing method and device
CN116074624B * 2022-07-22 2023-11-10 Honor Device Co., Ltd. Focusing method and device

Also Published As

Publication number Publication date
CN114079726B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
EP4044580B1 (en) Capturing method and electronic device
CN114205522B (en) Method for long-focus shooting and electronic equipment
CN113810587B (en) Image processing method and device
CN113747050B (en) Shooting method and equipment
KR20170000773A (en) Apparatus and method for processing image
RU2628494C1 (en) Method and device for generating image filter
CN113592887A (en) Video shooting method, electronic device and computer-readable storage medium
CN113709355B (en) Sliding zoom shooting method and electronic equipment
CN115209057B (en) Shooting focusing method and related electronic equipment
CN113938602B (en) Image processing method, electronic device, chip and readable storage medium
CN114079726B (en) Shooting method and equipment
KR20180133897A (en) Method, apparatus, program and recording medium
CN115689963A (en) Image processing method and electronic equipment
CN114926351A (en) Image processing method, electronic device, and computer storage medium
WO2022057384A1 (en) Photographing method and device
CN117692771A (en) Focusing method and related device
JP2008278228A (en) Digital camera for visualizing photographing intent
CN116723383B (en) Shooting method and related equipment
CN114125148B (en) Control method of electronic equipment operation mode, electronic equipment and readable storage medium
CN115484387B (en) Prompting method and electronic equipment
CN114697530B (en) Photographing method and device for intelligent view finding recommendation
CN115442509A (en) Shooting method, user interface and electronic equipment
CN115225756A (en) Method for determining target object, shooting method and device
CN115170441B (en) Image processing method and electronic equipment
CN116055871B (en) Video processing method and related equipment thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant