CN112887606B - Shooting method and device and electronic equipment - Google Patents

Shooting method and device and electronic equipment

Info

Publication number: CN112887606B
Authority: CN (China)
Prior art keywords: depth, field, image, determining, value
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202110106773.8A
Other languages: Chinese (zh)
Other versions: CN112887606A (en)
Inventors: 彭金平, 陈相谕
Current Assignee: Vivo Mobile Communication Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Vivo Mobile Communication Co Ltd
Priority date: 2021-01-26
Filing date: 2021-01-26
Publication date: 2023-04-07
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202110106773.8A
Publication of CN112887606A
Application granted
Publication of CN112887606B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/62: Control of parameters via user interfaces
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632: Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H04N23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N23/958: Computational photography systems for extended depth of field imaging
    • H04N23/959: Computational photography systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a shooting method, belonging to the field of communication technology. The method comprises the following steps: receiving a depth of field selection operation of a user; determining a first depth of field range according to the depth of field selection operation; determining, according to the first depth of field range, the target pixels in the image sensor that correspond to a first image; and, under the condition that a shooting instruction is received, exposing the target pixels in response to the shooting instruction to generate a target image. Because the first depth of field range is determined according to the received depth of field selection operation and only the target pixels in the image sensor that correspond to the first depth of field range are exposed, information redundancy in the generated target image is reduced and the target image occupies less storage space.

Description

Shooting method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to a shooting method, a shooting device and electronic equipment.
Background
With the popularization of mobile phones and the improvement of mobile phone shooting technology, more and more users take pictures with their phones. The current shooting method is panoramic imaging: all image information in the image to be shot that is captured by the lens is exposed and imaged. However, in some scenarios the user is interested in only part of that image information; for example, at the time of capturing, the user cares only about the shooting subject and not about the shooting background.
With the existing shooting method, the image obtained through a single exposure contains image information for both the shooting subject and the shooting background, so the captured image carries redundant information, and because panoramic imaging involves more pixels, the resulting image occupies more storage space.
Summary of the application
An object of the embodiments of the present application is to provide a shooting method that can solve the problems that images captured by existing shooting methods contain redundant information and occupy a large amount of storage space.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a shooting method, including:
receiving a depth of field selection operation of a user;
determining a first depth of field range according to the depth of field selection operation;
determining a target pixel corresponding to a first image in the image sensor according to the first depth of field range;
and under the condition that a shooting instruction is received, responding to the shooting instruction, exposing the target pixel, and generating a target image.
In a second aspect, an embodiment of the present application provides a shooting device, including:
the field depth selection operation receiving module is used for receiving the field depth selection operation of a user;
the first depth-of-field range determining module is used for determining a first depth-of-field range according to the depth-of-field selecting operation;
the target pixel determining module is used for determining a target pixel corresponding to a first image in the image sensor according to the first depth of field range;
and the target image generation module is used for responding to the shooting instruction and exposing the target pixel to generate a target image under the condition of receiving the shooting instruction.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the application, the first depth of field range is determined according to the received depth of field selection operation, and only the target pixels in the image sensor that correspond to the first depth of field range are exposed; this reduces information redundancy in the generated target image and the storage space the target image occupies.
Drawings
Fig. 1 is a flowchart illustrating specific steps of a shooting method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a preset operation interface of a shooting device according to an embodiment of the present application;
fig. 3 is a structural diagram of a shooting device according to a second embodiment of the present application;
fig. 4 is a structural diagram of an electronic device according to a third embodiment of the present application;
fig. 5 is a schematic diagram of a hardware structure of an electronic device according to a third embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It is to be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the objects before and after it are in an "or" relationship.
The following describes the shooting method and the shooting apparatus provided in the embodiments of the present application in detail through specific embodiments and application scenarios thereof with reference to the accompanying drawings.
Example one
Referring to fig. 1, specific steps of a shooting method provided in a first embodiment of the present application are shown.
Step 101, receiving a depth of field selection operation of a user.
In the embodiment of the application, when a user shoots an image through the shooting terminal, the user can perform a depth of field selection operation on the preview image according to the actual shooting requirement, thereby determining a first depth of field range for the preview image.
It should be noted that the user may perform the depth of field selection operation after the second depth of field range of the preview image has been determined, using the second depth of field range as a reference. The first depth of field range may also be determined directly through the depth of field selection operation: for example, when capturing an image by following a shooting tutorial, the user may directly select the corresponding first depth of field range, or the user may set the first depth of field range directly based on shooting experience, and so on.
The first depth of field range is the depth of field range of the target image that the user ultimately expects to generate, and the second depth of field range is the actual depth of field range of the preview image.
Step 102, determining a first depth of field range according to the depth of field selection operation.
The shooting terminal determines a first depth of field range according to the received depth of field selection operation.
Step 103, determining a target pixel corresponding to the first image in the image sensor according to the first depth of field range.
The first image may be an image to be shot, or may be a depth image corresponding to the image to be shot.
The target pixel is an exposure unit in the image sensor. The image sensor comprises a plurality of pixels; a pixel is the minimum exposure unit in the image sensor, and each pixel corresponds to a depth of field value. All pixels of the image sensor whose depth of field values match the first depth of field range are target pixels, and exposing the target pixels yields a target image whose depth of field range is the first depth of field range.
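For illustration, the following is a minimal sketch (not from the patent; array names, shapes, and units are assumptions) of how target pixels could be selected once a per-pixel depth value is available:

```python
import numpy as np

# Assumed inputs: per-pixel depth of field values aligned with the sensor
# layout (here in metres) and a user-selected first depth of field range.
pixel_depth = np.array([[0.8, 1.6, 2.4],
                        [1.9, 3.1, 5.0]])
first_range = (1.5, 3.0)

# Target pixels are exactly the exposure units whose depth of field value
# falls inside the first depth of field range; only these would be exposed.
target_mask = (pixel_depth >= first_range[0]) & (pixel_depth <= first_range[1])
print(target_mask)
# [[False  True  True]
#  [ True False False]]
```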
And 104, under the condition that a shooting instruction is received, responding to the shooting instruction, exposing the target pixel, and generating a target image.
After receiving a shooting instruction, the shooting device exposes only the determined target pixels in response to that instruction. Because the target pixels match the first depth of field range, the generated target image contains only pixel points whose depth of field values belong to the first depth of field range, which reduces information redundancy in the generated target image and the storage space the target image occupies.
In an optional embodiment of the present application, before the receiving of the depth of field selection operation of the user in step 101, the method further includes:
s11, obtaining a depth value corresponding to each pixel point in the first image, and determining a second depth range of the first image.
Step 103, determining a target pixel corresponding to the first image in the image sensor according to the first depth of field range includes:
and S12, matching the second depth of field range of the first image with the first depth of field range, and determining a first pixel point in the first image.
And S13, determining a pixel corresponding to the first pixel point in the image sensor as a target pixel.
In the embodiment of the application, before the depth of field selection operation of the user is received, the depth of field value corresponding to each pixel point in the first image may also be obtained, where each pixel point corresponds to one depth of field value. After the depth of field values of the pixel points of the first image are obtained, the second depth of field range of the first image is determined from them: specifically, the maximum and minimum depth of field values among all pixel points can be determined, and the second depth of field range of the first image is defined by this maximum and minimum, as sketched below.
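A minimal sketch of this step, assuming the per-pixel depth of field values are already available as an array (names are illustrative):

```python
import numpy as np

# Hypothetical depth of field values for each pixel point of the first image.
depth_values = np.array([[1.2, 1.5],
                         [2.8, 4.0]])

# The second depth of field range spans the minimum to the maximum value.
second_range = (float(depth_values.min()), float(depth_values.max()))  # (1.2, 4.0)
```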
The second depth of field range is the maximum depth of field range of the first image; exposing the first image within the second depth of field range yields a sharp image. The second depth of field range of the first image may be acquired using TOF (time-of-flight) ranging. Specifically, after focusing on the shooting subject in the first image, either according to a received manual focusing operation or by automatic focusing, the shooting device acquires parameters of the first image such as the diameter of the permissible circle of confusion, the lens focal length, the shooting aperture of the lens, and the focusing distance, and then calculates the depth of field of the first image from these parameters. Of course, other distance measuring means may also be used to obtain the second depth of field range of the first image; the embodiment of the present application is not particularly limited in this respect.
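The patent names the parameters but not the formula; one conventional way to compute the depth of field limits from them is the textbook thin-lens approximation sketched below (function and variable names are assumptions, not from the patent):

```python
def depth_of_field(focal_mm, f_number, coc_mm, focus_dist_mm):
    """Thin-lens depth of field from the permissible circle of confusion
    diameter, the lens focal length, the shooting aperture (f-number),
    and the focusing distance; all distances in millimetres."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = focus_dist_mm * (hyperfocal - focal_mm) / (hyperfocal + focus_dist_mm - 2 * focal_mm)
    if focus_dist_mm >= hyperfocal:
        far = float("inf")  # beyond the hyperfocal distance, the far limit is infinite
    else:
        far = focus_dist_mm * (hyperfocal - focal_mm) / (hyperfocal - focus_dist_mm)
    return near, far

# Example: a 26 mm lens at f/1.8, 0.01 mm circle of confusion, focused at 2 m.
near, far = depth_of_field(26, 1.8, 0.01, 2000)
print(round(near), round(far))  # about 1900 mm and 2111 mm
```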
After the second depth of field range of the first image is determined, it is matched against the first depth of field range determined from the user's depth of field selection operation, and the intersection of the first depth of field range and the second depth of field range is taken as the target depth of field range. For example, assume the first depth of field range is [a1, a2] and the second depth of field range is [b1, b2]: if a1 < b1 < b2 < a2, the target depth of field range is [b1, b2]; if a1 < b1 < a2 < b2, the target depth of field range is [b1, a2]. The first pixel points in the first image are the pixel points whose depth of field values belong to the target depth of field range.
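The matching described above is a plain interval intersection; a minimal sketch (illustrative names):

```python
def target_range(first, second):
    """Intersection of the first and second depth of field ranges;
    returns None when the two ranges do not overlap."""
    lo = max(first[0], second[0])
    hi = min(first[1], second[1])
    return (lo, hi) if lo <= hi else None

# The two cases from the text, with a1=1, a2=5 (then a2=3), b1=2, b2=4:
print(target_range((1, 5), (2, 4)))  # a1<b1<b2<a2 -> (2, 4), i.e. [b1, b2]
print(target_range((1, 3), (2, 4)))  # a1<b1<a2<b2 -> (2, 3), i.e. [b1, a2]
```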
After the first pixel points in the first image are determined, the target pixels in the image sensor are determined from the first pixel points, such that the depth of field range corresponding to the target pixels matches that of the first pixel points. Each pixel in the image sensor corresponds to a depth of field value; exposing a pixel yields the pixel points whose depth of field values match that pixel. Exposing the target pixels therefore yields second pixel points whose depth of field values match those of the first pixel points, and these second pixel points constitute the target image.
In an optional embodiment of the present application, after the obtaining of the depth of field value corresponding to each pixel point in the first image in step S11, the method further includes:
step S21, marking each pixel point according to the depth value of each pixel point in the first image to obtain a depth information image of the first image, wherein the pixel points with the same depth value in the depth information image are marked with the same identification information.
And S22, displaying the depth information map.
Step 101, receiving a depth of field selection operation of a user, including:
and step S23, receiving the depth of field selection operation of the depth of field information map.
The step 102 of determining a first depth of field range according to the depth of field selection operation includes:
and step S24, determining a first depth of field range according to the depth of field selection operation of the depth of field information map.
In this embodiment of the application, after the depth of field value corresponding to each pixel point in the first image is obtained, each pixel point may further be marked according to its depth of field value. Specifically, each pixel point corresponds to one depth of field value, while one depth of field value generally corresponds to a plurality of pixel points; pixel points with the same depth of field value, or whose depth of field values fall within the same depth of field sub-range, may share the same mark. After each pixel point in the first image is marked, a depth of field information map of the first image is generated from the marking result and displayed to the user. Optionally, the depth of field information map may be displayed as a three-dimensional image in which one depth of field value corresponds to one depth plane and all pixel points in a depth plane carry the same mark. The pixel points in the depth plane of a given depth of field value can be understood as an image slice of the first image: the image slices corresponding to all depth of field values in the depth of field range together constitute the first image, and slices at different depth of field values carry different detail features of the first image. The user can therefore determine the first depth of field range to select from the detail features shown by the image slices in the depth of field information map. Accordingly, a depth of field selection button can be provided on the display interface of the depth of field information map, allowing the user to determine the first depth of field range in real time while viewing the map; a marking sketch follows.
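A minimal sketch of such marking, under the assumption that depth of field values are quantised into a fixed number of depth planes (the plane count and all names are illustrative):

```python
import numpy as np

def depth_info_labels(depth_values, num_planes=16):
    """Give pixel points whose depth of field values fall in the same
    depth plane the same identification label; each label corresponds
    to one "image slice" of the first image."""
    lo, hi = depth_values.min(), depth_values.max()
    # Quantise the continuous depth values into num_planes depth planes.
    labels = np.floor((depth_values - lo) / (hi - lo + 1e-9) * num_planes)
    return labels.astype(int)
```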
In an optional embodiment of the present application, after the obtaining, in step S11, a depth-of-field value corresponding to each pixel point in the first image, and determining a second depth-of-field range of the first image, the method further includes:
step S31, displaying a depth-of-field axis corresponding to the first depth-of-field range, where the depth-of-field axis includes depth values from a minimum depth value to a maximum depth value of the first depth-of-field range.
Step 101, receiving a depth of field selection operation of a user, including:
and step S32, receiving a first depth of field value and a second depth of field value selected by the user in the depth of field axis.
The step 102 of determining a first depth of field range according to the depth of field selection operation includes:
and step S33, determining a first depth of field range according to the first depth of field value and the second depth of field value.
In the embodiment of the application, after the second depth of field range of the first image is acquired, a depth of field axis corresponding to the second depth of field range may be displayed on a preset operation interface so that the user can select the first depth of field range. Specifically, each depth value from the minimum to the maximum of the second depth of field range is displayed on the depth of field axis, and the user may select any two depth values on it through a dragging operation. For example, a depth of field culling box is displayed on the axis, and both its left and right frames can be dragged; the depth value under the left frame is recorded as the first depth value and the depth value under the right frame as the second depth value, so that the two values selected by the user can be determined from the dragging of the culling box. After the first and second depth values selected by the user are determined, the first depth of field range can be determined. Continuing with the culling-box example: depth values on the axis are ordinarily arranged from left to right in increasing order, so of the two values determined from the user's dragging operation, the first depth value is smaller than the second. The first depth value is taken as the minimum of the first depth of field range and the second depth value as its maximum, and the first depth of field range is thus determined, as sketched below.
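A minimal sketch of mapping the two culling-box handles to a first depth of field range (handle positions and names are assumptions):

```python
def range_from_culling_box(left_handle, right_handle, second_range):
    """Turn the dragged handle positions on the depth of field axis into
    a first depth of field range clamped to the axis (the second range)."""
    lo, hi = sorted((left_handle, right_handle))  # smaller value becomes the minimum
    return (max(lo, second_range[0]), min(hi, second_range[1]))

print(range_from_culling_box(2.5, 1.0, (0.5, 4.0)))  # (1.0, 2.5)
```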
Two depth of field value adjusting buttons may also be provided on the preset operation interface. Referring to fig. 2, a schematic view of the preset operation interface of the shooting device according to the embodiment of the present application is shown; in the preset operation interface, two depth of field value adjusting buttons are provided, labelled "depth of field 1" and "depth of field 2". The minimum of depth of field 1 and depth of field 2 may be set to the minimum value of the second depth of field range of the first image, and their maximum to the maximum value of that range. Both buttons are initialized: the initial value of depth of field 1 is the minimum value of the second depth of field range, and the initial value of depth of field 2 is its maximum value. When the user selects the "selectable depth of field shooting" mode, the values of depth of field 1 and depth of field 2 can be adjusted in the preview operation interface; the first depth value is determined from the value of depth of field 1 finally set by the user, the second depth value from the value of depth of field 2, and the first depth of field range from the first and second depth values.
It should be noted that, in the embodiment of the present application, the first depth value and the second depth value may also be depth interval values: the user determines one or more depth value intervals through the depth of field selection operation, and the first depth of field range is determined from the determined interval or intervals. That is to say, the first depth of field range may be one continuous interval or may be composed of multiple sub-intervals; the specific first depth of field range is determined by the user's depth of field selection operation, as long as the shooting requirement of the user can be met (see the sketch below).
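When the first depth of field range is composed of several sub-intervals, membership can simply be tested against each sub-interval; a minimal sketch (illustrative):

```python
def in_first_range(depth_value, sub_intervals):
    """First depth of field range given as sub-intervals, e.g.
    [(0.5, 1.0), (2.0, 3.5)]; a depth of field value matches if it
    falls inside any of the sub-intervals."""
    return any(lo <= depth_value <= hi for lo, hi in sub_intervals)

print(in_first_range(2.7, [(0.5, 1.0), (2.0, 3.5)]))  # True
```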
In an optional embodiment of the present application, after the second depth of field range of the first image is matched with the first depth of field range and the first pixel points in the first image are determined, and before the shooting instruction is received, the method further includes: generating a preview image according to the first pixel points; and displaying the preview image.
The essence of an image is the combination of pixel points corresponding to different depth of field values. Therefore, in the embodiment of the present application, after the first pixel points in the first image are determined according to the first depth of field range and the second depth of field range, and before the shooting instruction is received, a preview image can be generated from the first pixel points. As shown in fig. 2, while the user adjusts the values of depth of field 1 and depth of field 2, the first depth of field range and the first pixel points of the first image may be determined in real time, a preview image generated from the first pixel points, and the generated preview image displayed in the image display area of the preset operation interface.
In the embodiment of the present application, the preview image contains only the first pixel points. Specifically, the pixel points of the first image other than the first pixel points can be deleted to obtain the preview image, which is then displayed, as sketched below. In this way, after determining the first depth of field range, the user can check in real time, through the preview image, the image effect of the target image to be generated. If the user is not satisfied with the image effect of the current first depth of field range, the range can be readjusted before a shooting instruction is issued to the shooting device. This avoids having to readjust the first depth of field range and shoot again after discovering, once shooting has finished, that the result does not meet expectations, and thus improves image shooting efficiency.
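A minimal sketch of such a preview, keeping only the first pixel points (blanking deleted pixel points to zero is an assumption for illustration):

```python
import numpy as np

def preview_image(image, pixel_depth, first_range):
    """Build a preview that contains only the first pixel points, i.e.
    those whose depth of field values belong to the first depth of
    field range; all other pixel points are blanked."""
    mask = (pixel_depth >= first_range[0]) & (pixel_depth <= first_range[1])
    out = np.zeros_like(image)
    out[mask] = image[mask]
    return out
```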
In summary, in the embodiment of the present application, the first depth of field range is determined according to the received depth of field selection operation, and only the target pixels in the image sensor corresponding to the first depth of field range are exposed, which reduces information redundancy in the generated target image and the storage space the target image occupies.
In the shooting method provided by the embodiment of the present application, the execution subject may be a shooting device, or a control module in the shooting device for executing the shooting method. In the embodiment of the present application, a shooting method executed by a shooting device is taken as an example, and the shooting method provided in the embodiment of the present application is described.
Example two
Referring to fig. 3, a structural diagram of a shooting device according to the second embodiment of the present application is shown; the device specifically includes:
the depth of field selection operation receiving module 201 is configured to receive a depth of field selection operation of a user.
The first depth of field range determining module 202 is configured to determine a first depth of field range according to the depth of field selecting operation.
And the target pixel determining module 203 is configured to determine a target pixel corresponding to the first image in the image sensor according to the first depth of field range.
And the target image generation module 204 is configured to, in a case that a shooting instruction is received, expose the target pixel in response to the shooting instruction, and generate a target image.
In an optional embodiment of the present application, the apparatus further comprises:
the second depth-of-field range determining module is used for acquiring a depth-of-field value corresponding to each pixel point in the first image and determining a second depth-of-field range of the first image;
the target pixel determination module 203 includes:
a first pixel point determining submodule, configured to match a second depth-of-field range of the first image with the first depth-of-field range, and determine a first pixel point in the first image;
and the target pixel determining submodule is used for determining the pixel corresponding to the first pixel point in the image sensor as a target pixel.
In an optional embodiment of the present application, the apparatus further comprises:
the depth-of-field information map determining module is used for marking each pixel point according to the depth-of-field value of each pixel point in the first image to obtain a depth-of-field information map of the first image, wherein the pixel points with the same depth-of-field value in the depth-of-field information map are marked with the same identification information;
the depth information map display module is used for displaying the depth information map;
the depth of field selection operation receiving module 201 includes:
the depth of field selection operation receiving submodule is used for receiving the depth of field selection operation of a user;
the first depth of field range determination module 202 includes:
and the first depth-of-field range generation submodule is used for determining a first depth-of-field range according to the depth-of-field selection operation of the depth-of-field information map.
In an optional embodiment of the present application, the apparatus further comprises:
the depth of field axis display module is used for displaying a depth of field axis corresponding to the second depth of field range, wherein the depth of field axis comprises depth of field values from the minimum depth of field value to the maximum depth of field value of the second depth of field range;
the depth of field selection operation receiving module 201 includes:
the depth value selection receiving submodule is used for receiving a first depth value and a second depth value selected by a user in the depth axis;
the first depth of field range determination module 202 includes:
and the second depth range determining submodule is used for determining the first depth range according to the first depth value and the second depth value.
In an optional embodiment of the present application, the apparatus further comprises:
the preview image generating module is used for generating a preview image according to the first pixel point;
and the preview image display module is used for displaying the preview image.
The shooting device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine; the embodiment of the present application is not particularly limited.
The shooting device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiment of the present application is not specifically limited.
The shooting device provided in the embodiment of the present application can implement each process implemented by the shooting device in the method embodiment of fig. 1, and is not described here again in order to avoid repetition.
In summary, in the embodiment of the present application, the first depth of field range is determined according to the received depth of field selection operation, and only the target pixels in the image sensor corresponding to the first depth of field range are exposed, which reduces information redundancy in the generated target image and the storage space the target image occupies.
Example three
Optionally, as shown in fig. 4, an embodiment of the present application further provides an electronic device 400, including a processor 401, a memory 402, and a program or instructions stored in the memory 402 and executable on the processor 401. When executed by the processor 401, the program or instructions implement each process of the foregoing shooting method embodiment and achieve the same technical effect, which is not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 5 is a schematic diagram of a hardware structure of an electronic device implementing the embodiment of the present application.
The electronic device 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and the like.
Those skilled in the art will appreciate that the electronic device 500 may further include a power supply (e.g., a battery) for supplying power to the various components; the power supply may be logically connected to the processor 510 via a power management system so as to manage charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 5 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine some components, or arrange components differently, which is not described further here.
Example four
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.
Example five
The embodiment of the present application further provides a chip including a processor and a communication interface coupled to the processor; the processor is configured to execute a program or instructions to implement each process of the foregoing shooting method embodiment and achieve the same technical effect, which is not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a system chip, a chip system, or a system-on-a-chip, etc.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order, depending on the functions involved; e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the present embodiments are not limited to those precise embodiments, which are intended to be illustrative rather than restrictive, and that various changes and modifications may be effected therein by one skilled in the art without departing from the scope of the appended claims.

Claims (8)

1. A shooting method, characterized in that the method comprises:
acquiring a depth value corresponding to each pixel point in a first image, and determining a second depth range of the first image;
receiving a depth of field selection operation of a user;
determining a first depth of field range according to the depth of field selection operation;
determining a target pixel corresponding to a first image in the image sensor according to the first depth of field range;
under the condition that a shooting instruction is received, responding to the shooting instruction, exposing the target pixel, and generating a target image; wherein pixels other than the target pixel are not exposed during exposure;
wherein, the determining the target pixel corresponding to the first image in the image sensor according to the first depth of field range includes:
determining pixel points in the intersection of the second depth-of-field range and the first depth-of-field range of the first image as first pixel points in the first image;
and determining a pixel corresponding to the first pixel point in the image sensor as a target pixel.
2. The method according to claim 1, wherein after the obtaining of the depth value corresponding to each pixel point in the first image, the method further comprises:
marking each pixel point according to the depth of field value of each pixel point in the first image to obtain a depth of field information image of the first image, wherein the pixel points with the same depth of field value in the depth of field information image are marked with the same identification information;
displaying the depth information map;
the receiving of the depth of field selection operation of the user comprises:
receiving the depth of field selection operation of the user on the depth of field information map;
the determining a first depth of field range according to the depth of field selection operation includes:
determining a first depth of field range according to a depth of field selection operation on the depth of field information map.
3. The method according to claim 1, wherein after obtaining the depth value corresponding to each pixel point in the first image and determining the second depth range of the first image, the method further comprises:
displaying a depth of field axis corresponding to the second depth of field range, wherein the depth of field axis comprises each depth of field value from the minimum depth of field value to the maximum depth of field value of the second depth of field range;
the receiving of the depth of field selection operation of the user comprises:
receiving a first depth of field value and a second depth of field value selected by a user on the depth of field axis;
the determining a first depth of field range according to the depth of field selection operation includes:
determining a first depth of field range according to the first depth of field value and the second depth of field value.
4. A shooting device, characterized in that the device comprises:
the field depth selection operation receiving module is used for receiving the field depth selection operation of a user;
a first depth-of-field range determination module, configured to determine a first depth-of-field range according to the depth-of-field selection operation;
the second depth-of-field range determining module is used for acquiring a depth-of-field value corresponding to each pixel point in the first image and determining a second depth-of-field range of the first image;
the target pixel determining module is used for determining a target pixel corresponding to a first image in the image sensor according to the first depth of field range;
the target image generation module is used for responding to a shooting instruction under the condition that the shooting instruction is received, exposing the target pixel and generating a target image; wherein pixels other than the target pixel are not exposed during exposure;
wherein the target pixel determination module comprises:
a first pixel point determining submodule, configured to determine a pixel point in an intersection of the second depth-of-field range and the first depth-of-field range of the first image as a first pixel point in the first image;
and the target pixel determining submodule is used for determining the pixel corresponding to the first pixel point in the image sensor as a target pixel.
5. The apparatus of claim 4, further comprising:
the depth-of-field information map determining module is used for marking each pixel point according to the depth-of-field value of each pixel point in the first image to obtain a depth-of-field information map of the first image, wherein the pixel points with the same depth-of-field value in the depth-of-field information map are marked with the same identification information;
the depth information map display module is used for displaying the depth information map;
the depth of field selection operation receiving module comprises:
the depth of field selection operation receiving submodule is used for receiving the depth of field selection operation of a user;
the first depth of field range determination module comprises:
and the first depth-of-field range generation submodule is used for determining a first depth-of-field range according to the depth-of-field selection operation of the depth-of-field information map.
6. The apparatus of claim 4, further comprising:
the depth-of-field axis display module is used for displaying a depth-of-field axis corresponding to the second depth-of-field range, wherein the depth-of-field axis comprises depth-of-field values from the minimum depth-of-field value to the maximum depth-of-field value of the second depth-of-field range;
the depth-of-field selection operation receiving module comprises:
the depth value selection receiving submodule is used for receiving a first depth value and a second depth value selected by a user in the depth axis;
the first depth of field range determination module comprises:
and the second depth range determining submodule is used for determining the first depth range according to the first depth value and the second depth value.
7. An electronic device comprising a processor, a memory, and a program stored on the memory and executable on the processor, the program, when executed by the processor, implementing the steps of the shooting method according to any one of claims 1 to 3.
8. A readable storage medium, characterized in that it stores thereon a program which, when executed by a processor, implements the steps of the shooting method according to any one of claims 1 to 3.
CN202110106773.8A 2021-01-26 2021-01-26 Shooting method and device and electronic equipment Active CN112887606B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110106773.8A CN112887606B (en) 2021-01-26 2021-01-26 Shooting method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110106773.8A CN112887606B (en) 2021-01-26 2021-01-26 Shooting method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112887606A (en) 2021-06-01
CN112887606B (en) 2023-04-07

Family

ID=76053379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110106773.8A Active CN112887606B (en) 2021-01-26 2021-01-26 Shooting method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112887606B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113709366B (en) * 2021-08-24 2022-11-22 联想(北京)有限公司 Information processing method and device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8040399B2 (en) * 2008-04-24 2011-10-18 Sony Corporation System and method for effectively optimizing zoom settings in a digital camera
CN104159038B (en) * 2014-08-26 2018-05-08 北京智谷技术服务有限公司 The image formation control method and device and imaging device of shallow Deep Canvas image
CN106973227A (en) * 2017-03-31 2017-07-21 努比亚技术有限公司 Intelligent photographing method and device based on dual camera
CN107483821B (en) * 2017-08-25 2020-08-14 维沃移动通信有限公司 Image processing method and mobile terminal
CN107888833A (en) * 2017-11-28 2018-04-06 维沃移动通信有限公司 A kind of image capturing method and mobile terminal
CN110503011B (en) * 2019-08-06 2022-03-22 Oppo广东移动通信有限公司 Data calibration method, electronic device and non-volatile computer-readable storage medium
CN111885307B (en) * 2020-07-30 2022-07-22 努比亚技术有限公司 Depth-of-field shooting method and device and computer readable storage medium

Also Published As

Publication number Publication date
CN112887606A (en) 2021-06-01

Similar Documents

Publication Publication Date Title
CN111698553B (en) Video processing method and device, electronic equipment and readable storage medium
CN112714255B (en) Shooting method and device, electronic equipment and readable storage medium
CN111601039B (en) Video shooting method and device and electronic equipment
CN112135046B (en) Video shooting method, video shooting device and electronic equipment
CN112637500B (en) Image processing method and device
CN112532881B (en) Image processing method and device and electronic equipment
CN111340731A (en) Image processing method and device, electronic equipment and storage medium
CN112087579B (en) Video shooting method and device and electronic equipment
CN113014798A (en) Image display method and device and electronic equipment
CN111953900B (en) Picture shooting method and device and electronic equipment
CN110677580B (en) Shooting method, shooting device, storage medium and terminal
CN112352417B (en) Focusing method of shooting device, system and storage medium
CN112887606B (en) Shooting method and device and electronic equipment
CN113866782A (en) Image processing method and device and electronic equipment
CN112565604A (en) Video recording method and device and electronic equipment
CN105120153A (en) Image photographing method and device
CN112653841B (en) Shooting method and device and electronic equipment
CN112887624B (en) Shooting method and device and electronic equipment
CN114422687B (en) Preview image switching method and device, electronic equipment and storage medium
CN111654620B (en) Shooting method and device
CN114025100A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN113473012A (en) Virtualization processing method and device and electronic equipment
CN111325148A (en) Method, device and equipment for processing remote sensing image and storage medium
CN112911148B (en) Image processing method and device and electronic equipment
CN112887620A (en) Video shooting method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant