WO2022183876A1 - Photographing method and apparatus, computer-readable storage medium, and electronic device - Google Patents

Photographing method and apparatus, computer-readable storage medium, and electronic device

Info

Publication number
WO2022183876A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
focus
data
data component
component
Prior art date
Application number
PCT/CN2022/074592
Other languages
English (en)
French (fr)
Inventor
朱文波
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2022183876A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules

Definitions

  • the present application relates to the field of image processing, and in particular, to a photographing method, an apparatus, an electronic device, and a computer-readable storage medium.
  • Embodiments of the present application provide a shooting method, an apparatus, an electronic device, and a computer-readable storage medium, which can improve the efficiency of focusing shooting.
  • An embodiment of the present application provides a photographing method, wherein the photographing method includes:
  • Focus shooting is performed according to the first focus parameter to acquire a second image.
  • the embodiment of the present application also provides a photographing device, wherein the photographing device includes:
  • an acquisition module for acquiring the first image
  • an extraction module for extracting a first data component from a plurality of data components of the first image
  • a calculation module configured to determine a first focus parameter according to the first data component
  • a focusing module configured to perform focusing shooting according to the first focusing parameter to acquire a second image.
  • Embodiments of the present application further provide a computer-readable storage medium storing a computer program that, when run on a computer, causes the computer to execute the steps in any of the shooting methods provided by the embodiments of the present application.
  • An embodiment of the present application further provides an electronic device, wherein the electronic device includes a processor and a memory, the memory stores a computer program, and the processor executes the steps in any of the shooting methods provided by the embodiments of the present application by calling the computer program stored in the memory.
  • An embodiment of the present application further provides an electronic device, wherein the electronic device includes a camera, a front-end image processing chip, and a main processor. The camera is used to acquire a first image; the front-end image processing chip is used to extract a first data component from a plurality of data components of the first image; and the main processor is configured to determine a first focus parameter according to the first data component, so that the camera performs focus shooting according to the first focus parameter to obtain a second image.
  • FIG. 1 is a first schematic flowchart of a photographing method provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a second type of photographing method provided by an embodiment of the present application.
  • FIG. 3 is a third schematic flowchart of the photographing method provided by the embodiment of the present application.
  • FIG. 4 is a fourth schematic flowchart of the photographing method provided by the embodiment of the present application.
  • FIG. 5 is a fifth schematic flowchart of the photographing method provided by the embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a first type of a photographing apparatus provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a second structure of a photographing apparatus provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a first structure of an electronic device provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a second structure of an electronic device provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a third structure of an electronic device provided by an embodiment of the present application.
  • An embodiment of the present application provides a photographing method, and the photographing method is applied to an electronic device.
  • the execution body of the photographing method may be the photographing device provided in the embodiment of the present application, or an electronic device integrated with the photographing device. The photographing device may be implemented in hardware or software, and the electronic device may be a smartphone, tablet computer, palmtop computer, notebook computer, desktop computer, or any other device equipped with a processor and having processing capability.
  • the embodiment of the present application provides a shooting method, including:
  • Focus shooting is performed according to the first focus parameter to acquire a second image.
  • the method further includes:
  • Focus shooting is performed according to the second focus parameter to acquire a third image.
  • the method further includes:
  • a first focus correction parameter is determined according to the third data component, and the first focus correction parameter is used to correct the first focus parameter.
  • the determining the first focus correction parameter according to the third data component includes:
  • the first focus correction parameter is determined from the thumbnail image of the third data component.
  • before the extracting of the third data component from the multiple data components of the first image, the method further includes:
  • the kind of the third data component to be extracted is determined according to the picture change rate.
  • the data components include color components, and before extracting the first data components from the plurality of data components of the first image, further comprising:
  • the color component with the largest number of corresponding pixel points in the first image is determined as the first data component.
  • before the extracting of the first data component from the multiple data components of the first image, the method further includes:
  • the extracting the first data component from the various data components of the first image includes:
  • the first data component of the focus area is extracted from the plurality of data components of the focus area.
  • the acquiring the first image includes:
  • a first image is acquired according to the multiple frames of original images.
  • before the extracting of the first data component from the multiple data components of the first image, the method further includes:
  • FIG. 1 is a schematic flowchart of a first type of photographing method provided by an embodiment of the present application.
  • the execution body of the photographing method may be the photographing apparatus provided by the embodiment of the present application, or an electronic device integrating the photographing apparatus.
  • the shooting method provided by the embodiment of the present application may include the following steps:
  • the camera is started to shoot, a video stream of the scene to be shot is acquired, and multiple frames of images of the scene, with the same or similar image content, are obtained from the video stream.
  • These images can be in RAW (raw image format) format, that is, unprocessed original images produced after the image sensor converts the captured light signal into a digital signal.
  • the first image acquired in step 110 may be a certain frame of image in the video stream.
  • the acquired first image may be automatically focused by the device and then photographed.
  • the process of focusing is the process of moving the lens to make the image in the focus area the clearest. After focusing succeeds, the focus area has the highest clarity, while areas outside the focus area are relatively blurry.
  • the automatic focusing methods include contrast focusing, phase focusing, laser focusing, and the like.
  • Contrast focusing is also called contrast-detection focusing.
  • in contrast focusing, when the camera is aimed at the subject, the motor in the lens module drives the lens to move from the bottom to the top of its travel. During this process, the pixel sensor performs comprehensive detection of the entire scene in the depth direction and continuously records contrast values. After the position of maximum contrast is found, the lens, having moved to the top, returns to that position to complete the final focus.
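  • The contrast-focusing sweep described above can be sketched as scoring frames captured at candidate lens positions with a simple neighbour-difference contrast metric and returning the position of maximum contrast. This is an illustrative model only; the frames and lens positions below are hypothetical, and a real camera would read frames from the sensor while the motor sweeps the lens:

```python
# Minimal sketch of contrast-detection autofocus (hypothetical data).

def contrast_score(pixels):
    # Sum of squared differences between horizontally adjacent pixels:
    # sharper images have larger neighbour differences.
    return sum((row[i + 1] - row[i]) ** 2
               for row in pixels
               for i in range(len(row) - 1))

def contrast_autofocus(frames_by_position):
    # frames_by_position maps lens position -> 2-D list of pixel values;
    # return the position whose frame has maximum contrast.
    return max(frames_by_position,
               key=lambda pos: contrast_score(frames_by_position[pos]))

sharp = [[0, 255, 0, 255], [255, 0, 255, 0]]
blurry = [[120, 130, 125, 128], [128, 125, 130, 120]]
best_position = contrast_autofocus({10: blurry, 20: sharp, 30: blurry})  # -> 20
```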
  • in phase focusing, the autofocus sensor is generally integrated directly with the pixel sensor. Pairs of left and right pixels are taken from the pixel sensor to detect objects in the scene, the accurate focus point is found from the correlation between the left and right signals, and the lens motor then drives the lens to the corresponding position in a single movement to complete the focus.
  • Laser focusing emits a low-power laser toward the subject through an infrared laser sensor next to the rear camera; after reflection, the laser is received by the sensor and the distance to the subject is calculated. The motor then directly pushes the lens to the corresponding position to complete the focusing. Like phase focusing, this is done in a single movement.
  • in the above focusing schemes, the focus parameter calculation is still based on the complete data of the entire image.
  • because the amount of data is large, the calculation time of the parameters increases. Generally, the calculation can only be performed once every certain number of frames, and focus parameters cannot be calculated for each frame.
  • this means the motor cannot be driven in time to adapt to the changes of each frame when performing the focusing operation. The large amount of calculation also increases the power consumption of the system.
  • the purpose of this application is to introduce an auto-focusing scheme to improve this situation.
  • the device can still use the above-mentioned auto-focusing scheme to achieve focusing to acquire the first image.
  • subsequent images can refer to the focusing situation of the previous image: the focusing parameters are calculated automatically through the algorithm, focusing is completed automatically, and the focusing algorithm is optimized while focusing continues.
  • manual focusing by the user is not required.
  • the input to the algorithm used to calculate the focus parameter may not be the complete first image, but part of the image data extracted from the first image without affecting the acquisition of the focus parameter; this extracted part of the image data is the first data component.
  • the first data component has a reduced amount of data, and can be used to quickly obtain focus parameters.
  • the focus parameter mainly reflects the distance of the focus area relative to the lens.
  • the calculation result is mainly based on the clarity of the image and does not depend on the complete data of the image. For example, a data component of a certain color can be extracted from the first image to participate in the calculation.
  • although the data component is not the complete first image, it can still reflect the clarity of the first image and does not affect the calculation result.
  • the focusing parameters can be quickly calculated to achieve efficient focusing.
  • Images usually have their own format, for example, RAW format, YUV format, etc.
  • the data components of the image may refer to the color components of the image.
  • RGB data represents a specific color by a combination of different luminance values of the three primary colors R (red), G (green), and B (blue).
  • an original image in RAW format can be divided into an R color component, a G color component, and a B color component, corresponding to the red, green, and blue data of the image respectively.
  • these color components are not complete image data, they can also reflect the clarity of the image. Selecting only some of the color components to calculate the focus parameters does not affect the calculation results.
  • one or more of the R color component, the G color component, and the B color component may be extracted as the first data component from the multiple color components of the first image in RAW format. Different color components are stored in different locations; when extracting the first data component from the multiple data components of the first image, the extraction can be performed according to the storage location, and the first data component is extracted from the storage location corresponding to it.
  • the number of pixels corresponding to each color component in the first image may be counted, and the color component with the largest number of corresponding pixels in the first image is determined as the first data component. For example, if the main tone of the first image is green and the number of green pixels in the first image is counted to be the largest, the G color component is determined as the first data component; the G color component is then extracted from the storage location corresponding to it, and the focus parameter is calculated from the G color component.
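  • The selection of the target color component above can be sketched as counting, per pixel, which channel is dominant and picking the channel that wins most often. The (R, G, B) tuple layout and the sample scene are assumptions made purely for illustration:

```python
# Hypothetical sketch of choosing the "target color component".

from collections import Counter

def dominant_component(pixels):
    names = ('R', 'G', 'B')
    # For each pixel, find the channel with the largest value, then count
    # which channel wins most often across the image.
    counts = Counter(names[max(range(3), key=lambda c: p[c])] for p in pixels)
    return counts.most_common(1)[0][0]

# A mostly-green scene selects the G color component as the first data component.
scene = [(10, 200, 30), (20, 180, 40), (200, 50, 10), (15, 220, 35)]
target = dominant_component(scene)  # -> 'G'
```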
  • the first data component may also be referred to as a target color component.
  • the first image can be any frame of image in the video stream. If green is the main tone in a certain frame of image, the G color component is extracted as the first data component; if the main tone in the next frame of image becomes red, then for that frame the R color component is extracted as the first data component.
  • the first data component is determined according to the real-time shooting situation; extracting the first data component according to the actual situation of each frame of image ensures the accuracy of the focusing parameter calculated from it.
  • an image in RAW format can be converted into an image in YUV format
  • an image in YUV format includes three data components—a Y component, a U component, and a V component.
  • the Y component represents the brightness, that is, the gray value
  • the U component and the V component represent the chroma, which are used to describe the color and saturation, and are used to specify the color of the pixel.
  • One or more of the Y component, the U component, and the V component may be extracted as the first data component from among various data components of the first image in the YUV format.
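  • As an illustrative sketch, the Y (luma) component can be derived from RGB data with the standard BT.601 weights; for frames already in YUV format, the Y plane would simply be sliced out of the buffer instead. The sample pixels are hypothetical:

```python
# Deriving a Y (brightness / gray value) component from RGB data.

def y_component(pixels):
    # pixels: list of (r, g, b) tuples -> list of integer gray values,
    # using the BT.601 luma weights 0.299, 0.587, 0.114.
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in pixels]

luma = y_component([(255, 255, 255), (0, 0, 0), (0, 255, 0)])  # white, black, green
```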
  • step 120 may be performed in an integrated circuit chip that adopts hardware acceleration technology; that is, the step of "extracting the first data component from multiple data components of the image" is implemented in hardware, using a hardware module in place of a software algorithm to make full use of the inherent speed of hardware and achieve fast extraction.
  • some images or image data components and focus parameters corresponding to these images or image data components are pre-collected as samples, and a learning algorithm is used for training to obtain the correspondence between image data and focus parameters.
  • the first data component is input into the pre-trained learning algorithm, and the focus parameter corresponding to the first data component, that is, the first focus parameter, is obtained according to the correspondence between image data and focus parameters in the learning algorithm.
  • the first focus parameter is the estimated focus parameter for capturing the next frame of image.
  • the first focus parameter can be used as the focus parameter when shooting the next frame of image, and the focus shooting of the next frame of image is continued, thereby obtaining the second image.
  • the motor drives the lens to move and change the focal length so that the focusing parameter of the camera is consistent with the first focusing parameter.
  • the method further includes:
  • Focus shooting is performed according to the second focus parameter to acquire a third image.
  • FIG. 2 is a schematic flowchart of a second type of photographing method provided by an embodiment of the present application.
  • the user turns on the camera on the electronic device.
  • the motor drives the lens to move to automatically focus and shoot, so as to obtain the first image.
  • the electronic device will perform data component extraction on the first image to obtain the first data component of the first image, and use the first data component to perform focus parameter calculation to obtain the first focus parameter.
  • After obtaining the first focus parameter, the camera performs focus shooting according to the first focus parameter, thereby obtaining a second image. The second image then goes through all the steps experienced by the first image in the above embodiment to obtain a second focus parameter, and the second focus parameter is used to perform focus shooting to obtain a third image, and so on. That is, the shooting method provided by the embodiment of the present application is not applied to only a certain frame of image, but is continuously applied during the shooting and acquisition of multiple frames of images, and the calculated focus parameters are continuously updated.
  • after images including the first image, the second image, the third image, and so on are captured, operations such as cropping, graffiti, watermarking, and text addition may be performed on them.
  • the focus parameters of each frame of image during actual shooting may not equal the focus parameters calculated from the previous frame of image. For example, the focus parameters may undergo a correction process, or the user may perform manual focus adjustment after the motor auto-focuses; either may cause the actual focus parameters when the image is captured to differ from the focus parameters calculated from the previous frame of image. Therefore, while continuously determining the latest focusing parameters through the learning algorithm, the present application can also obtain the actual focusing parameters of the image captured in each frame and input them into the learning algorithm, so as to update the learning algorithm, improve its accuracy, and adapt the focusing parameters it outputs to the user's real-time shooting needs and shooting habits.
  • in some embodiments, the focus area of the first image is determined first, and then the first data component of the focus area is extracted from the various data components of the focus area; the first data component of the focus area is used to calculate the first focus parameter.
  • the focus area is the key processing area of the focusing process and has higher definition than non-focus areas after focusing.
  • therefore, the first data component extracted from the focus area better guarantees the accuracy of the first focusing parameter.
  • in addition, the amount of data involved in the calculation is further reduced, so the first focus parameter can be obtained more quickly, further improving the focus shooting efficiency.
  • the focus parameters of the first image are acquired, the focal plane when shooting the first image is determined according to the focus parameters, the depth interval of the focal plane is acquired, and an area of the first image whose depth information lies in the depth interval of the focal plane is determined as the focus area.
  • the depth interval of the focal plane may be the depth interval in which a focused object is located when that object is photographed. Since the object is three-dimensional and the depth corresponding to the focal plane is a single value, the focal plane often cannot cover all the depth information of the object, so a depth interval is used instead.
  • when dividing the depth interval of the focal plane, depths in a range before and after the depth of the focal plane are included, so that the area where the focused object is located can be correctly segmented as the focal plane area.
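  • The segmentation above can be sketched as a per-pixel test of whether the depth falls inside the focal plane's interval [near, far]. The depth map and interval bounds below are hypothetical values for illustration:

```python
# Sketch of determining the focus area from a depth map (hypothetical data).

def focus_area_mask(depth_map, near, far):
    # depth_map: 2-D list of per-pixel depths -> 2-D boolean mask where
    # True marks pixels belonging to the focus area.
    return [[near <= d <= far for d in row] for row in depth_map]

depths = [[1.0, 2.1, 2.3],
          [2.2, 2.4, 5.0]]
mask = focus_area_mask(depths, 2.0, 2.5)
```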
  • the number of images participating in data component extraction may be determined according to the current power consumption of the electronic device.
  • the corresponding relationship between power consumption levels and numbers of images is preset, and the current power consumption value of the electronic device is obtained (for example, the power consumption value can reflect the power consumption of the electronic device, the number of processes running in the background, etc.). The power consumption level to which the current power consumption value belongs is then determined according to the preset relationship, and the number of images corresponding to that power consumption level is acquired.
  • for example, the power consumption levels range from level one (low power consumption) to level seven (high power consumption).
  • if the power consumption level to which the current power consumption value belongs is level one, the first 7 frames of images before the current frame image are obtained for data component extraction.
  • if the power consumption level is level five, only the first 3 frames of images before the current frame image are used for data component extraction.
  • if the power consumption level is level seven, only the first frame image before the current frame image is used for data component extraction.
  • the first data components extracted from each image are respectively input into the learning algorithm for calculation, and the change trend of the focus parameters is obtained from the calculation results of these first data components.
  • the focus parameter of the next frame of image is then predicted according to this change trend, so as to obtain the first focus parameter.
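  • The two ideas above can be sketched together: the power-consumption level picks how many history frames feed the calculation (level 1 → 7 frames … level 7 → 1 frame, following the examples given), and the next focus parameter is predicted from the trend. The linear extrapolation below is an assumption; the actual prediction is left to the learning algorithm:

```python
# Sketch: power-aware history window plus a simple trend extrapolation.

def frames_for_power_level(level):
    # Power level 1 (low consumption) ... 7 (high consumption) maps to
    # 7 ... 1 history frames, matching the examples in the text.
    return 8 - level

def predict_next_focus(history):
    # Continue the most recent change in the focus parameter (assumed rule).
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])

window = frames_for_power_level(5)             # -> 3 frames of history
next_focus = predict_next_focus([10, 12, 14])  # -> 16
```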
  • FIG. 3 is a schematic flowchart of a third type of photographing method provided by an embodiment of the present application.
  • the photographing method can be applied to the electronic device provided by the embodiment of the present application, and the photographing method provided by the embodiment of the present application may include the following steps:
  • the camera is started to shoot, a video stream of the scene to be shot is acquired, and multiple frames of images of the scene, with the same or similar image content, are obtained from the video stream.
  • These images can be in RAW (raw image format) format, that is, unprocessed original images produced after the image sensor converts the captured light signal into a digital signal.
  • the first image acquired in step 110 may be a certain frame of image in the video stream.
  • the acquired first image may be automatically focused by the device and then photographed.
  • the process of focusing is the process of moving the lens to make the image in the focus area the clearest. After focusing succeeds, the focus area has the highest clarity, while areas outside the focus area are relatively blurry.
  • the automatic focusing methods include contrast focusing, phase focusing, laser focusing, and the like.
  • RGB data represents a specific color by a combination of different brightness values of the three primary colors R (red), G (green), and B (blue).
  • an original image in RAW format can be divided into an R color component, a G color component, and a B color component, corresponding to the red, green, and blue data of the image respectively; one or more of the R, G, and B color components may be extracted as the first data component.
  • the number of pixels corresponding to each color component in the first image may be counted, and the color component with the largest number of corresponding pixels is determined as the first data component. For example, if the main tone of the first image is green and the number of green pixels in the first image is counted to be the largest, the G color component is determined as the first data component and is extracted from the storage location corresponding to it. In this case, the first data component may also be referred to as a target color component.
  • the first image can be any frame of image in the video stream. If green is the main tone in a certain frame of image, the G color component is extracted as the first data component; if the main tone in the next frame of image becomes red, then for that frame the R color component is extracted as the first data component.
  • the first data component is determined according to the real-time shooting situation, and the first data component extracted according to the actual situation of each frame of image can ensure the accuracy of the calculated focus parameter when the subsequent first data component participates in the calculation of the focus parameter.
  • after being determined, the first data component can be extracted from the various data components of the first image. Since different color components are stored in different locations, when extracting the first data component from multiple data components of the first image, the extraction can be performed according to the storage location, and the first data component is extracted from the storage location corresponding to it.
  • step 204 may be performed in an integrated circuit chip using hardware acceleration technology; that is, the step of "extracting the first data component from multiple data components of the first image" is implemented in hardware, using a hardware module in place of a software algorithm to make full use of the inherent speed of hardware and achieve fast extraction.
  • some images or image data components and focus parameters corresponding to these images or image data components are pre-collected as samples, and a learning algorithm is used for training to obtain the correspondence between image data and focus parameters.
  • the first data component is input into the pre-trained learning algorithm, and the focus parameter corresponding to the first data component, that is, the first focus parameter, is obtained according to the correspondence between image data and focus parameters in the learning algorithm.
  • the historical frame images of the first image refer to images captured before the first image.
  • the third data component can also be extracted from the various data components of the image.
  • the third data component may be one or more data components other than the first data component, but is not necessarily all data components except the first data component.
  • the number of third data components to be extracted can be determined by the methods of steps 206 and 207: first obtain the two adjacent historical frame images of the first image, determine the picture change rate of the two adjacent historical frame images, and then determine the kinds of third data components to be extracted according to the picture change rate.
  • the two adjacent historical frame images may refer to the last two frames of images captured before the first image, that is, the two most recently captured frames in this shooting session.
  • being the most recent of all the images, they allow the content of the image to be shot to be predicted most accurately.
  • if the picture change rate of the two adjacent historical frame images is relatively large, it can be considered that the picture content of the to-be-shot image may change greatly compared to the two adjacent historical frame images. For example, the scene to be photographed may have changed, or an object in the scene may be moving; these situations can cause a large change in the picture content.
  • the larger the picture change rate of the two adjacent historical frame images, the more third data components are extracted to correct the first focus parameter; if the picture change rate is small, a relatively small number of third data components may be extracted for the correction. For example, when the picture change rate is less than a change rate threshold, only one third data component is extracted from each frame of image.
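  • As a hypothetical sketch, the picture change rate of two adjacent frames could be measured as a normalized mean absolute pixel difference, with a threshold rule for how many third data components to extract. The metric and threshold value are assumptions, not taken from this description:

```python
# Hypothetical picture-change-rate metric and component-count rule.

def picture_change_rate(frame_a, frame_b):
    diffs = [abs(a - b)
             for row_a, row_b in zip(frame_a, frame_b)
             for a, b in zip(row_a, row_b)]
    # 0.0 = identical frames, 1.0 = maximal change for 8-bit pixels.
    return sum(diffs) / (255 * len(diffs))

def num_third_components(rate, threshold=0.1):
    # Small change -> one component is enough; large change -> extract more.
    return 1 if rate < threshold else 2

static_rate = picture_change_rate([[0, 0], [0, 0]], [[0, 0], [0, 0]])  # -> 0.0
```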
  • which data component or components to extract as the third data component can be determined by the following method: when the data components are color components, the number of pixels corresponding to each color component in the first image is counted, the color component with the largest number of corresponding pixels is determined as the first data component, and from the remaining data components, a certain number of third data components are selected in descending order of the number of corresponding pixels. For example, when it is determined to extract two third data components, the color component with the largest number of corresponding pixels in the first image is determined as the first data component, and the color components with the second and third largest numbers of corresponding pixels are determined as the third data components.
  • the third data component is extracted from the determined storage location of the third data component according to the different storage locations of the data components.
  • scaling processing may be performed on the third data component; the scaling method may be, for example, a Resize (scaling) function.
  • the Resize function can reduce the amount of data and change the image size through an interpolation algorithm.
  • the interpolation algorithm may include, for example, a nearest-neighbor interpolation algorithm, a bilinear interpolation algorithm, a bicubic interpolation algorithm, an interpolation algorithm based on pixel area relationships, a Lanczos interpolation algorithm, and the like.
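  • A minimal nearest-neighbour Resize can be sketched as follows to show how a third data component shrinks into a thumbnail; production code would normally call an optimized library routine instead, and the sample component values are hypothetical:

```python
# Nearest-neighbour downscaling of a 2-D data component (illustrative).

def resize_nearest(img, new_w, new_h):
    h, w = len(img), len(img[0])
    # Each output pixel copies the nearest source pixel.
    return [[img[y * h // new_h][x * w // new_w] for x in range(new_w)]
            for y in range(new_h)]

component = [[1, 2, 3, 4],
             [5, 6, 7, 8],
             [9, 10, 11, 12],
             [13, 14, 15, 16]]
thumbnail = resize_nearest(component, 2, 2)  # -> [[1, 3], [9, 11]]
```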
  • the rules for extracting the first data component described in the embodiments of the present application can ensure that the focus parameter calculated by the first data component is approximately accurate.
  • The focus parameter calculated from the first data component can be corrected to further ensure the accuracy of the focus parameter. Since the third data component only participates in the correction, its accuracy requirement is not as high as that of the first data component. Therefore, the third data component is scaled to reduce the size of the image while retaining the image content, thereby reducing the data volume of the third data component, improving the correction efficiency, obtaining the corrected focus parameter more quickly, and further improving the focus shooting efficiency.
  • the thumbnail image obtained after the scaling of the third data component is input into the pre-trained learning algorithm for calculation, and the first focus correction parameter is obtained according to the corresponding relationship between the image data and the focus parameter.
  • the first focus correction parameter plays a correction role and is used to correct the first focus parameter to improve the accuracy of the first focus parameter.
  • a corresponding number of first focus correction parameters are obtained by determining the number of third data components, and the first focus parameter is corrected by using the corresponding number of first focus correction parameters to obtain a more accurate corrected first focus parameter.
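The patent does not fix a formula for combining the correction parameters with the first focus parameter, so the weighted average below is purely an assumed example of how several correction parameters might be applied.

```python
def correct_focus(first_param, correction_params, weight=0.5):
    """Blend the first focus parameter with the mean of the correction
    parameters; `weight` controls how strongly the corrections pull the
    result (an assumed scheme, not specified by the patent)."""
    if not correction_params:
        return first_param
    mean_corr = sum(correction_params) / len(correction_params)
    return (1 - weight) * first_param + weight * mean_corr

corrected = correct_focus(10.0, [12.0, 14.0], weight=0.5)
# corrected == 11.5  (halfway between 10 and the mean correction 13)
```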
  • the corrected first focus parameter can be used as the focus parameter for shooting the next frame of image, and the focus shooting of the next frame of image is continued, thereby obtaining the second image.
  • According to the corrected first focus parameter, the motor drives the lens to move and changes the focal length, so that the focus parameter of the camera matches the corrected first focus parameter, and the camera shoots with the corrected first focus parameter used for focusing, so as to obtain the second image.
  • the method further includes:
  • Focus shooting is performed according to the corrected second focus parameter to acquire a third image.
  • FIG. 4 is a schematic flowchart of a fourth type of photographing method provided by an embodiment of the present application.
  • the user turns on the camera on the electronic device.
  • the motor drives the lens to move to automatically focus and shoot, so as to obtain the first image.
  • the electronic device will perform data component extraction on it to obtain the first data component and the third data component of the first image.
  • focus parameter calculation is performed by using the first data component to obtain the first focus parameter.
  • focus parameter calculation is performed to obtain the first focus correction parameter.
  • The first focus correction parameter and the first focus parameter jointly determine the final focus parameter; that is, the first focus correction parameter is used to correct the first focus parameter to obtain the corrected first focus parameter.
  • After obtaining the corrected first focus parameter, the camera performs focus shooting according to the corrected first focus parameter, thereby obtaining a second image.
  • The second image repeats all the steps that the first image went through in the above embodiment, so that the second focus parameter and the second focus correction parameter are obtained; the second focus correction parameter is used to correct the second focus parameter, focus shooting is performed according to the corrected second focus parameter to obtain a third image, and so on.
  • the shooting method provided by the embodiment of the present application is not only applied to a certain frame of images, but is continuously applied during the shooting and acquisition of multiple frames of images, and the calculated focus parameters are also continuously updated.
  • For the specific acquisition methods of the second focus parameter and the second focus correction parameter, reference may be made to the foregoing related descriptions of the first focus parameter and the first focus correction parameter, which will not be repeated here.
  • After images (including the first image, the second image, the third image, and so on) are acquired by shooting with the calculated focus parameters, the obtained images can be output for back-end image processing.
  • operations such as cropping, graffiti, watermarking, and text addition are performed on images.
  • The focus parameter of each frame image at the time of actual shooting may not equal the focus parameter calculated from the previous frame image; for example, the focus parameter may undergo a correction process, or the user may perform manual focus adjustment after the motor auto-focuses. These situations may cause the actual focus parameter at the time the image is shot to differ from the focus parameter calculated from the previous frame image. Therefore, while continuously determining the latest focus parameters through the learning algorithm, the present application can also acquire the actual focus parameters of the images shot in each frame and input the actual focus parameters into the learning algorithm, so as to update the learning algorithm, improve its accuracy, and enable the focus parameters output by the learning algorithm to adapt to the user's real-time shooting needs and shooting habits.
  • FIG. 5 is a schematic flowchart of a fifth type of photographing method provided by an embodiment of the present application.
  • the image contains three data components, namely Y component, U component and V component.
  • the number of the Y component is twice that of the U component, and it is also twice that of the V component.
  • The process of extracting the data components is performed by an integrated circuit chip using hardware acceleration technology. Before the chip performs component extraction, it is determined that the current first data component is the Y component; the chip then extracts the Y component as the first data component, and the first focus parameter is obtained after focus parameter calculation. At the same time, the chip extracts the U component and the V component as the third data components, and after scaling the extracted U component and V component, focus parameter calculation is performed to obtain the first focus correction parameter.
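For a planar buffer matching the ratio stated above (twice as many Y samples as U samples and as V samples, as in planar YUV 4:2:2), extracting a component plane reduces to slicing at known offsets. The planar layout and 2x2 sample values below are assumptions for illustration.

```python
def split_yuv422_planar(buf, width, height):
    """Split a planar YUV 4:2:2 sample buffer into Y, U, V lists.
    Y holds width*height samples; U and V each hold half as many."""
    y_size = width * height
    c_size = y_size // 2            # U and V are each half the Y count
    y = buf[:y_size]
    u = buf[y_size:y_size + c_size]
    v = buf[y_size + c_size:y_size + 2 * c_size]
    return y, u, v

# 2x2 image: 4 Y samples, then 2 U, then 2 V
buf = [16, 17, 18, 19, 128, 129, 240, 241]
y, u, v = split_yuv422_planar(buf, 2, 2)
# y == [16, 17, 18, 19]; u == [128, 129]; v == [240, 241]
```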
  • The first focus correction parameter is used to correct the first focus parameter to obtain the corrected first focus parameter, and focus shooting is performed using the corrected first focus parameter.
  • The obtained first focus parameter is compared with the first focus correction parameter, and the comparison result is fed back to the chip, affecting the chip's component extraction process.
  • A second image is obtained by shooting; the second image yields a second focus parameter according to the second data component and a second focus correction parameter according to the fourth data component, and the second focus parameter and the second focus correction parameter also repeat the steps that the first focus parameter and the first focus correction parameter went through above.
  • The first data component may be a first type of data component, and the method further includes: calculating a difference between the first focus correction parameter and the first focus parameter. If the difference is less than or equal to a preset threshold, a second data component of the first type is still extracted from the plurality of data components of the second image; if the difference is greater than the preset threshold, a second data component of a second type is extracted from the plurality of data components of the next frame image, the second type being different from the first type.
  • The difference between the first focus parameter and the first focus correction parameter is calculated. If the difference is less than or equal to the preset threshold, it is determined that the difference between the two is small and that the Y component of the image is suitable for calculating the focus parameter; therefore, when the chip extracts components of the second frame image, the Y component of the second image is still selected as the second data component for calculating the second focus parameter. If the difference is greater than the preset threshold, when the chip extracts components of the second frame image, the type of the data component is changed, e.g., the U component is reselected as the second data component.
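The switching rule above can be sketched as a small helper that decides which component type to extract for the next frame. The component names and the cycling order are assumed choices, not fixed by the patent.

```python
def next_component_type(current, focus_param, correction_param,
                        threshold, order=("Y", "U", "V")):
    """Keep the current component type when the focus parameter and its
    correction agree within `threshold`; otherwise move to the next
    type in `order` (the cycling order is an assumed choice)."""
    if abs(focus_param - correction_param) <= threshold:
        return current                      # parameters agree: keep type
    i = order.index(current)
    return order[(i + 1) % len(order)]      # parameters disagree: switch

keep = next_component_type("Y", 10.0, 10.2, threshold=0.5)    # "Y"
switch = next_component_type("Y", 10.0, 12.0, threshold=0.5)  # "U"
```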
  • The first image is acquired first; the first data component is then extracted from the plurality of data components of the first image; the first focus parameter is determined according to the first data component; and focus shooting is performed according to the first focus parameter to obtain the second image.
  • By extracting data components, the input is reduced from the complete image data to data components, which reduces the amount of data involved in the calculation, so that the first focus parameter can be calculated quickly and used for the next focus shooting.
  • the efficiency of in-focus shooting is improved.
  • the embodiment of the present application also provides a photographing device.
  • FIG. 6 is a first structural schematic diagram of a photographing apparatus provided by an embodiment of the present application.
  • the photographing apparatus 300 can be applied to electronic equipment, and the photographing apparatus 300 includes an acquisition module 301, a first extraction module 302, a first determination module 303 and a first photographing module 304, as follows:
  • an acquisition module 301 configured to acquire a first image
  • a first extraction module 302 configured to extract a first data component from a plurality of data components of the first image
  • a first determining module 303 configured to determine a first focusing parameter according to the first data component
  • the first shooting module 304 is configured to perform focus shooting according to the first focus parameter to obtain a second image.
  • FIG. 7 is a schematic diagram of a second structure of the photographing device 300 provided by the embodiment of the present application.
  • the photographing apparatus 300 further includes a second extracting module 305, a second determining module 306, and a second photographing module 307:
  • the second extraction module 305 is used for extracting the second data component from the various data components of the second image
  • a second determining module 306, configured to determine a second focusing parameter according to the second data component
  • the second shooting module 307 is configured to perform focus shooting according to the second focus parameter to obtain a third image.
  • FIG. 7 is a schematic diagram of a second structure of the photographing device 300 provided by the embodiment of the present application.
  • the photographing device 300 further includes a third extraction module 308, a third determination module 309 and a correction module 310:
  • the third extraction module 308 is configured to extract a third data component from the various data components of the first image, where the third data component is one or more data components other than the first data component in the first image;
  • a third determination module 309, configured to determine a first focus correction parameter according to the third data component, where the first focus correction parameter is used to correct the first focus parameter
  • the correction module 310 is configured to use the first focus correction parameter to correct the first focus parameter to obtain the corrected first focus parameter.
  • When determining the first focus correction parameter according to the third data component, the third determining module 309 may be configured to: acquire a thumbnail of the third data component; and determine the first focus correction parameter according to the thumbnail of the third data component.
  • The photographing apparatus 300 further includes a fourth determining module 311. Before the third data component is extracted from the plurality of data components of the first image, the fourth determining module 311 may be configured to: acquire two adjacent historical frame images of the first image, and determine a picture change rate of the two adjacent historical frame images; and determine, according to the picture change rate, the kind of third data component to be extracted.
  • The data components include color components. The photographing apparatus 300 further includes a fifth determining module 312. Before the first data component is extracted from the plurality of data components of the first image, the fifth determining module 312 may be configured to: count the number of pixels corresponding to each color component in the first image; and determine the color component with the largest number of corresponding pixels in the first image as the first data component.
  • The photographing apparatus 300 further includes a sixth determining module 313. Before the first data component is extracted from the plurality of data components of the first image, the sixth determining module 313 may be configured to: determine a focus area of the first image. When extracting the first data component from the plurality of data components of the first image, the first extraction module 302 may be configured to: extract the first data component of the focus area from a plurality of data components of the focus area.
  • The acquisition module 301 first acquires the first image; the first extraction module 302 then extracts the first data component from the plurality of data components of the first image; the first determination module 303 determines the first focus parameter according to the first data component; and the first shooting module 304 then performs focus shooting according to the first focus parameter to obtain the second image.
  • By extracting data components, the input is reduced from the complete image data to data components, which reduces the amount of data involved in the calculation, so that the first focus parameter can be calculated quickly and used for the next focus shooting.
  • the efficiency of in-focus shooting is improved.
  • the embodiments of the present application also provide an electronic device.
  • The electronic device may be a smartphone, tablet computer, gaming device, AR (Augmented Reality) device, automobile, vehicle-surroundings obstacle detection apparatus, audio playback device, video playback device, notebook, desktop computing device, or a wearable device such as a watch, glasses, helmet, electronic bracelet, electronic necklace, or electronic clothing.
  • FIG. 8 is a schematic diagram of a first structure of an electronic device 400 according to an embodiment of the present application.
  • the electronic device 400 includes a processor 401 and a memory 402 .
  • a computer program is stored in the memory, and the processor invokes the computer program stored in the memory to execute the steps in any of the shooting methods provided in the embodiments of the present application.
  • the processor 401 is electrically connected to the memory 402 .
  • The processor 401 is the control center of the electronic device 400. It uses various interfaces and lines to connect the various parts of the entire electronic device, and executes the various functions of the electronic device and processes data by running or invoking the computer programs stored in the memory 402 and invoking the data stored in the memory 402, so as to monitor the electronic device as a whole.
  • The processor 401 in the electronic device 400 loads the instructions corresponding to the processes of one or more computer programs into the memory 402 according to the steps in the above shooting method, and runs the computer programs stored in the memory 402 so as to implement the following steps: acquire a first image; extract a first data component from a plurality of data components of the first image; determine a first focus parameter according to the first data component; and perform focus shooting according to the first focus parameter to acquire a second image.
  • FIG. 9 is a schematic diagram of a second structure of an electronic device 400 according to an embodiment of the present application.
  • the electronic device 400 further includes: a display screen 403 , a control circuit 404 , an input unit 405 , a sensor 406 and a power supply 407 .
  • the processor 401 is electrically connected to the display screen 403 , the control circuit 404 , the input unit 405 , the sensor 406 and the power supply 407 , respectively.
  • the display screen 403 may be used to display information input by or provided to the user and various graphical user interfaces of the electronic device, which may be composed of images, text, icons, videos, and any combination thereof.
  • the control circuit 404 is electrically connected to the display screen 403 for controlling the display screen 403 to display information.
  • the input unit 405 may be used to receive input numbers, character information or user characteristic information (eg fingerprints), and generate keyboard, mouse, joystick, optical or trackball signal input related to user settings and function control.
  • the input unit 405 may include a touch sensing module.
  • the sensor 406 is used to collect the information of the electronic device itself or the user's information or the external environment information.
  • the sensor 406 may include a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a hall sensor, a position sensor, a gyroscope, an inertial sensor, an attitude sensor, a barometer, a heart rate sensor, and the like.
  • Power supply 407 is used to power various components of electronic device 400 .
  • the power supply 407 may be logically connected to the processor 401 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system.
  • the electronic device 400 may further include a camera, a Bluetooth module, and the like, which will not be repeated here.
  • The processor 401 in the electronic device 400 loads the instructions corresponding to the processes of one or more computer programs into the memory 402 according to the steps in the above shooting method, and runs the computer programs stored in the memory 402 so as to implement the following steps: acquire a first image; extract a first data component from a plurality of data components of the first image; determine a first focus parameter according to the first data component; and perform focus shooting according to the first focus parameter to acquire a second image.
  • In an embodiment, the processor 401 further performs the following steps: extract a second data component from a plurality of data components of the second image; determine a second focus parameter according to the second data component; and perform focus shooting according to the second focus parameter to acquire a third image.
  • After determining the first focus parameter according to the first data component, the processor 401 further performs the following steps: extract a third data component from the plurality of data components of the first image, the third data component being one or more data components of the first image other than the first data component; and determine a first focus correction parameter according to the third data component, the first focus correction parameter being used to correct the first focus parameter.
  • When determining the first focus correction parameter according to the third data component, the processor 401 performs the following steps: acquire a thumbnail of the third data component; and determine the first focus correction parameter according to the thumbnail of the third data component.
  • Before extracting the third data component from the plurality of data components of the first image, the processor 401 further performs the following steps: acquire two adjacent historical frame images of the first image, and determine a picture change rate of the two adjacent historical frame images; and determine, according to the picture change rate, the kind of third data component to be extracted.
  • The data components include color components. Before extracting the first data component from the plurality of data components of the first image, the processor 401 further performs the following steps: count the number of pixels corresponding to each color component in the first image; and determine the color component with the largest number of corresponding pixels in the first image as the first data component.
  • Before extracting the first data component from the plurality of data components of the first image, the processor 401 further performs the following step: determine a focus area of the first image. Extracting the first data component from the plurality of data components of the first image then includes: extracting the first data component of the focus area from a plurality of data components of the focus area.
  • An embodiment of the present application provides an electronic device, and the processor in the electronic device performs the following steps: first acquiring a first image; then extracting a first data component from a plurality of data components of the image; determining a first focus parameter according to the first data component; and then performing focus shooting according to the first focus parameter to acquire a second image.
  • By extracting data components, the input is reduced from the complete image data to data components, which reduces the amount of data involved in the calculation, so that the first focus parameter can be calculated quickly and used for the next focus shooting.
  • the efficiency of in-focus shooting is improved.
  • An embodiment of the present application further provides an electronic device. The electronic device includes at least a camera 408 and a processor 402, and the processor 402 includes a front-end image processing chip 4021 and a main processor 4022, where:
  • the front-end image processing chip 4021 is used to extract the first data component from various data components of the image
  • the main processor 4022 is configured to determine the first focus parameter according to the extracted first data component, so that the camera is focused according to the first focus parameter, and returns to the step of capturing and obtaining an image.
  • The front-end image processing chip 4021 is an integrated circuit chip, which can be used in electronic devices such as smartphones, tablet computers, gaming devices, AR (Augmented Reality) devices, automobiles, vehicle-surroundings obstacle detection apparatuses, audio playback devices, video playback devices, notebooks, desktop computing devices, and wearable devices such as watches, glasses, helmets, electronic bracelets, and electronic necklaces.
  • The front-end image processing chip 4021 and the main processor 4022 provided in this embodiment of the present application are independent of each other. Hardware acceleration technology is adopted to allocate computation-intensive work to dedicated hardware for processing, which reduces the workload of the main processor 4022; the main processor 4022 is then not required to process every pixel of the image layer by layer in software.
  • The front-end image processing chip 4021 is specifically responsible for extracting the data components of images. After the camera 408 captures an image, the front-end image processing chip 4021, which adopts hardware acceleration technology, performs data component extraction on the image, extracts the first data component from the plurality of data components of the image, and transmits the first data component to the main processor 4022. The main processor 4022 determines the first focus parameter according to the first data component extracted by the front-end image processing chip 4021, so that the camera 408 focuses according to the first focus parameter, and the process returns to the step of capturing an image.
  • After the front-end image processing chip 4021 extracts the first data component from the plurality of data components of the image, it continues to extract the second data component from the plurality of data components of the image and transmits the extracted second data component to the main processor 4022, which performs the subsequent calculations. The main processor 4022 determines the focus correction parameter according to the second data component, and uses the focus correction parameter to correct the first focus parameter to obtain the corrected first focus parameter, so that the camera 408 focuses according to the corrected first focus parameter, and the process returns to the step of capturing an image.
  • The electronic device includes a camera, a front-end image processing chip, and a main processor, where the camera 408 is configured to acquire the first image, and the front-end image processing chip 4021 is configured to extract the first data component from the plurality of data components of the image.
  • the main processor 4022 is configured to determine the first focus parameter according to the first data component extracted by the front-end image processing chip, so that the camera performs focus shooting according to the first focus parameter to obtain the second image.
  • By extracting data components, the input is reduced from the complete image data to data components, which reduces the amount of data involved in the calculation, so that the first focus parameter can be calculated quickly and used for the next focus shooting.
  • using hardware acceleration technology to extract data components can improve the efficiency of data component extraction and further improve the efficiency of image capturing.
  • Embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored in the storage medium, and when the computer program runs on the computer, the computer executes the shooting method of any of the foregoing embodiments.
  • When the computer program is run on a computer, the computer performs the following steps: acquire a first image; extract a first data component from a plurality of data components of the first image; determine a first focus parameter according to the first data component; and perform focus shooting according to the first focus parameter to acquire a second image.
  • the storage medium may include, but is not limited to, a read only memory (ROM, Read Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disk, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Automatic Focus Adjustment (AREA)

Abstract

A shooting method and apparatus, an electronic device, and a computer-readable storage medium. The shooting method includes: acquiring a first image; extracting a first data component from a plurality of data components of the first image; determining a first focus parameter according to the first data component; and performing focus shooting according to the first focus parameter to acquire a second image.

Description

Shooting method and apparatus, computer-readable storage medium, and electronic device
This application claims priority to Chinese Patent Application No. 202110236906.3, filed with the Chinese Patent Office on March 3, 2021 and entitled "Shooting method and apparatus, computer-readable storage medium, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of image processing, and in particular to a shooting method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the continuous development of smart terminal technology, the use of electronic devices is becoming more and more widespread. As the processing capability of mobile terminals grows and camera technology develops, users place increasingly high demands on the quality of captured images.
Summary
Embodiments of the present application provide a shooting method and apparatus, an electronic device, and a computer-readable storage medium, which can improve the efficiency of focus shooting.
An embodiment of the present application provides a shooting method, including:
acquiring a first image;
extracting a first data component from a plurality of data components of the first image;
determining a first focus parameter according to the first data component;
performing focus shooting according to the first focus parameter to acquire a second image.
An embodiment of the present application further provides a shooting apparatus, including:
an acquisition module, configured to acquire a first image;
an extraction module, configured to extract a first data component from a plurality of data components of the image;
a calculation module, configured to determine a first focus parameter according to the first data component;
a focusing module, configured to perform focus shooting according to the first focus parameter to acquire a second image.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when run on a computer, causes the computer to execute the steps in any of the shooting methods provided in the embodiments of the present application.
An embodiment of the present application further provides an electronic device, including a processor and a memory, where the memory stores a computer program, and the processor invokes the computer program stored in the memory to execute the steps in any of the shooting methods provided in the embodiments of the present application.
An embodiment of the present application further provides an electronic device, including a camera, a front-end image processing chip, and a main processor, where the camera is configured to acquire a first image, the front-end chip is configured to extract a first data component from a plurality of data components of the first image, and the main processor is configured to determine a first focus parameter according to the first data component, so that the camera performs focus shooting according to the first focus parameter to acquire a second image.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a first schematic flowchart of a shooting method provided by an embodiment of the present application.
FIG. 2 is a second schematic flowchart of a shooting method provided by an embodiment of the present application.
FIG. 3 is a third schematic flowchart of a shooting method provided by an embodiment of the present application.
FIG. 4 is a fourth schematic flowchart of a shooting method provided by an embodiment of the present application.
FIG. 5 is a fifth schematic flowchart of a shooting method provided by an embodiment of the present application.
FIG. 6 is a first structural schematic diagram of a shooting apparatus provided by an embodiment of the present application.
FIG. 7 is a second structural schematic diagram of a shooting apparatus provided by an embodiment of the present application.
FIG. 8 is a first structural schematic diagram of an electronic device provided by an embodiment of the present application.
FIG. 9 is a second structural schematic diagram of an electronic device provided by an embodiment of the present application.
FIG. 10 is a third structural schematic diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
The terms "first", "second", "third", etc. (if present) in the specification, claims, and the above drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that objects so described are interchangeable where appropriate. Furthermore, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process or method comprising a series of steps, or an apparatus, electronic device, or system comprising a series of modules or units, is not necessarily limited to the steps or modules and units explicitly listed, and may include steps, modules, or units that are not explicitly listed, or other steps, modules, or units inherent to such a process, method, apparatus, electronic device, or system.
An embodiment of the present application provides a shooting method applied to an electronic device. The execution subject of the shooting method may be the shooting apparatus provided in the embodiments of the present application, or an electronic device integrating the shooting apparatus. The shooting apparatus may be implemented in hardware or software, and the electronic device may be a device equipped with a processor and having processing capability, such as a smartphone, tablet computer, palmtop computer, notebook computer, or desktop computer.
An embodiment of the present application provides a shooting method, including:
acquiring a first image;
extracting a first data component from a plurality of data components of the first image;
determining a first focus parameter according to the first data component;
performing focus shooting according to the first focus parameter to acquire a second image.
In an embodiment, after performing focusing according to the first focus parameter to acquire the second image, the method further includes:
extracting a second data component from a plurality of data components of the second image;
determining a second focus parameter according to the second data component;
performing focus shooting according to the second focus parameter to acquire a third image.
In an embodiment, after determining the first focus parameter according to the first data component, the method further includes:
extracting a third data component from the plurality of data components of the first image, the third data component being one or more data components of the first image other than the first data component;
determining a first focus correction parameter according to the third data component, the first focus correction parameter being used to correct the first focus parameter.
In an embodiment, determining the first focus correction parameter according to the third data component includes:
acquiring a thumbnail of the third data component;
determining the first focus correction parameter according to the thumbnail of the third data component.
In an embodiment, before extracting the third data component from the plurality of data components of the first image, the method further includes:
acquiring two adjacent historical frame images of the first image, and determining a picture change rate of the two adjacent historical frame images;
determining, according to the picture change rate, the kind of third data component to be extracted.
In an embodiment, the data components include color components, and before extracting the first data component from the plurality of data components of the first image, the method further includes:
counting the number of pixels corresponding to each color component in the first image;
determining the color component with the largest number of corresponding pixels in the first image as the first data component.
In an embodiment, before extracting the first data component from the plurality of data components of the first image, the method further includes:
determining a focus area of the first image;
extracting the first data component from the plurality of data components of the first image includes:
extracting the first data component of the focus area from a plurality of data components of the focus area.
In an embodiment, acquiring the first image includes:
acquiring a video stream of a scene to be shot to obtain multiple original frame images of the scene to be shot;
acquiring the first image according to the multiple original frame images.
In an embodiment, the first image is in RAW format, and before extracting the first data component from the plurality of data components of the first image, the method further includes:
processing the first image to obtain RGB data of the first image, where the RGB data includes the plurality of data components of the first image, and the plurality of data components of the first image include an R color component, a G color component, and a B color component.
Please refer to FIG. 1, which is a first schematic flowchart of a shooting method provided by an embodiment of the present application. The execution subject of the shooting method may be the shooting apparatus provided in the embodiments of the present application, or an electronic device integrating the shooting apparatus. The shooting method provided by the embodiment of the present application may include the following steps:
110: Acquire a first image.
In an embodiment, the camera is started for shooting to acquire a video stream of the scene to be shot, from which multiple frame images of the scene, with identical or similar image content, are obtained. These images may be in RAW (RAW Image Format) format, i.e., unprocessed original images obtained after the image sensor converts the captured light signals into digital signals. The first image acquired in step 110 may be a certain frame image in the video stream.
In an embodiment, the acquired first image may be captured after the device performs auto-focusing. Focusing is the process of moving the lens so that the image in the focus area becomes the sharpest; after successful focusing, the focal point has the highest sharpness, while areas outside the focal point appear relatively blurred. Auto-focus methods include contrast-detection focusing, phase-detection focusing, laser focusing, and the like.
Contrast-detection focusing is also called contrast focusing. With contrast-detection focusing, when the subject is framed, the motor in the lens module drives the lens to move from the bottom to the top; during this process, the pixel sensor performs a full depth-wise scan of the entire scene and continuously records contrast values. After the position with the maximum contrast is found, the lens that has moved to the top returns to that position to complete the final focusing.
In phase-detection focusing, the auto-focus sensor and the pixel sensor are generally integrated directly together. Pairs of left-right opposing pixels are taken from the pixel sensor to detect information, such as the amount of incoming light, of objects in the scene; by comparing the correlation values on the left and right sides, an accurate focus point is found, after which the lens motor pushes the lens to the corresponding position in one step to complete focusing.
Laser focusing emits a low-power laser toward the subject through an infrared laser sensor next to the rear camera; after reflection, the laser is received by the sensor, and the distance to the subject is calculated. The inter-lens motor then pushes the lens directly to the corresponding position to complete focusing. Like phase-detection focusing, this is also completed in one step.
In current auto-focus solutions, calculation is still performed on the complete data of the entire image at once. The data volume is large, which increases the computation time of the parameters; generally, the calculation can only be performed every certain number of frames, and focus parameters cannot be calculated for every frame, i.e., the motor cannot be driven in time to adapt to the change of every frame for the focusing operation. At the same time, the large amount of calculation also increases the power consumption of the system.
The present application aims to introduce an auto-focus solution to improve this situation. When shooting the first few frame images, the device may still use the above auto-focus solutions to achieve focusing and acquire the first image. After several frames have been acquired by focusing, subsequent images can refer to the focusing situation of the preceding images: the focus parameters are calculated automatically by an algorithm, focusing is completed automatically, and the focusing algorithm is optimized while focusing continues. Thus, the user does not need to focus manually.
120: Extract a first data component from a plurality of data components of the first image.
What is input into the algorithm to calculate the focus parameter may not be the complete first image, but partial image data extracted from the first image without affecting the acquisition of the focus parameter; this extracted partial image data is the first data component. Compared with the complete first image, the first data component has a reduced data volume and can be used to obtain the focus parameter quickly. When acquiring the focus parameter, what matters is the distance of the focus area relative to the lens; this result mainly depends on the sharpness of the image and does not depend on the complete data of the image. For example, a data component of a certain color is extracted from the first image to participate in the calculation; although this data component is not the complete first image, it can still reflect the sharpness of the first image and does not affect the calculation result. By reducing the amount of calculation, the focus parameter can be calculated quickly, achieving efficient focusing.
Images usually have their own formats, such as RAW format and YUV format. The data components of an image may refer to the color components of the image. For example, a RAW-format image can be processed to obtain RGB data, which represents a specific color by combining different brightness values of the three primary colors R (red), G (green), and B (blue). Correspondingly, a RAW-format original image can be divided into an R color component, a G color component, and a B color component, corresponding respectively to the red, green, and blue data of the image. Although these color components are not the complete image data, they can still reflect the sharpness of the image. Selecting only some of the color components to calculate the focus parameter does not affect the calculation result.
In an embodiment, one or more of the R color component, G color component, and B color component may be extracted from the plurality of color components of the RAW-format first image as the first data component. Different color components are stored in different locations; when extracting the first data component from the plurality of data components of the first image, the extraction may be performed according to the storage location, and the first data component is extracted from the storage location corresponding to the first data component.
For example, before extraction, the number of pixels corresponding to each color component in the first image may be counted, and the color component with the largest number of corresponding pixels in the first image is determined as the first data component. For example, if the dominant tone of the first image is green and green pixels are counted as the most numerous in the first image, the G color component is determined as the first data component, the G color component is extracted from the storage location corresponding to the G color component, and the focus parameter is then calculated according to the G color component. In this case, the first data component may also be called the target color component.
It should be noted that which data component or components are extracted from the first image as the first data component is not fixed, but may change in real time according to the shooting situation. Understandably, the first image may be any frame image in the video stream. If green is the dominant tone in a certain frame, the G color component may be extracted as the first data component; if the dominant tone of the next frame becomes red, then in the next frame the R color component is extracted as the first data component. Determining the first data component according to the real-time shooting situation, and extracting the first data component according to the actual situation of each frame, ensures the accuracy of the calculated focus parameter when the component participates in the calculation.
In an embodiment, a RAW-format image can be converted into a YUV-format image. A YUV-format image includes three data components: a Y component, a U component, and a V component. The Y component represents luminance, i.e., the grayscale value, while the U and V components represent chrominance, describing color and saturation and specifying the color of a pixel. One or more of the Y component, U component, and V component may be extracted from the plurality of data components of the YUV-format first image as the first data component.
In an embodiment, step 120 may be performed in an integrated circuit chip adopting hardware acceleration technology, i.e., the step of "extracting the first data component from the plurality of data components of the image" is implemented in hardened hardware, using a hardware module in place of a software algorithm to make full use of the inherent speed of hardware and achieve fast extraction.
130: Determine a first focus parameter according to the first data component.
In an embodiment, some images or image data components, together with the focus parameters corresponding to them, are collected in advance as samples, and training is performed through a learning algorithm to obtain the correspondence between image data and focus parameters.
After the first data component is extracted, it is input into the pre-trained learning algorithm, and the focus parameter corresponding to the first data component, i.e., the first focus parameter, is obtained according to the correspondence between image data and focus parameters in the learning algorithm. The first focus parameter is the estimated focus parameter to be used for shooting the next frame image.
140: Perform focus shooting according to the first focus parameter to acquire a second image.
After the first focus parameter is obtained, it can be used as the focus parameter for shooting the next frame image, and the focus shooting of the next frame image continues, thereby obtaining the second image. According to the first focus parameter, the motor drives the lens to move and changes the focal length, so that the focus parameter of the camera matches the first focus parameter, and shooting is performed with the first focus parameter used for focusing, so as to acquire the second image.
In an embodiment, for the acquired second image, all the steps performed for the first image above are repeated; that is, after performing focusing according to the first focus parameter to acquire the second image, the method further includes:
extracting a second data component from a plurality of data components of the second image;
determining a second focus parameter according to the second data component;
performing focus shooting according to the second focus parameter to acquire a third image.
Please refer to FIG. 2, which is a second schematic flowchart of a shooting method provided by an embodiment of the present application. First, the user turns on the camera on the electronic device; after the camera is turned on, the motor drives the lens to move for auto-focus shooting, so as to acquire the first image. For the first image, the electronic device performs data component extraction to obtain the first data component of the first image, and uses the first data component to perform focus parameter calculation to obtain the first focus parameter. After the first focus parameter is obtained, the camera performs focus shooting according to the first focus parameter, thereby acquiring the second image. The second image repeats all the steps that the first image went through in the above embodiment, so that the second focus parameter is obtained, and focus shooting is performed with the second focus parameter to obtain the third image, and so on. That is, the shooting method provided by the embodiment of the present application is not applied to only a certain frame image, but is applied continuously while multiple frame images are shot and acquired, and the calculated focus parameters are continuously updated as well.
After images (including the first image, the second image, the third image, and so on) are acquired by shooting with the calculated focus parameters, the obtained images can be output for back-end image processing, for example, operations such as cropping, doodling, watermarking, and adding text to the images.
In some cases, the focus parameter of each frame image at the time of actual shooting may not equal the focus parameter calculated from the previous frame image. For example, the focus parameter may undergo a correction process, or the user may perform manual focus adjustment after the motor auto-focuses, and so on; these situations may cause the actual focus parameter at the time of shooting to differ from the focus parameter calculated from the previous frame image. Therefore, while continuously determining the latest focus parameters through the learning algorithm, the present application can also acquire the actual focus parameters of the images shot in each frame and input the actual focus parameters into the learning algorithm, so as to update the learning algorithm, improve its accuracy, and enable the focus parameters output by the learning algorithm to adapt to the user's real-time shooting needs and shooting habits.
在一实施例中,在从第一图像的多种数据分量中提取出第一数据分量之前,先确定第一图像的对焦区域,在从第一图像的多种数据分量中提取出第一数据分量时,从对焦区域的多种数据分量中提取出对焦区域的第一数据分量,将对焦区域的第一数据分量用于计算第一对焦参数。
由于对焦区域是经过对焦处理的重点处理区域,通过对焦处理后,对焦区域相比非对焦区域的清晰度更高,在高清晰度下,对焦区域中提取出的第一数据分量能够更加确保得出的第一对焦参数的准确性。同时,由于只使用了对焦区域的第一数据分量而非整个第一图像的第一数据分量,因而参与计算的数据量得到了进一步缩减,能够更加快速地得出第一对焦参数从而进一步提高对焦拍摄的效率。
在一实施例中,在确定第一图像的对焦区域时,首先,获取第一图像的对焦参数,根据对焦参数确定拍摄第一图像时的焦平面,获取焦平面的深度区间,将第一图像中深度信息处于焦平面的深度区间的区域确定为对焦区域。
其中,焦平面的深度区间可以是对某一对焦对象进行拍摄时,该对焦对象所处的深度区间。可以理解的是,由于物体是立体的,而焦平面对应的深度为一个具体数值,往往无法囊括对象的全部深度信息,因而需要用深度区间来概括。在划分焦平面的深度区间时,将焦平面深度的前后一段范围内的深度都归入进来,从而能够正确分割出聚焦对象所在的区域作为焦平面区域。
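对焦区域的划分可以用如下示意性的Python草图说明(深度图、焦平面深度与前后范围margin均为示例性输入):

```python
import numpy as np

def focus_region_mask(depth_map, focal_depth, margin):
    """将深度处于 [focal_depth - margin, focal_depth + margin] 区间内的像素
    标记为对焦区域:在焦平面深度前后各留一段范围,以囊括立体对象的深度。"""
    depth = np.asarray(depth_map, dtype=float)
    return (depth >= focal_depth - margin) & (depth <= focal_depth + margin)
```

得到的掩码即对焦区域,后续可只从该区域的多种数据分量中提取第一数据分量。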
在一实施例中,可以根据电子设备当前的功耗情况确定参与数据分量提取的图像的数量。预先设定功耗等级与图像数量的对应关系,获取电子设备当前的功耗值(该功耗值例如可以为电子设备已消耗的电量、后台处理的进程数量等),根据预先设定的功耗等级,确定当前的功耗值所属的功耗等级,并获取与该功耗等级对应数量的图像。例如,功耗等级从低功耗到高功耗分别为一到七级,当确定出当前的功耗值所属的功耗等级为一级时,获取当前帧图像拍摄前的前7帧图像进行数据分量提取,当确定出当前的功耗值所属的功耗等级为五级时,只获取当前帧图像拍摄前的前3帧图像进行数据分量提取,当确定出当前的功耗值所属的功耗等级为七级时,只获取当前帧图像拍摄前的前1帧图像进行数据分量提取。
其中,当参与数据分量提取的图像的数量为不止一个时,每个图像提取的第一数据分量都分别输入到学习算法中进行计算,根据这些第一数据分量的计算结果得出对焦参数的变化趋势,根据得出的对焦参数的变化趋势对下一帧图像的对焦参数进行预测,从而得到第一对焦参数。
功耗等级越低,参与数据分量提取的图像的数量越多,需要处理的数据越多,得出的第一对焦参数越准确。功耗等级越高,参与数据分量提取的图像的数量越少,需要处理的数据越少,越节约功耗。
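上述功耗等级与图像数量的对应关系,可以用如下示意性的Python草图表示(一级取前7帧、五级取前3帧、七级取前1帧与正文示例一致,中间等级按线性过渡,属于假设):

```python
def frames_for_power_level(level: int) -> int:
    """根据功耗等级(1~7,等级越高表示功耗越紧张)返回参与数据分量提取的历史帧数。
    映射与正文示例一致:一级取前 7 帧,五级取前 3 帧,七级取前 1 帧;
    其余等级按线性过渡,属于示意性假设。"""
    if not 1 <= level <= 7:
        raise ValueError("功耗等级应在 1~7 之间")
    return 8 - level
```

功耗充裕时取更多历史帧以提高对焦参数的准确性,功耗紧张时减少帧数以节约功耗。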
根据前一实施例所描述的方法,以下作进一步详细说明。
请参照图3,图3为本申请实施例提供的拍摄方法的第三种流程示意图。该拍摄方法可应用于本申请实施例提供的电子设备,本申请实施例提供的拍摄方法可以包括以下步骤:
201、获取第一图像。
在一实施例中,启动相机进行拍摄,获取待拍摄场景的视频流,从中得到待拍摄场景的多帧图像,这些图像具有相同或相似的图像内容。这些图像可以为RAW(RAW Image Format,原始)格式,即图像感应器将捕捉到的光源信号转化为数字信号后,未经加工的原始图像。步骤201中获取的第一图像,可以是视频流中的某一帧图像。
在一实施例中,获取的第一图像可以由设备进行自动对焦,然后拍摄得到。对焦的过程就是通过移动镜片来使对焦区域的图像达到最清晰的过程,对焦成功以后,焦点的清晰度最高,而焦点以外的区域表现为相对模糊的状态。其中,自动对焦方式包括反差对焦、相位对焦、激光对焦等。
202、统计第一图像中每种颜色分量对应的像素点数量。
203、将第一图像中对应像素点数量最多的颜色分量确定为第一数据分量。
图像通常有自己的格式,例如,RAW格式、YUV格式等。对于RAW格式的图像,可以对其处理得到RGB数据,RGB数据通过用三原色R(Red,红)、G(Green,绿)、B(Blue,蓝)的不同的亮度值组合来表示某一种具体的颜色。相应的,对于RAW格式的原始图像,可以分为R颜色分量、G颜色分量和B颜色分量,分别对应图像的红色、绿色和蓝色数据,可以将第一图像的R颜色分量、G颜色分量和B颜色分量中的一种或多种作为第一数据分量。
在一实施例中,可以统计出第一图像中每种颜色分量对应的像素点数量,将图像中对应像素点数量最多的颜色分量确定为第一数据分量。例如,若第一图像的主基调为绿色,统计出第一图像中绿色的像素点是最多的,则将G颜色分量确定为第一数据分量,从G颜色分量对应的存储位置中提取出G颜色分量。此时,第一数据分量也可称为目标颜色分量。
需要说明的是,将哪一或哪些颜色分量作为第一数据分量,并不是固定的,而是可以根据拍摄情况实时改变。可以理解的是,第一图像可以是视频流中的任意一帧图像,假如某一帧图像中绿色为主基调,则可提取G颜色分量作为第一数据分量,而下一帧图像中主基调变为了红色,则在下一帧图像中,提取R颜色分量作为第一数据分量。根据实时的拍摄情况确定第一数据分量,即根据每一帧图像的实际情况提取出第一数据分量,在后续参与计算对焦参数时,能够保证计算出的对焦参数的准确性。
204、从第一图像的多种数据分量中提取出第一数据分量。
在确定出第一数据分量之后,即可从第一图像的多种数据分量中提取第一数据分量。由于不同的颜色分量存储在不同的位置,在从第一图像的多种数据分量中提取第一数据分量时,可以根据存储位置进行提取,从第一数据分量对应的存储位置提取出第一数据分量。
在一实施例中,步骤204可以在采用了硬件加速技术的集成电路芯片中进行,即采用硬件固化的方式实施“从第一图像的多种数据分量中提取出第一数据分量”的步骤,使用硬件模块来代替软件算法以充分利用硬件所固有的快速特性,达到快速提取的目的。
205、根据第一数据分量确定第一对焦参数。
在一实施例中,预先采集一些图像或图像数据分量以及这些图像或图像数据分量对应的对焦参数作为样本,通过学习算法进行训练,得到图像数据与对焦参数的对应关系。
在提取出第一数据分量后,将第一数据分量输入到预先训练好的学习算法中,根据学习算法中图像数据与对焦参数的对应关系得到该第一数据分量对应的对焦参数,即第一对焦参数。
206、获取第一图像的两帧相邻历史帧图像,确定两帧相邻历史帧图像的画面变化率。
其中,第一图像的历史帧图像指的是在第一图像之前拍摄的图像。
207、根据画面变化率确定要提取的第三数据分量的种类。
208、从图像的多种数据分量中提取出第三数据分量。
在从图像的多种数据分量中提取出第一数据分量后,同样可以从图像的多种数据分量中提取出第三数据分量。需要说明的是,第三数据分量可以是除第一数据分量以外的一种或多种数据分量,而并不一定是除第一数据分量以外的所有数据分量。
而至于选择哪一或哪几种数据分量作为第三数据分量,首先,提取的第三数据分量的数量可以通过步骤206和步骤207的方法确定,即首先获取第一图像的两帧相邻历史帧图像,确定两帧相邻历史帧图像的画面变化率,然后根据画面变化率确定要提取的第三数据分量的数量和种类。
其中,拍摄得到的两帧相邻历史帧图像可以指在第一图像之前拍摄得到的最后两帧图像,即本次拍摄中最晚拍摄得到的两帧图像,这两帧图像是所有图像中最新的两帧图像,根据这两帧图像能够较准确地预测即将拍摄的图像的内容。当两帧相邻历史帧图像的画面变化率较大时,可以认为即将拍摄的图像中的画面相比这两帧相邻历史帧图像的画面,画面内容可能发生较大变化。例如,可能变换了待拍摄场景,或者待拍摄场景中的物体正在运动等,这些情况都可能引起画面内容发生较大变化。
为了在画面内容发生较大变化的情况下确保第一对焦参数的准确性,两帧相邻历史帧图像的画面变化率越大,则提取越多的第三数据分量来对第一对焦参数进行修正;若两帧相邻历史帧图像的画面变化率较小,则可以提取比较少的第三数据分量来对第一对焦参数进行修正。例如,当画面变化率小于变化率阈值时,每帧图像只提取出一种第三数据分量。
除确定提取的第三数据分量的数量以外,具体提取哪一或哪些数据分量作为第三数据分量则可以通过以下方法决定:当数据分量为颜色分量时,统计第一图像中每种颜色分量对应的像素点数量,将第一图像中对应像素点数量最多的颜色分量确定为第一数据分量,在剩余的数据分量中,按照对应像素点数量从多到少的顺序选择确定数量的第三数据分量。例如,当确定提取两个第三数据分量时,将第一图像中对应像素点数量最多的颜色分量确定为第一数据分量,将图像中对应像素点数量第二多和第三多的颜色分量确定为第三数据分量。
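按像素点数量排序选出第一数据分量与确定数量的第三数据分量,可以用如下示意性的Python草图表示(分量名与数据结构均为示例):

```python
def select_components(counts, num_third):
    """counts: {颜色分量名: 像素点数量}。像素点最多的分量作为第一数据分量,
    其余分量按像素点数量从多到少选出 num_third 个作为第三数据分量。"""
    ordered = sorted(counts, key=counts.get, reverse=True)   # 按像素点数量降序
    return ordered[0], ordered[1:1 + num_third]
```

例如,统计结果为G分量最多、B分量次之、R分量最少且需提取两个第三数据分量时,该函数返回G作为第一数据分量,B、R作为第三数据分量。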
在确定出第三数据分量后,根据各数据分量存储位置的不同,从确定的第三数据分量的存储位置中提取得到第三数据分量。
209、获取第三数据分量的缩略图。
例如,可以对第三数据分量进行缩放处理,缩放处理的方式例如可以采用Resize(缩放)函数,Resize函数能够通过插值算法实现数据量缩减,改变图像大小。插值算法例如可以包括最近邻插值算法、双线性插值算法、双三次插值算法、基于像素区域关系的插值算法和兰索斯插值算法等。第三数据分量在进行缩放处理后,图像内容不变,只是大小和原图像不一样。
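以最近邻插值为例,缩放处理可以用如下示意性的Python草图表示(仅演示数据量缩减的原理,并非对Resize具体实现的限定):

```python
import numpy as np

def nearest_resize(img, out_h, out_w):
    """最近邻插值缩放的最小实现:只缩减数据量,图像内容保持不变。"""
    img = np.asarray(img)
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # 每个输出行对应的源行索引
    cols = np.arange(out_w) * w // out_w   # 每个输出列对应的源列索引
    return img[rows][:, cols]
```

缩放后的缩略图数据量更小,用于修正计算时能够更快得到第一对焦修正参数。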
通过本申请实施例中描述的提取第一数据分量的规则,能够保证该第一数据分量计算出的对焦参数的大致准确。通过提取第三数据分量可以对第一数据分量计算出的对焦参数进行修正,进一步确保对焦参数的准确性。而由于第三数据分量只是用来参与修正,在精确度上的要求没有第一数据分量那么高,因而,对第三数据分量进行缩放处理,在保留图像内容的同时缩减图像的大小,从而减少第三数据分量的数据量,提高修正效率,更快地得到修正后的对焦参数,进一步提高对焦拍摄的效率。
210、根据第三数据分量的缩略图确定第一对焦修正参数。
将第三数据分量缩放处理后得到的缩略图输入到预先训练好的学习算法中进行计算,根据其中图像数据与对焦参数的对应关系得到第一对焦修正参数。第一对焦修正参数起到修正作用,用来对第一对焦参数进行修正,提高第一对焦参数的准确度。
211、使用第一对焦修正参数对第一对焦参数进行修正,得到修正后的第一对焦参数。
通过确定数量的第三数据分量得到对应数量的第一对焦修正参数,利用对应数量的第一对焦修正参数对第一对焦参数进行修正,得到更加准确的修正后的第一对焦参数。
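正文未限定修正的具体计算方式,以下给出一种纯属假设的示意(将第一对焦参数与各第一对焦修正参数的均值做加权平均,权重为示例):

```python
def apply_correction(focus_param, correction_params, weight=0.5):
    """一种假设的修正方式:将第一对焦参数与各第一对焦修正参数的均值做加权平均。
    correction_params 为由各第三数据分量得到的修正参数列表;weight 为示例权重。"""
    if not correction_params:
        return focus_param                       # 无修正参数时直接沿用原对焦参数
    corr = sum(correction_params) / len(correction_params)
    return (1 - weight) * focus_param + weight * corr
```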
212、根据修正后的第一对焦参数进行对焦拍摄,以获取第二图像。
在得到修正后的第一对焦参数后,即可将修正后的该第一对焦参数作为拍摄下一帧图像时的对焦参数,继续下一帧图像的对焦拍摄,从而得到第二图像。根据修正后的第一对焦参数,马达驱动镜片进行移动,以及改变焦距,使得摄像头的对焦参数与修正后的第一对焦参数相符,在使用该修正后的第一对焦参数进行对焦的情况下拍摄,以获取第二图像。
在一实施例中,对于获取的第二图像,将重复上述第一图像的所有步骤,即,在根据第一对焦参数进行对焦,以获取第二图像之后,还包括:
从第二图像的多种数据分量中提取第二数据分量;
根据第二数据分量确定第二对焦参数;
从第二图像的多种数据分量中提取出第四数据分量,第四数据分量为第二图像中除第二数据分量以外的一种或多种数据分量;
根据第四数据分量确定第二对焦修正参数,第二对焦修正参数用于对第二对焦参数进行修正;
根据修正后的第二对焦参数进行对焦拍摄,以获取第三图像。
请参阅图4,图4为本申请实施例提供的拍摄方法的第四种流程示意图。首先,用户在电子设备上开启相机,相机开启后,马达驱动镜片移动从而自动对焦拍摄,以获取第一图像。对于第一图像,电子设备会对其进行数据分量提取,得到第一图像的第一数据分量和第三数据分量。对于第一数据分量,利用该第一数据分量来进行对焦参数计算,得到第一对焦参数。而对于第三数据分量,对第三数据分量完成缩放处理后进行对焦参数计算,得到第一对焦修正参数。最终第一对焦修正参数和第一对焦参数这两种对焦参数共同计算出最终的对焦参数,即由第一对焦修正参数对第一对焦参数进行对焦参数修正,得到修正后的第一对焦参数。得到修正后的第一对焦参数后,摄像头根据该第一对焦参数进行对焦拍摄,从而获取到第二图像。第二图像重复上述实施例中第一图像经历过的所有步骤,进而获取到第二对焦参数和第二对焦修正参数,由第二对焦修正参数对第二对焦参数进行对焦参数修正,得到修正后的第二对焦参数,根据修正后的第二对焦参数进行对焦拍摄从而得到第三图像,以此类推。即,本申请实施例提供的拍摄方法并非是只应用于某一帧图像,而是在拍摄获取多帧图像期间,持续应用,计算出的对焦参数也持续更新。第二对焦参数和第二对焦修正参数的具体获取方式可参见前面对于第一对焦参数和第一对焦修正参数的相关描述,在此不再赘述。
在利用修正后的对焦参数拍摄获取到图像(包括第一图像、第二图像、第三图像、……)之后,得到的图像可以输出,用来进行后端图像处理。例如对图像进行裁剪、涂鸦、加水印、加文字等操作。
在一些情况下,每一帧图像在实际拍摄时的对焦参数可能并不等于根据上一帧图像计算出的对焦参数,例如,对焦参数可能要经历修正过程,或者,用户可能在马达自动对焦后又进行手动调焦,等等,这些情况都可能导致拍摄图像时的实际对焦参数与上一帧图像计算出的对焦参数并不相等。因而,本申请在通过学习算法不断确定最新的对焦参数的同时,也可以获取每一帧拍摄出来的图像的实际对焦参数,将实际对焦参数输入到学习算法中,以对学习算法进行更新,提高学习算法的精确度,并且,使得学习算法输出的对焦参数能够适应用户的实时拍摄需求及拍摄习惯。
请参阅图5,图5为本申请实施例提供的拍摄方法的第五种流程示意图。在图5中,图像中包含三种数据分量,分别是Y分量、U分量和V分量。其中Y分量的数量是U分量的2倍,也是V分量的2倍,提取数据分量的过程由采用了硬件加速技术的集成电路芯片进行。在芯片进行分量提取之前,确定出当前的第一数据分量为Y分量,则芯片提取Y分量作为第一数据分量,进行对焦参数计算后得到第一对焦参数,同时,芯片提取出U分量和V分量作为第三数据分量,对提取出的U分量和V分量进行缩放处理后,进行对焦参数计算,得到第一对焦修正参数。
在一实施例中,在得到第一对焦参数和第一对焦修正参数后,一方面,用第一对焦修正参数对第一对焦参数进行修正,得到修正后的第一对焦参数,用修正后的第一对焦参数指引马达对焦,另一方面,将得到的第一对焦参数和第一对焦修正参数进行比较,将比较结果反馈给芯片,影响芯片的分量提取过程。并且,马达对焦后拍摄得到第二图像,第二图像根据第二数据分量得到第二对焦参数,根据第四数据分量得到第二对焦修正参数,第二对焦参数和第二对焦修正参数同样重复上述第一对焦参数和第一对焦修正参数经历的步骤。
即,第一数据分量可以为第一类型的数据分量,而根据第三数据分量确定第一对焦修正参数之后,还包括:计算第一对焦修正参数与第一对焦参数的差值。若差值小于或等于预设阈值,则仍然从第二图像的多种数据分量中提取出第一类型的第二数据分量;若差值大于预设阈值,则从第二图像的多种数据分量中提取出第二类型的第二数据分量,第二类型不同于第一类型。
例如,在得到第一对焦参数和第一对焦修正参数后,计算第一对焦参数和第一对焦修正参数的差值,若该差值小于或等于预设阈值,则判定第一对焦参数与第一对焦修正参数差距较小,图像的Y分量适合用来计算对焦参数,从而,芯片对第二帧图像进行分量提取时,仍然选择第二图像的Y分量作为用来计算第二对焦参数的第二数据分量。
若该差值大于预设阈值,则判定第一对焦参数与第一对焦修正参数差距较大,图像的Y分量并不适合用来计算对焦参数,从而,芯片对第二帧图像进行分量提取时,将更换数据分量的类型,例如,重新选择U分量作为第二数据分量。
通过比较第一对焦参数和第一对焦修正参数是否差距过大,确定数据分量的种类选取是否准确,从而,在不准确时能够及时调整,确保选取最合适的数据分量来计算对焦参数。
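上述根据差值决定是否更换数据分量类型的判断,可以用如下示意性的Python草图表示(阈值与分量类型名均为示例):

```python
def next_component_type(current, focus_param, correction_param, threshold, fallback):
    """比较对焦参数与对焦修正参数的差值:差值不超过阈值则沿用当前分量类型,
    否则切换到备选分量类型(current、fallback 取值如 "Y"、"U",均为示例)。"""
    if abs(focus_param - correction_param) <= threshold:
        return current          # 差距较小,当前分量类型适合计算对焦参数
    return fallback             # 差距较大,更换数据分量的类型
```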
由上可知,本申请实施例提供的拍摄方法,首先获取第一图像;然后从第一图像的多种数据分量中提取出第一数据分量;根据第一数据分量确定第一对焦参数;进而根据第一对焦参数进行对焦拍摄,以获取第二图像。本申请实施例通过提取数据分量的形式将图像由完整的数据变为数据分量,减小了参与计算的数据量,能够快速计算出第一对焦参数,使用第一对焦参数进行下一次对焦拍摄,因而,提高了对焦拍摄的效率。
本申请实施例还提供一种拍摄装置。请参照图6,图6为本申请实施例提供的拍摄装置的第一种结构示意图。其中该拍摄装置300可应用于电子设备,该拍摄装置300包括获取模块301、第一提取模块302、第一确定模块303和第一拍摄模块304,如下:
获取模块301,用于获取第一图像;
第一提取模块302,用于从第一图像的多种数据分量中提取出第一数据分量;
第一确定模块303,用于根据第一数据分量确定第一对焦参数;
第一拍摄模块304,用于根据第一对焦参数进行对焦拍摄,以获取第二图像。
请一并参阅图7,图7为本申请实施例提供的拍摄装置300的第二种结构示意图。在一实施例中,拍摄装置300还包括第二提取模块305、第二确定模块306和第二拍摄模块307:
第二提取模块305,用于从第二图像的多种数据分量中提取第二数据分量;
第二确定模块306,用于根据第二数据分量确定第二对焦参数;
第二拍摄模块307,用于根据第二对焦参数进行对焦拍摄,以获取第三图像。
请继续参阅图7,在一实施例中,拍摄装置300还包括第三提取模块308、第三确定模块309和修正模块310:
第三提取模块308,用于从第一图像的多种数据分量中提取出第三数据分量,第三数据分量为第一图像中除第一数据分量以外的一种或多种数据分量;
第三确定模块309,用于根据第三数据分量确定第一对焦修正参数,第一对焦修正参数用于对第一对焦参数进行修正;
修正模块310,用于使用第一对焦修正参数对第一对焦参数进行修正,得到修正后的第一对焦参数。
在一实施例中,在根据第三数据分量确定第一对焦修正参数时,第三确定模块309可以用于:
获取第三数据分量的缩略图;
根据第三数据分量的缩略图确定第一对焦修正参数。
请继续参阅图7,在一实施例中,拍摄装置300还包括第四确定模块311,在从第一图像的多种数据分量中提取出第三数据分量之前,第四确定模块311可以用于:
获取第一图像的两帧相邻历史帧图像,确定两帧相邻历史帧图像的画面变化率;
根据画面变化率确定要提取的第三数据分量的种类。
请继续参阅图7,在一实施例中,数据分量包括颜色分量,拍摄装置300还包括第五确定模块312,在从第一图像的多种数据分量中提取出第一数据分量之前,第五确定模块312可以用于:
统计第一图像中每种颜色分量对应的像素点数量;
将第一图像中对应像素点数量最多的颜色分量确定为第一数据分量。
请继续参阅图7,在一实施例中,拍摄装置300还包括第六确定模块313,在从第一图像的多种数据分量中提取出第一数据分量之前,第六确定模块313可以用于:
确定第一图像的对焦区域。
其中,在从第一图像的多种数据分量中提取出第一数据分量时,第一提取模块302可以用于:
从对焦区域的多种数据分量中提取出对焦区域的第一数据分量。
以上各个模块的具体实施可参见前面的实施例,在此不再赘述。
由上可知,本申请实施例提供的拍摄装置,首先获取模块301获取第一图像;然后第一提取模块302从第一图像的多种数据分量中提取出第一数据分量;第一确定模块303根据第一数据分量确定第一对焦参数;进而拍摄模块304根据第一对焦参数进行对焦拍摄,以获取第二图像。本申请实施例通过提取数据分量的形式将图像由完整的数据变为数据分量,减小了参与计算的数据量,能够快速计算出第一对焦参数,使用第一对焦参数进行下一次对焦拍摄,因而,提高了对焦拍摄的效率。
本申请实施例还提供一种电子设备。电子设备可以是智能手机、平板电脑、游戏设备、AR(Augmented Reality,增强现实)设备、汽车、车辆周边障碍检测装置、音频播放装置、视频播放装置、笔记本、桌面计算设备、可穿戴设备诸如手表、眼镜、头盔、电子手链、电子项链、电子衣物等设备。
参考图8,图8为本申请实施例提供的电子设备400的第一种结构示意图。其中,电子设备400包括处理器401和存储器402。存储器中存储有计算机程序,处理器通过调用存储器中存储的计算机程序,以执行本申请实施例提供的任一种拍摄方法中的步骤。处理器401与存储器402电性连接。
处理器401是电子设备400的控制中心,利用各种接口和线路连接整个电子设备的各个部分,通过运行或调用存储在存储器402内的计算机程序,以及调用存储在存储器402内的数据,执行电子设备的各种功能和处理数据,从而对电子设备进行整体监控。
在本实施例中,电子设备400中的处理器401可以按照上述拍摄方法中的步骤,将一个或一个以上的计算机程序的进程对应的指令加载到存储器402中,并由处理器401来运行存储在存储器402中的计算机程序,从而实现上述拍摄方法中的步骤,例如:
获取第一图像;
从第一图像的多种数据分量中提取出第一数据分量;
根据第一数据分量确定第一对焦参数;
根据第一对焦参数进行对焦拍摄,以获取第二图像。
请继续参考图9,图9为本申请实施例提供的电子设备400的第二种结构示意图。其中,电子设备400还包括:显示屏403、控制电路404、输入单元405、传感器406以及电源407。其中,处理器401分别与显示屏403、控制电路404、输入单元405、传感器406以及电源407电性连接。
显示屏403可用于显示由用户输入的信息或提供给用户的信息以及电子设备的各种图形用户接口,这些图形用户接口可以由图像、文本、图标、视频和其任意组合来构成。
控制电路404与显示屏403电性连接,用于控制显示屏403显示信息。
输入单元405可用于接收输入的数字、字符信息或用户特征信息(例如指纹),以及产生与用户设置以及功能控制有关的键盘、鼠标、操作杆、光学或者轨迹球信号输入。例如,输入单元405可以包括触控感应模组。
传感器406用于采集电子设备自身的信息或者用户的信息或者外部环境信息。例如,传感器406可以包括距离传感器、磁场传感器、光线传感器、加速度传感器、指纹传感器、霍尔传感器、位置传感器、陀螺仪、惯性传感器、姿态感应器、气压计、心率传感器等多个传感器。
电源407用于给电子设备400的各个部件供电。在一些实施例中,电源407可以通过电源管理系统与处理器401逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
尽管图8及图9中未示出,电子设备400还可以包括摄像头、蓝牙模块等,在此不再赘述。
在本实施例中,电子设备400中的处理器401可以按照上述拍摄方法中的步骤,将一个或一个以上的计算机程序的进程对应的指令加载到存储器402中,并由处理器401来运行存储在存储器402中的计算机程序,从而实现上述拍摄方法中的步骤,例如:
获取第一图像;
从第一图像的多种数据分量中提取出第一数据分量;
根据第一数据分量确定第一对焦参数;
根据第一对焦参数进行对焦拍摄,以获取第二图像。
在一些情况下,在根据第一对焦参数进行对焦拍摄,以获取第二图像之后,处理器401还执行以下步骤:
从第二图像的多种数据分量中提取第二数据分量;
根据第二数据分量确定第二对焦参数;
根据第二对焦参数进行对焦拍摄,以获取第三图像。
在一些情况下,在根据第一数据分量确定第一对焦参数之后,处理器401还执行以下步骤:
从第一图像的多种数据分量中提取出第三数据分量,第三数据分量为第一图像中除第一数据分量以外的一种或多种数据分量;
根据第三数据分量确定第一对焦修正参数,第一对焦修正参数用于对第一对焦参数进行修正。
在一些情况下,在根据第三数据分量确定第一对焦修正参数时,处理器401执行以下步骤:
获取第三数据分量的缩略图;
根据第三数据分量的缩略图确定第一对焦修正参数。
在一些情况下,在从第一图像的多种数据分量中提取出第三数据分量之前,处理器401还执行以下步骤:
获取第一图像的两帧相邻历史帧图像,确定两帧相邻历史帧图像的画面变化率;
根据画面变化率确定要提取的第三数据分量的种类。
在一些情况下,数据分量包括颜色分量,在从第一图像的多种数据分量中提取出第一数据分量之前,处理器401还执行以下步骤:
统计第一图像中每种颜色分量对应的像素点数量;
将第一图像中对应像素点数量最多的颜色分量确定为第一数据分量。
在一些情况下,在从第一图像的多种数据分量中提取出第一数据分量之前,处理器401还执行以下步骤:
确定第一图像的对焦区域;
从第一图像的多种数据分量中提取出第一数据分量包括:
从对焦区域的多种数据分量中提取出对焦区域的第一数据分量。
由上可知,本申请实施例提供了一种电子设备,电子设备中的处理器执行以下步骤:首先获取第一图像;然后从图像的多种数据分量中提取出第一数据分量;根据第一数据分量确定第一对焦参数;进而根据第一对焦参数进行对焦拍摄,以获取第二图像。本申请实施例通过提取数据分量的形式将图像由完整的数据变为数据分量,减小了参与计算的数据量,能够快速计算出第一对焦参数,使用第一对焦参数进行下一次对焦拍摄,因而,提高了对焦拍摄的效率。
请参阅图10,本申请实施例还提供一种电子设备,电子设备中至少包括摄像头408和处理器402,处理器402包括前端图像处理芯片4021和主处理器4022,其中:
摄像头408,用于拍摄得到图像;
前端图像处理芯片4021,用于从图像的多种数据分量中提取出第一数据分量;
主处理器4022,用于根据提取的第一数据分量确定第一对焦参数,以使得摄像头根据第一对焦参数进行对焦,返回执行拍摄得到图像的步骤。
该前端图像处理芯片4021为集成电路芯片,可用于智能手机、平板电脑、游戏设备、AR(Augmented Reality,增强现实)设备、汽车、车辆周边障碍检测装置、音频播放装置、视频播放装置、笔记本、桌面计算设备、可穿戴设备诸如手表、眼镜、头盔、电子手链、电子项链等电子设备中。
本申请实施例提供的前端图像处理芯片4021与主处理器4022相互独立,采取了硬件加速技术,把计算量非常大的工作分配给专门的硬件来处理以减轻主处理器4022的工作量,从而不需要主处理器4022通过软件一层层翻译图像中的每一个像素。以硬件加速的方式实施本申请实施例提供的拍摄方法,可以实现高速的数据分量提取。
其中,前端图像处理芯片4021专门负责图像的数据分量提取。摄像头408拍摄得到图像后,由采取了硬件加速技术的前端图像处理芯片4021进行图像的数据分量提取,从图像的多种数据分量中提取出第一数据分量,然后,提取出的第一数据分量被传输给主处理器4022,主处理器4022根据前端图像处理芯片4021提取的第一数据分量确定第一对焦参数,使得摄像头408根据第一对焦参数进行对焦,返回执行拍摄得到图像的步骤。
在一实施例中,在前端图像处理芯片4021从图像的多种数据分量中提取出第一数据分量之后,前端图像处理芯片4021继续从图像的多种数据分量中提取出第二数据分量,并将提取出的第二数据分量传输给主处理器4022,由主处理器4022进行后续计算,例如,主处理器4022根据第二数据分量确定对焦修正参数,使用对焦修正参数对第一对焦参数进行修正,得到修正后的第一对焦参数,使得摄像头408根据修正后的第一对焦参数进行对焦,返回执行拍摄得到图像的步骤。
由上可知,本申请实施例提供的电子设备包括摄像头、前端图像处理芯片和主处理器,其中摄像头408用于获取第一图像;前端图像处理芯片4021用于从图像的多种数据分量中提取出第一数据分量;主处理器4022用于根据前端图像处理芯片提取的第一数据分量确定第一对焦参数,以使得摄像头根据第一对焦参数进行对焦拍摄,以获取第二图像。本申请实施例通过提取数据分量的形式将图像由完整的数据变为数据分量,减小了参与计算的数据量,能够快速计算出第一对焦参数,使用第一对焦参数进行下一次对焦拍摄,因而,提高了对焦拍摄的效率。同时,采用硬件加速技术进行数据分量提取,能够提高数据分量提取的效率从而进一步提高图像拍摄的效率。
本申请实施例还提供一种计算机可读存储介质,存储介质中存储有计算机程序,当计算机程序在计算机上运行时,计算机执行上述任一实施例的拍摄方法。
例如,在一些实施例中,当计算机程序在计算机上运行时,计算机执行以下步骤:
获取第一图像;
从第一图像的多种数据分量中提取出第一数据分量;
根据第一数据分量确定第一对焦参数;
根据第一对焦参数进行对焦拍摄,以获取第二图像。
需要说明的是,本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过计算机程序来指令相关的硬件来完成,计算机程序可以存储于计算机可读存储介质中,存储介质可以包括但不限于:只读存储器(ROM,Read Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁盘或光盘等。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见上文针对拍摄方法的详细描述,此处不再赘述。
以上对本申请实施例所提供的拍摄方法、装置、存储介质及电子设备进行了详细介绍。本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上,本说明书内容不应理解为对本申请的限制。

Claims (20)

  1. 一种拍摄方法,其中,包括:
    获取第一图像;
    从所述第一图像的多种数据分量中提取出第一数据分量;
    根据所述第一数据分量确定第一对焦参数;
    根据所述第一对焦参数进行对焦拍摄,以获取第二图像。
  2. 根据权利要求1所述的拍摄方法,其中,所述根据所述第一对焦参数进行对焦,以获取第二图像之后,还包括:
    从所述第二图像的多种数据分量中提取第二数据分量;
    根据所述第二数据分量确定第二对焦参数;
    根据所述第二对焦参数进行对焦拍摄,以获取第三图像。
  3. 根据权利要求2所述的拍摄方法,其中,所述根据所述第一数据分量确定第一对焦参数之后,还包括:
    从所述第一图像的多种数据分量中提取出第三数据分量,所述第三数据分量为所述第一图像中除所述第一数据分量以外的一种或多种数据分量;
    根据所述第三数据分量确定第一对焦修正参数,所述第一对焦修正参数用于对所述第一对焦参数进行修正。
  4. 根据权利要求3所述的拍摄方法,其中,所述根据所述第三数据分量确定第一对焦修正参数包括:
    获取所述第三数据分量的缩略图;
    根据所述第三数据分量的缩略图确定所述第一对焦修正参数。
  5. 根据权利要求3所述的拍摄方法,其中,所述从所述第一图像的多种数据分量中提取出第三数据分量之前,还包括:
    获取所述第一图像的两帧相邻历史帧图像,确定所述两帧相邻历史帧图像的画面变化率;
    根据所述画面变化率确定要提取的第三数据分量的种类。
  6. 根据权利要求2所述的拍摄方法,其中,所述数据分量包括颜色分量,所述从所述第一图像的多种数据分量中提取出第一数据分量之前,还包括:
    统计所述第一图像中每种颜色分量对应的像素点数量;
    将所述第一图像中对应像素点数量最多的颜色分量确定为所述第一数据分量。
  7. 根据权利要求2所述的拍摄方法,其中,所述从所述第一图像的多种数据分量中提取出第一数据分量之前,还包括:
    确定所述第一图像的对焦区域;
    所述从所述第一图像的多种数据分量中提取出第一数据分量包括:
    从所述对焦区域的多种数据分量中提取出所述对焦区域的第一数据分量。
  8. 根据权利要求1所述的拍摄方法,其中,所述获取第一图像包括:
    获取待拍摄场景的视频流,得到待拍摄场景的多帧原始图像;
    根据所述多帧原始图像获取第一图像。
  9. 根据权利要求1所述的拍摄方法,其中,所述第一图像为RAW格式,所述从所述第一图像的多种数据分量中提取出第一数据分量之前,还包括:
    对所述第一图像进行处理,得到所述第一图像的RGB数据,所述RGB数据包括所述第一图像的多种数据分量,所述第一图像的多种数据分量包括R颜色分量、G颜色分量和B颜色分量。
  10. 一种拍摄装置,其中,包括:
    获取模块,用于获取第一图像;
    第一提取模块,用于从所述第一图像的多种数据分量中提取出第一数据分量;
    第一确定模块,用于根据所述第一数据分量确定第一对焦参数;
    第一拍摄模块,用于根据所述第一对焦参数进行对焦拍摄,以获取第二图像。
  11. 一种计算机可读存储介质,其中,所述存储介质中存储有计算机程序,当计算机程序在计算机上运行时,使得计算机执行如权利要求1至9任一项所述的拍摄方法中的步骤。
  12. 一种电子设备,其中,所述电子设备包括处理器和存储器,所述存储器中存储有计算机程序,所述处理器通过调用所述存储器中存储的所述计算机程序,执行:
    获取第一图像;
    从所述第一图像的多种数据分量中提取出第一数据分量;
    根据所述第一数据分量确定第一对焦参数;
    根据所述第一对焦参数进行对焦拍摄,以获取第二图像。
  13. 根据权利要求12所述的电子设备,其中,所述根据所述第一对焦参数进行对焦拍摄,以获取第二图像之后,所述处理器还执行:
    从所述第二图像的多种数据分量中提取第二数据分量;
    根据所述第二数据分量确定第二对焦参数;
    根据所述第二对焦参数进行对焦拍摄,以获取第三图像。
  14. 根据权利要求13所述的电子设备,其中,所述根据所述第一数据分量确定第一对焦参数之后,所述处理器还执行:
    从所述第一图像的多种数据分量中提取出第三数据分量,所述第三数据分量为所述第一图像中除所述第一数据分量以外的一种或多种数据分量;
    根据所述第三数据分量确定第一对焦修正参数,所述第一对焦修正参数用于对所述第一对焦参数进行修正。
  15. 根据权利要求14所述的电子设备,其中,所述根据所述第三数据分量确定第一对焦修正参数包括:
    获取所述第三数据分量的缩略图;
    根据所述第三数据分量的缩略图确定所述第一对焦修正参数。
  16. 根据权利要求14所述的电子设备,其中,所述从所述第一图像的多种数据分量中提取出第三数据分量之前,所述处理器还执行:
    获取所述第一图像的两帧相邻历史帧图像,确定所述两帧相邻历史帧图像的画面变化率;
    根据所述画面变化率确定要提取的第三数据分量的种类。
  17. 根据权利要求13所述的电子设备,其中,所述数据分量包括颜色分量,所述从所述第一图像的多种数据分量中提取出第一数据分量之前,所述处理器还执行:
    统计所述第一图像中每种颜色分量对应的像素点数量;
    将所述第一图像中对应像素点数量最多的颜色分量确定为所述第一数据分量。
  18. 根据权利要求13所述的电子设备,其中,所述从所述第一图像的多种数据分量中提取出第一数据分量之前,所述处理器还执行:
    确定所述第一图像的对焦区域;
    所述从所述第一图像的多种数据分量中提取出第一数据分量包括:
    从所述对焦区域的多种数据分量中提取出所述对焦区域的第一数据分量。
  19. 根据权利要求12所述的电子设备,其中,所述获取第一图像包括:
    获取待拍摄场景的视频流,得到待拍摄场景的多帧原始图像;
    根据所述多帧原始图像获取第一图像。
  20. 一种电子设备,其中,所述电子设备包括:
    摄像头,用于获取第一图像;
    前端图像处理芯片,用于从所述第一图像的多种数据分量中提取出第一数据分量;
    主处理器,用于根据所述第一数据分量确定第一对焦参数,以使得所述摄像头根据所述第一对焦参数进行对焦拍摄,以获取第二图像。
PCT/CN2022/074592 2021-03-03 2022-01-28 拍摄方法、装置、计算机可读存储介质及电子设备 WO2022183876A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110236906.3A CN115037867B (zh) 2021-03-03 2021-03-03 拍摄方法、装置、计算机可读存储介质及电子设备
CN202110236906.3 2021-03-03

Publications (1)

Publication Number Publication Date
WO2022183876A1 true WO2022183876A1 (zh) 2022-09-09

Family

ID=83117718

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/074592 WO2022183876A1 (zh) 2021-03-03 2022-01-28 拍摄方法、装置、计算机可读存储介质及电子设备

Country Status (2)

Country Link
CN (1) CN115037867B (zh)
WO (1) WO2022183876A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117714857B (zh) * 2023-05-29 2024-09-24 荣耀终端有限公司 对焦方法及电子设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0436511A2 (en) * 1990-01-05 1991-07-10 Canon Kabushiki Kaisha In-focus detecting device
CN1896860A (zh) * 2005-07-11 2007-01-17 三星电机株式会社 照相机的自动对焦装置及其自动对焦方法
CN102169275A (zh) * 2010-04-28 2011-08-31 上海盈方微电子有限公司 一种基于黄金分割非均匀采样窗口规划的数码相机自动聚焦系统
CN102572265A (zh) * 2010-09-01 2012-07-11 苹果公司 使用具有粗略和精细自动对焦分数的图像统计数据的自动对焦控制
CN103379273A (zh) * 2012-04-17 2013-10-30 株式会社日立制作所 摄像装置
CN107613216A (zh) * 2017-10-31 2018-01-19 广东欧珀移动通信有限公司 对焦方法、装置、计算机可读存储介质和电子设备

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013069050A1 (ja) * 2011-11-07 2013-05-16 株式会社ソニー・コンピュータエンタテインメント 画像生成装置および画像生成方法
JP5659304B2 (ja) * 2011-11-07 2015-01-28 株式会社ソニー・コンピュータエンタテインメント 画像生成装置および画像生成方法
CN108322651B (zh) * 2018-02-11 2020-07-31 Oppo广东移动通信有限公司 拍摄方法和装置、电子设备、计算机可读存储介质
CN112135055B (zh) * 2020-09-27 2022-03-15 苏州科达科技股份有限公司 变焦跟踪方法、装置、设备以及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0436511A2 (en) * 1990-01-05 1991-07-10 Canon Kabushiki Kaisha In-focus detecting device
CN1896860A (zh) * 2005-07-11 2007-01-17 三星电机株式会社 照相机的自动对焦装置及其自动对焦方法
CN102169275A (zh) * 2010-04-28 2011-08-31 上海盈方微电子有限公司 一种基于黄金分割非均匀采样窗口规划的数码相机自动聚焦系统
CN102572265A (zh) * 2010-09-01 2012-07-11 苹果公司 使用具有粗略和精细自动对焦分数的图像统计数据的自动对焦控制
CN103379273A (zh) * 2012-04-17 2013-10-30 株式会社日立制作所 摄像装置
CN107613216A (zh) * 2017-10-31 2018-01-19 广东欧珀移动通信有限公司 对焦方法、装置、计算机可读存储介质和电子设备

Also Published As

Publication number Publication date
CN115037867A (zh) 2022-09-09
CN115037867B (zh) 2023-12-01

Similar Documents

Publication Publication Date Title
CN111327824B (zh) 拍摄参数的选择方法、装置、存储介质及电子设备
CN110602467B (zh) 图像降噪方法、装置、存储介质及电子设备
CN111028189A (zh) 图像处理方法、装置、存储介质及电子设备
CN110572584B (zh) 图像处理方法、装置、存储介质及电子设备
CN111028190A (zh) 图像处理方法、装置、存储介质及电子设备
US8400532B2 (en) Digital image capturing device providing photographing composition and method thereof
US20220329729A1 (en) Photographing method, storage medium and electronic device
CN108513069B (zh) 图像处理方法、装置、存储介质及电子设备
CN111246093B (zh) 图像处理方法、装置、存储介质及电子设备
US10769416B2 (en) Image processing method, electronic device and storage medium
CN110266954A (zh) 图像处理方法、装置、存储介质及电子设备
CN111277751B (zh) 拍照方法、装置、存储介质及电子设备
US20240007588A1 (en) Slow-motion video recording method and device
WO2022183876A1 (zh) 拍摄方法、装置、计算机可读存储介质及电子设备
CN108259767B (zh) 图像处理方法、装置、存储介质及电子设备
CN110581957A (zh) 图像处理方法、装置、存储介质及电子设备
JP2008064797A (ja) 光学装置、撮像装置、光学装置の制御方法
CN108495038B (zh) 图像处理方法、装置、存储介质及电子设备
US8260083B2 (en) Image processing method and apparatus, and digital photographing apparatus using the same
CN108520036B (zh) 图像的选取方法、装置、存储介质及电子设备
WO2023001110A1 (zh) 神经网络训练方法、装置及电子设备
CN110930340A (zh) 一种图像处理方法及装置
CN115334241A (zh) 对焦控制方法、装置、存储介质及摄像设备
WO2022174696A1 (zh) 曝光处理方法、装置、电子设备及计算机可读存储介质
WO2021142711A1 (zh) 图像处理方法、装置、存储介质及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22762344

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22762344

Country of ref document: EP

Kind code of ref document: A1