WO2023000878A1 - Photographing method and apparatus, and controller, device and computer-readable storage medium - Google Patents


Info

Publication number
WO2023000878A1
Authority
WO
WIPO (PCT)
Prior art keywords
brightness
preview image
target sub
interval
feature information
Application number
PCT/CN2022/099221
Other languages
French (fr)
Chinese (zh)
Inventor
郑亮
Original Assignee
ZTE Corporation (中兴通讯股份有限公司)
Application filed by ZTE Corporation (中兴通讯股份有限公司)
Publication of WO2023000878A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H04N23/55 Optical parts specially adapted for electronic image sensors; Mounting thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene

Abstract

A photographing method and apparatus, and a controller, a device and a computer-readable storage medium. The photographing method comprises: acquiring a preview image of a current scenario (S100); performing feature extraction on the preview image, so as to obtain brightness feature information of the preview image (S200); inputting the brightness feature information into a strategy generation model, so as to obtain an exposure strategy, wherein the strategy generation model is obtained by training a neural network according to brightness feature information, which corresponds to a sample image (S300); and photographing the current scenario on the basis of the exposure strategy (S400).

Description

Photographing method, apparatus, controller, device, and computer-readable storage medium
Cross-Reference to Related Applications
This application is based on, and claims priority to, Chinese patent application No. 202110828028.4, filed on July 22, 2021, the entire content of which is incorporated herein by reference.
Technical Field
The embodiments of the present application relate to, but are not limited to, the field of photographing technology, and in particular to a photographing method, a photographing apparatus, a controller, a photographing device, and a computer-readable storage medium.
Background
At present, constrained by the physical characteristics of the image sensor, such as its dynamic range and noise, some areas need to be brightened and others attenuated when shooting an outdoor backlit scene, so an image with an ideal wide dynamic range can be obtained through HDR (High Dynamic Range) processing. Existing processing techniques synthesize multiple exposures to suppress the brightness of highlight regions and raise the brightness of dark regions, and therefore require accurate exposure parameters. However, most existing HDR trigger mechanisms rely on whole-frame brightness statistics, decision trees, or preset thresholds; their ability to recognize scenes is limited, and once a scene is recognized they cannot precisely guide the exposure parameters, which degrades shooting quality.
Summary
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of protection of the claims.
Embodiments of the present application provide a photographing method, a photographing apparatus, a controller, a photographing device, and a computer-readable storage medium.
In a first aspect, an embodiment of the present application provides a photographing method, including: acquiring a preview image of a current scene; performing feature extraction on the preview image to obtain brightness feature information of the preview image; inputting the brightness feature information into a strategy generation model to obtain an exposure strategy, where the strategy generation model is obtained by training a neural network with the brightness feature information corresponding to sample images; and photographing the current scene based on the exposure strategy.
In a second aspect, an embodiment of the present application further provides a photographing apparatus, including a processor, a memory, a photographing component, and a display screen, where the processor is connected to the memory, the photographing component, and the display screen respectively, the processor includes a feature extraction unit and a strategy generation unit, and the photographing component includes an optical camera. The photographing component is configured to acquire a preview image corresponding to a current scene through the optical camera; the display screen is configured to display the preview image corresponding to the current scene; the feature extraction unit is configured to acquire the preview image displayed on the display screen and perform feature extraction on the preview image to obtain brightness feature information of the preview image; the strategy generation unit is configured to input the brightness feature information into a strategy generation model pre-trained in the processor to obtain an exposure strategy, where the strategy generation model is obtained by training a neural network with the brightness feature information corresponding to sample images; and the photographing component is further configured to acquire the exposure strategy and photograph the current scene through the optical camera based on the exposure strategy.
In a third aspect, an embodiment of the present application further provides a controller, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the photographing method described in the first aspect.
In a fourth aspect, an embodiment of the present application further provides a photographing device, including the photographing apparatus described in the second aspect or the controller described in the third aspect.
In a fifth aspect, an embodiment of the present application further provides a computer-readable storage medium storing computer-executable instructions, where the computer-executable instructions are used to execute the photographing method described in the first aspect.
Additional features and advantages of the present application will be set forth in the following description, and will in part become apparent from the description or be understood by practicing the present application. The purposes and other advantages of the present application can be realized and obtained through the structures particularly pointed out in the description, the claims, and the accompanying drawings.
Brief Description of the Drawings
The accompanying drawings are used to provide a further understanding of the technical solutions of the present application and constitute a part of the specification. Together with the embodiments of the present application, they are used to explain the technical solutions of the present application and do not constitute a limitation on them.
FIG. 1 is a schematic diagram of a system architecture platform for executing the photographing method according to an embodiment of the present application;
FIG. 2 is a flowchart of a photographing method according to an embodiment of the present application;
FIG. 3 is a flowchart of extracting brightness feature information in the photographing method according to an embodiment of the present application;
FIG. 4 is a flowchart of obtaining brightness position weights in the photographing method according to an embodiment of the present application;
FIG. 5 is a flowchart of obtaining brightness proportion weights in the photographing method according to an embodiment of the present application;
FIG. 6 is a flowchart of obtaining brightness feature information from brightness position weights, brightness proportion weights, and brightness interval ratios in the photographing method according to an embodiment of the present application;
FIG. 7 is a schematic diagram of exposure strategy combinations according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a neural network model according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a photographing apparatus according to an embodiment of the present application.
Detailed Description
To make the purpose, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application, not to limit it.
It should be noted that although functional modules are divided in the device schematic diagrams and a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed with a module division different from that in the device, or in an order different from that in the flowcharts. The terms "first", "second", and the like in the specification, claims, and accompanying drawings are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence.
In the related art, constrained by the physical characteristics of the sensor, such as dynamic range and noise, some areas need to be brightened and others attenuated when shooting an outdoor backlit scene, so an image with an ideal wide dynamic range can be obtained through HDR processing. Existing processing techniques synthesize multiple exposures to suppress the brightness of highlight regions and raise the brightness of dark regions, and therefore require accurate exposure parameters. However, most existing HDR trigger mechanisms rely on whole-frame brightness statistics, decision trees, or preset thresholds; their ability to recognize scenes is limited, and once a scene is recognized they cannot precisely guide the exposure parameters, which degrades shooting quality.
It should be noted that existing multi-frame dynamic calculation, single-frame scene decision-tree recognition, and whole-frame brightness-feature recognition of HDR scenes are not accurate enough and cannot precisely guide the selection of multi-exposure parameters for HDR.
Based on the above situation, the embodiments of the present application provide a photographing method, a photographing apparatus, a controller, a photographing device, and a computer-readable storage medium. The photographing method includes, but is not limited to, the following steps: acquiring a preview image of the current scene; performing feature extraction on the preview image to obtain brightness feature information of the preview image; inputting the brightness feature information into a strategy generation model to obtain an exposure strategy, where the strategy generation model is trained by a neural network with the brightness feature information corresponding to sample images; and photographing the current scene based on the exposure strategy. According to the technical solutions of the embodiments of the present application, during the shooting preview, the brightness feature information of the preview image of the current scene is extracted and input into the trained strategy generation model. Because the strategy generation model is trained by a neural network with the brightness feature information corresponding to sample images, it outputs the exposure strategy corresponding to the preview image of the current scene, and this exposure strategy can then be used for shooting when the shooting button is pressed. Because the exposure strategy output by the strategy generation model is more accurate, the embodiments of the present application can improve shooting quality.
The embodiments of the present application are further described below with reference to the accompanying drawings.
As shown in FIG. 1, FIG. 1 is a schematic diagram of a system architecture platform for executing the photographing method according to an embodiment of the present application.
In the example of FIG. 1, the system architecture platform 100 is provided with a processor 110 and a memory 120, where the processor 110 and the memory 120 may be connected through a bus or in another manner; in FIG. 1, connection through a bus is taken as an example.
As a non-transitory computer-readable storage medium, the memory 120 may be used to store non-transitory software programs and non-transitory computer-executable programs. In addition, the memory 120 may include a high-speed random access memory, and may further include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some implementations, the memory 120 may optionally include memories remotely arranged relative to the processor 110, and these remote memories may be connected to the system architecture platform through a network. Examples of the network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
Those skilled in the art can understand that the system architecture platform can be applied to a 3G communication network system, an LTE communication network system, a 5G communication network system, a subsequently evolved mobile communication network system, and the like, which is not specifically limited in this embodiment.
Those skilled in the art can understand that the system architecture platform shown in FIG. 1 does not constitute a limitation on the embodiments of the present application, and may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
In the system architecture platform shown in FIG. 1, the processor 110 may call a photographing program stored in the memory 120 to execute the photographing method.
Based on the above system architecture platform, various embodiments of the photographing method of the present application are set forth below.
As shown in FIG. 2, FIG. 2 is a flowchart of a photographing method according to an embodiment of the present application. The method includes, but is not limited to, step S100, step S200, step S300, and step S400.
Step S100: acquire a preview image of the current scene.
Step S200: perform feature extraction on the preview image to obtain brightness feature information of the preview image.
Step S300: input the brightness feature information into a strategy generation model to obtain an exposure strategy, where the strategy generation model is trained by a neural network with the brightness feature information corresponding to sample images.
Step S400: photograph the current scene based on the exposure strategy.
Specifically, during the shooting preview, the embodiments of the present application extract the brightness feature information of the preview image of the current scene and input it into the trained strategy generation model. Because the strategy generation model of the embodiments of the present application is trained by a neural network with the brightness feature information corresponding to sample images, the strategy generation model outputs the exposure strategy corresponding to the preview image of the current scene, and this exposure strategy can then be used for shooting when the shooting button is pressed. Because the exposure strategy output by the strategy generation model is more accurate, the embodiments of the present application can improve shooting quality.
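The preview-to-exposure flow described above can be sketched end to end as follows. Every function name, the toy per-region feature extractor, and the threshold rule inside the stand-in model are illustrative assumptions for the sketch, not details disclosed by the patent:

```python
# Minimal sketch of steps S100-S400 (names and rules are hypothetical).

def extract_brightness_features(preview):
    """S200 stand-in: one feature per region, here just the mean brightness."""
    return [sum(region) / len(region) for region in preview]

def strategy_model(features):
    """S300 stand-in for the trained strategy generation model.
    A toy rule maps overall brightness to an exposure-value offset."""
    mean = sum(features) / len(features)
    if mean > 0.7:
        return {"ev_offset": -1.0}   # very bright preview: expose down
    if mean < 0.3:
        return {"ev_offset": 1.0}    # very dark preview: expose up
    return {"ev_offset": 0.0}

def shoot(preview):
    feats = extract_brightness_features(preview)   # S200
    strategy = strategy_model(feats)               # S300
    return strategy  # S400 would apply this to the camera sensor

# S100: a bright (backlit-like) preview, as 2 regions of normalized luma
preview = [[0.9, 0.8], [0.85, 0.95]]
print(shoot(preview))  # → {'ev_offset': -1.0}
```

In the real pipeline the model is a trained neural network and the strategy is a set of exposure parameters; the hard-coded rule here only stands in for that mapping.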
It should be noted that, regarding the acquisition of the preview image of the current scene in step S100, the preview image refers to the image displayed on the screen of the photographing device before the shutter is pressed. For example, the preview image is the image displayed on the screen while the user points the photographing device at the scenery and before the user presses the shooting button.
It should also be noted that the above strategy generation model is a preset, already trained model. During training, the embodiments of the present application extract the brightness feature information of the sample images and use it as the input of the neural network, so that the neural network outputs the corresponding exposure strategy. After the strategy generation model is trained, if the brightness feature information of the preview image of the current scene is input into the strategy generation model, the strategy generation model outputs a more accurate exposure strategy.
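The patent does not disclose the concrete network architecture. As one minimal illustration, the trained model can be imagined as a single linear layer with softmax that picks one of K candidate exposure strategies from the brightness features; the weights, biases, and two-feature layout below are invented for the sketch:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def policy_model(features, weights, biases):
    """One linear layer + softmax over K candidate exposure strategies.
    Stand-in only: the real architecture is not given in the patent."""
    logits = [sum(w * f for w, f in zip(row, features)) + b
              for row, b in zip(weights, biases)]
    probs = softmax(logits)
    return probs.index(max(probs))   # index of the chosen exposure strategy

# Toy setup: 2 features (high/low brightness share), 3 candidate strategies
W = [[2.0, -1.0],   # strategy 0: favored by bright scenes
     [0.0, 0.0],    # strategy 1: neutral
     [-1.0, 2.0]]   # strategy 2: favored by dark scenes
b = [0.0, 0.1, 0.0]
print(policy_model([0.9, 0.1], W, b))  # → 0 (bright scene picks strategy 0)
```

In practice the weights would come from training on sample images labeled with their calibrated exposure strategies, as the patent describes.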
It can be understood that there may be multiple sample images.
In addition, it can be understood that the above exposure strategy may include multiple exposure parameters.
In addition, it can be understood that the above neural network is an algorithmic mathematical model that imitates the behavioral characteristics of animal neural networks and performs distributed parallel information processing. Such a network achieves the purpose of processing information by adjusting the interconnections among a large number of internal nodes, depending on the complexity of the system.
In addition, as shown in FIG. 3, FIG. 3 is a flowchart of extracting brightness feature information in the photographing method according to an embodiment of the present application. Step S200 includes, but is not limited to, step S410, step S420, step S430, and step S440.
Step S410: divide the preview image to obtain multiple sub-regions, and determine multiple target sub-regions from the multiple sub-regions.
Step S420: for each target sub-region, calculate a cumulative brightness distribution value from the brightness values of the target sub-region, and obtain the brightness interval ratio of the target sub-region from the cumulative brightness distribution value and a preset brightness threshold.
Step S430: obtain the brightness position weight of each target sub-region.
Step S440: calculate the brightness feature information of the preview image from the brightness position weights and brightness interval ratios of all the target sub-regions.
Specifically, during extraction, the embodiments of the present application divide the preview image to obtain multiple sub-regions and select a certain number of target sub-regions from them; then calculate the cumulative brightness distribution value of each target sub-region, that is, the CDF (cumulative distribution function) values for high and low brightness. Because the embodiments of the present application set a preset brightness threshold that delimits a certain brightness interval, the brightness interval ratio of each target sub-region, that is, its proportion in each brightness interval, is obtained from the cumulative brightness distribution value and the preset brightness threshold. Then, because different target sub-regions lie at different positions and therefore at different distances from the exposure center, the embodiments of the present application also obtain the brightness position weight of each target sub-region. Finally, the brightness feature information of the preview image is calculated from the brightness position weights and brightness interval ratios of all the target sub-regions, thereby extracting the brightness feature information of the preview image.
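The per-region computation just described (histogram, CDF, thresholding into low- and high-brightness intervals) can be sketched for 8-bit luma values. The concrete threshold values 64 and 192 are assumptions, since the patent only states that preset low- and high-brightness thresholds exist:

```python
from typing import List, Tuple

def cdf(hist: List[int]) -> List[float]:
    """Cumulative distribution over a 256-bin luma histogram."""
    total = sum(hist)
    acc, out = 0, []
    for h in hist:
        acc += h
        out.append(acc / total)
    return out

def interval_ratios(luma: List[int], low_thr: int = 64,
                    high_thr: int = 192) -> Tuple[float, float]:
    """Low-/high-brightness interval ratios for one target sub-region.
    Threshold values are illustrative placeholders."""
    hist = [0] * 256
    for v in luma:
        hist[v] += 1
    c = cdf(hist)
    low_ratio = c[low_thr]          # fraction of pixels with luma <= low_thr
    high_ratio = 1.0 - c[high_thr]  # fraction of pixels with luma > high_thr
    return low_ratio, high_ratio

# A toy sub-region: half dark pixels, half bright pixels
region = [10] * 50 + [240] * 50
print(interval_ratios(region))  # → (0.5, 0.5)
```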
It should be noted that the embodiments of the present application calculate the brightness distribution of different regions of the preview image corresponding to the current scene, extract the brightness feature values as the input of the neural network, and use the calibrated scene classification as the output of the neural network. With the pre-trained network structure, the selection of exposure parameters in scenes requiring exposure fusion can be effectively solved.
It is worth noting that the above preset brightness threshold may include, but is not limited to, a preset high-brightness threshold and a preset low-brightness threshold. Correspondingly, because preset brightness thresholds of different ranges exist, for each target sub-region the brightness interval ratio obtained in the embodiments of the present application also includes, but is not limited to, a high-brightness interval ratio and a low-brightness interval ratio; correspondingly, because the high-brightness interval ratio and the low-brightness interval ratio exist, the subsequently obtained brightness feature information also includes high-brightness interval feature information and low-brightness interval feature information.
Specifically, when the preset brightness threshold includes a preset high-brightness threshold and a preset low-brightness threshold, obtaining the brightness interval ratio of the target sub-region from the cumulative brightness distribution value and the preset brightness threshold in step S420 includes: obtaining the high-brightness interval ratio of the target sub-region from the cumulative brightness distribution value and the preset high-brightness threshold, and obtaining the low-brightness interval ratio of the target sub-region from the cumulative brightness distribution value and the preset low-brightness threshold.
Correspondingly, calculating the brightness feature information of the preview image from the brightness position weights and brightness interval ratios of all the target sub-regions in step S440 includes: calculating the high-brightness interval feature information of the preview image from the brightness position weights and high-brightness interval ratios of all the target sub-regions, and calculating the low-brightness interval feature information of the preview image from the brightness position weights and low-brightness interval ratios of all the target sub-regions.
It is also worth noting that the number of target sub-regions is less than or equal to the number of sub-regions; specifically, the number of sub-regions and the number of target sub-regions may be equal, or they may differ.
Specifically, when the number of sub-regions equals the number of target sub-regions, all the divided sub-regions are target sub-regions; in that case, the above brightness position weight may refer to the global brightness position weight of the preview image.
Alternatively, when the number of sub-regions and the number of target sub-regions differ, that is, when the number of target sub-regions is less than the number of sub-regions so that only some of the divided sub-regions are target sub-regions, the above brightness position weight may refer to a local brightness position weight of the preview image.
In addition, when the number of target sub-regions is less than the number of sub-regions, the target sub-regions may be located at the center of the preview image or around its periphery; alternatively, the embodiments of the present application may select positions of interest according to the user's needs.
In addition, as shown in FIG. 4, FIG. 4 is a flowchart of obtaining brightness position weights in the photographing method according to an embodiment of the present application. Step S430 includes, but is not limited to, step S510 and step S520.
Step S510: acquire distance information between each target sub-region and a preset position in the preview image.
Step S520: calculate the brightness position weight of each target sub-region from the distance information.
Specifically, because different target sub-regions lie at different positions and therefore at different distances from the exposure center, the embodiments of the present application also acquire the distance information between each target sub-region and the preset position in the preview image, and calculate the brightness position weight of each target sub-region from that distance information.
It can be understood that the aforementioned preset position may be, but is not limited to, the position of the exposure center or the center of the preview image.
In addition, as shown in FIG. 5, which is a flowchart of acquiring brightness proportion weights in the photographing method provided by an embodiment of the present application, the photographing method of the embodiments of the present application further includes, but is not limited to, step S600.
Step S600: calculate the brightness proportion weight according to the brightness interval ratio of the target sub-region.
Specifically, the embodiments of the present application also adjust the weight value according to the brightness proportion value, i.e., the brightness interval ratio.
It should be noted that, compared with the brightness position weight described above, the brightness proportion weight mentioned here is used for fine-tuning the exposure strategy, whereas the brightness position weight is used for coarse adjustment of the exposure strategy.
Based on FIG. 5, as shown in FIG. 6, which is a flowchart of obtaining brightness feature information from the brightness position weights, brightness proportion weights and brightness interval ratios in the photographing method provided by an embodiment of the present application, the above step S440 includes, but is not limited to, step S700.
Step S700: calculate the brightness feature information of the preview image according to the brightness position weights, brightness proportion weights and brightness interval ratios of all target sub-regions.
Specifically, when feeding inputs to the strategy generation model, the embodiments of the present application may also feed in the brightness proportion weights. Since the brightness proportion weights fine-tune the exposure strategy, the brightness feature information output by the strategy generation model in step S700 is more accurate.
Based on the method steps in FIG. 2 to FIG. 6, when the number of sub-regions is M*N, the number of target sub-regions is i*j, where i ≤ M and j ≤ N; the cumulative brightness distribution value is cdf_{i,j}, i ∈ [1, M], j ∈ [1, N]; the preset highlight threshold is [lum3, lum4] and the preset low-brightness threshold is [lum1, lum2], where lum4 > lum3 ≥ lum2 > lum1.
Specifically, the highlight interval ratio is obtained by the following formula: L_{i,j} = cdf_{i,j}(lum4) − cdf_{i,j}(lum3), where L_{i,j} is the highlight interval ratio.
The low-brightness interval ratio is obtained by the following formula: D_{i,j} = cdf_{i,j}(lum2) − cdf_{i,j}(lum1), where D_{i,j} is the low-brightness interval ratio.
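A minimal sketch of this per-sub-region computation in Python (the 8-bit luminance range and the example threshold values are illustrative assumptions, not values from the application):

```python
def luminance_cdf(pixels, levels=256):
    """Cumulative brightness distribution of one sub-region's pixels."""
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    total = len(pixels)
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running / total)
    return cdf

def interval_ratios(pixels, low=(16, 48), high=(200, 240)):
    """Return (D, L): the low-brightness and highlight interval ratios."""
    cdf = luminance_cdf(pixels)
    lum1, lum2 = low
    lum3, lum4 = high
    D = cdf[lum2] - cdf[lum1]  # D_{i,j} = cdf(lum2) - cdf(lum1)
    L = cdf[lum4] - cdf[lum3]  # L_{i,j} = cdf(lum4) - cdf(lum3)
    return D, L
```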
In addition, the brightness position weight can be obtained by the following formula:
Figure PCTCN2022099221-appb-000001
where W_d(i, j) is the brightness position weight and δ_1 is an adjustable parameter.
In addition, the brightness proportion weight can be obtained by the following formula:
Figure PCTCN2022099221-appb-000002
where W_a(lum) is the brightness proportion weight, and δ_2 and u are adjustable parameters.
In addition, the highlight-interval feature information can be obtained by the following formula:
Figure PCTCN2022099221-appb-000003
where vl_k is the highlight-interval feature information.
In addition, the low-brightness-interval feature information can be obtained by the following formula:
Figure PCTCN2022099221-appb-000004
where vd_k is the low-brightness-interval feature information.
Based on the method steps in FIG. 2 to FIG. 6 and the formulas derived above, an embodiment of the present application provides an overall implementation, as follows:
Step one: obtain the luminance values of an image of size (w, h), where Lum_{i,j} denotes the luminance at position (i, j):
Lum_{i,j} = Max(R_{i,j}, G_{i,j}, B_{i,j}), i ≤ h, j ≤ w
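Step one can be sketched directly; the nested-list image representation below is an assumption for illustration:

```python
def luminance_map(rgb_rows):
    """Per-pixel luminance Lum = Max(R, G, B), as in step one."""
    return [[max(r, g, b) for (r, g, b) in row] for row in rgb_rows]
```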
Step two: divide the obtained luminance image into M*N sub-regions, and compute the cumulative luminance distribution value of each sub-region: cdf_{m,n}, m ∈ [1, M], n ∈ [1, N]. Set the preset low-brightness threshold [lum1, lum2] and the preset highlight threshold [lum3, lum4], where lum4 > lum3 ≥ lum2 > lum1.
Obtain the highlight ratio of each sub-region, i.e., the highlight interval ratio: L_{m,n} = cdf_{m,n}(lum4) − cdf_{m,n}(lum3)
Obtain the low-brightness ratio of each sub-region, i.e., the low-brightness interval ratio: D_{m,n} = cdf_{m,n}(lum2) − cdf_{m,n}(lum1)
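The M*N tiling in step two can be sketched as follows; the index mapping from pixel to tile is one straightforward choice, not a form mandated by the application:

```python
def split_subregions(lum, M, N):
    """Group the pixels of an h*w luminance image into M*N tiles."""
    h, w = len(lum), len(lum[0])
    tiles = [[[] for _ in range(N)] for _ in range(M)]
    for i in range(h):
        for j in range(w):
            m = min(i * M // h, M - 1)  # tile row for pixel row i
            n = min(j * N // w, N - 1)  # tile column for pixel column j
            tiles[m][n].append(lum[i][j])
    return tiles
```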
Step three: obtain the weight of each region to be counted.
Global brightness weight W_d, i.e., the global brightness position weight W_d: the weight is adjusted according to the distance from the center. An example form, which is not limiting, is:
Figure PCTCN2022099221-appb-000005
where δ_1 is an adjustable parameter, m ≤ M, n ≤ N.
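The formula image referenced above is not reproduced in this text extraction. One common form matching the description (a weight that decays with distance from the center, governed by the adjustable parameter δ_1) is a Gaussian, sketched here purely as an assumed example rather than the application's exact formula:

```python
import math

def position_weight(m, n, M, N, delta1=2.0):
    """Assumed Gaussian-style W_d: largest at the grid center,
    falling off with distance; delta1 is the adjustable spread."""
    cm, cn = (M - 1) / 2.0, (N - 1) / 2.0
    d2 = (m - cm) ** 2 + (n - cn) ** 2
    return math.exp(-d2 / (2.0 * delta1 ** 2))
```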
Proportion weight W_a, i.e., the brightness proportion weight W_a: the weight value is adjusted according to the brightness proportion value. An example form, which is not limiting, is:
Figure PCTCN2022099221-appb-000006
where W_a(lum) is the brightness proportion weight, δ_2 and u are adjustable parameters, m ≤ M, n ≤ N.
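As with W_d, the exact W_a formula is an image not reproduced here; a bell-shaped weight over the brightness proportion, centered by u and widened by δ_2, is sketched below as an assumed example:

```python
import math

def proportion_weight(lum, u=0.5, delta2=0.2):
    """Assumed bell-shaped W_a over a brightness proportion in [0, 1]:
    u centers the peak and delta2 is the adjustable width."""
    return math.exp(-((lum - u) ** 2) / (2.0 * delta2 ** 2))
```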
It should be noted that the weights can also be stored in a table and applied at computation time via a LUT (look-up table) to simplify the calculation and improve efficiency.
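The LUT idea amounts to precomputing the weight table once and indexing it at runtime; a sketch:

```python
def build_weight_lut(M, N, weight_fn):
    """Precompute an M*N weight table so each runtime lookup is one index."""
    return [[weight_fn(m, n) for n in range(N)] for m in range(M)]
```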
Local brightness weight W_d: assign weight values to the sub-regions at different positions to obtain an ROI (region of interest): W_d(i, j).
For example, for a central region, W_d(i, j) is distributed as in Table 1 below:

0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 1 1 1 0 0
0 0 1 1 1 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0

Table 1
Alternatively, for example, for a peripheral region, W_d(i, j) takes a form similar to Table 2 below:

1 1 1 1 1 1 1
1 1 1 1 1 1 1
1 1 0 0 0 1 1
1 1 0 0 0 1 1
1 1 1 1 1 1 1
1 1 1 1 1 1 1

Table 2
Alternatively, a similar method may be applied to other local regions such as corners and diagonals, yielding different local feature weight tables.
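Masks of the Table 1 / Table 2 kind can be generated rather than stored; the helper below is a sketch (the grid size and ROI bounds in the test values are illustrative):

```python
def roi_mask(M, N, rows, cols, inverted=False):
    """0/1 weight table: 1 inside the ROI (rows x cols), 0 outside;
    inverted=True swaps 0 and 1, giving a peripheral ROI as in Table 2."""
    mask = []
    for m in range(M):
        row = []
        for n in range(N):
            inside = 1 if (m in rows and n in cols) else 0
            row.append(1 - inside if inverted else inside)
        mask.append(row)
    return mask
```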
Step four: obtain the brightness feature distribution values of the image, i.e., the brightness feature information, by weighted averaging.
The 2k feature values (k > 1) of the whole image are obtained through the local or global weights, where the two weights W_d and W_a may differ between the bright region and the dark region; when the bright-region and dark-region weights are the same, the feature values can be computed as follows.
Bright-region feature value, i.e., the highlight-interval feature information:
Figure PCTCN2022099221-appb-000007
Dark-region feature value, i.e., the low-brightness-interval feature information:
Figure PCTCN2022099221-appb-000008
It should be noted that the weights can also be realized via a LUT during computation, thereby obtaining a 2k-dimensional vector composed of a set of brightness feature information:
vec_in = vector(vl_1, vd_1, ..., vl_k, vd_k)
Step five: feed the computed feature vector into the trained neural network to obtain an accurate exposure strategy, so as to trigger the HDR algorithm and fuse multiple pictures.
The exposure strategies are classified into corresponding output values; by actually taking photos, the exposure strategy values required by the HDR algorithm are calibrated for each photo. For example, the exposure strategy is output as an L-dimensional vector: vec_out = vector(exp_1, ..., exp_L)
Regarding the selection of exposure parameters, for example, HDR fusion requires three photos taken under different exposure parameters. Besides the normally exposed photo, the additional combinations of high and low exposure parameters serve as the output of the neural network; for example, the low exposure EV− is divided into three levels [ev−1, ev−2, ev−3] and the high exposure EV+ into three levels [ev+1, ev+2, ev+3]. The exposure strategy combinations may be as shown in FIG. 7, which is a schematic diagram of exposure strategy combinations provided by an embodiment of the present application.
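If the combinations in FIG. 7 pair every low-exposure level with every high-exposure level (an assumption about the figure, which is not reproduced here; "ev0" below is a placeholder label for the normal exposure), the candidate strategies can be enumerated as:

```python
from itertools import product

ev_minus = ["ev-1", "ev-2", "ev-3"]  # three low-exposure levels
ev_plus = ["ev+1", "ev+2", "ev+3"]   # three high-exposure levels

# Each HDR strategy = (low EV, normal, high EV); the normal exposure is fixed.
combos = [(lo, "ev0", hi) for lo, hi in product(ev_minus, ev_plus)]
```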
Specifically, in the embodiments of the present application, the exposure strategy required by each sample photo can be selected as the output of the network, after which the network is trained to obtain the optimal structure.
When HDR fusion is not limited to three exposure levels (normal, low exposure, high exposure), one exposure setting is added for each additional photo to be fused. For example, when five photos are fused, the low and high exposures can be refined: on the basis of the original three-photo fusion, two more exposure settings are added: EV−− = [ev−−1, ev−−2, ev−−3, ...], EV++ = [ev++1, ev++2, ev++3, ...].
It can be understood that the neural network model used above may take the form shown in FIG. 8, i.e., one or more hidden layers are placed between the input layer and the output layer to obtain a better partition or classification effect, and a softmax layer is added after the output layer to optimize the network output. In some other implementations, any other suitable neural network model may also be used.
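A minimal forward pass matching the described structure (input layer, hidden layer, output layer, then softmax) can be sketched in pure Python; all layer sizes and weights below are placeholders, not trained values:

```python
import math

def mlp_softmax(x, w1, b1, w2, b2):
    """Feature vector -> class probabilities over exposure strategies."""
    hidden = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]                    # ReLU hidden layer
    logits = [sum(wi * hi for wi, hi in zip(row, hidden)) + b
              for row, b in zip(w2, b2)]                    # output layer
    peak = max(logits)
    exps = [math.exp(v - peak) for v in logits]             # stable softmax
    total = sum(exps)
    return [e / total for e in exps]
```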
When there are many input feature vectors and output values, the network structure is adjusted as needed, for example by adding or removing network nodes or adding network layers; a softmax layer can also be added to the output to satisfy the computation requirements. When shooting, the trained network runs in the terminal with hardware support or as solidified software. During preview, the brightness feature values of the current scene are computed and fed into the network to obtain the exposure parameters needed to trigger HDR; once these take effect, the photos are taken and fused into an HDR image.
Regarding the photographing method of the embodiments of the present application, the brightness distribution of the whole picture is gathered by sub-region, and the CDF value of each sub-region is computed to obtain the highlight and low-brightness ratio values. Several weight coefficients are designed to capture the brightness distribution characteristics of the current scene: a distance weight based on the distance from the center yields the global high/low brightness distribution features; adjusting the weights of different brightness levels according to their brightness proportions yields the brightness-proportion distribution features; setting the weight distribution to 1 in the center and 0 around it yields local high/low brightness distribution feature values; and setting the weight distribution to 0 in the center and 1 around it yields the complementary local high/low brightness distribution feature values.
According to the calibrated pictures, the degree of backlighting of the scene or the HDR exposure combination is used as the output value, and the network is trained to obtain its structure. When shooting, the global or local brightness feature values obtained through these weights are fed into the trained network to obtain the degree of backlighting or the exposure parameters required to trigger HDR.
In addition, it should be noted that the method steps of the training process of the strategy generation model and of the shooting process for the current scene are similar.
Specifically, the training process of the strategy generation model is as follows: acquire multiple sample images, divide each sample image into multiple sub-regions, and determine multiple target sub-regions among them; then, for each target sub-region in a sample image, compute the cumulative brightness distribution value from the brightness values of the target sub-region, obtain the highlight interval ratio of the target sub-region from the cumulative brightness distribution value and the preset highlight threshold, and obtain the low-brightness interval ratio from the cumulative brightness distribution value and the preset low-brightness threshold; next, acquire the distance information between each target sub-region and the preset position in the sample image, and compute the brightness position weight of each target sub-region from that distance information; next, compute the brightness proportion weight from the brightness interval ratio of the target sub-region; next, compute the brightness feature information of the sample image from the brightness position weights, brightness proportion weights and brightness interval ratios of all target sub-regions; finally, train the neural network with the computed brightness feature information, using the category vectors corresponding to the exposure parameter combinations as the network output, so as to obtain the degree of backlighting or the exposure parameters required to trigger HDR.
Based on the photographing method of the above embodiments, various embodiments of the photographing apparatus of the present application are presented below.
As shown in FIG. 9, which is a schematic structural diagram of the photographing apparatus provided by an embodiment of the present application, the photographing apparatus 200 includes, but is not limited to, a processor 110, a memory 120, a photographing component 130 and a display screen 140, where the processor 110 is connected to the memory 120, the photographing component 130 and the display screen 140 respectively. The processor 110 includes, but is not limited to, a feature extraction unit 111 and a strategy generation unit 112, and can call the photographing program stored in the memory 120 to execute the photographing method; the photographing component 130 includes, but is not limited to, an optical camera 131.
Specifically, the photographing component 130 aims the optical camera 131 at the current scene to be photographed and displays the corresponding preview image on the display screen 140. The feature extraction unit 111 in the processor 110 then acquires the preview image shown on the display screen 140 and performs feature extraction on it to obtain the brightness feature information of the preview image. Next, the strategy generation unit 112 in the processor 110 feeds the brightness feature information into the strategy generation model pre-trained in the processor 110 to obtain the exposure strategy, the strategy generation model having been trained by a neural network on the brightness feature information corresponding to sample images. Finally, the photographing component 130 acquires the resulting exposure strategy and photographs the current scene through the optical camera 131 based on it.
It should be noted that, for the specific implementations and technical effects of the photographing apparatus of the embodiments of the present application, reference may be made to those of the photographing method of the above embodiments.
Based on the photographing method of the above embodiments, various embodiments of the controller, photographing device and computer-readable storage medium of the present application are presented below.
In addition, an embodiment of the present application provides a controller, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor.
The processor and the memory may be connected by a bus or in other ways.
It should be noted that the controller in this embodiment may correspondingly include the memory and the processor in the embodiment shown in FIG. 1, and can constitute part of the system architecture platform of that embodiment; the two share the same conception, and therefore have the same implementation principles and beneficial effects, which are not detailed again here.
The non-transitory software programs and instructions required to implement the photographing method of the above embodiments are stored in the memory, and when executed by the processor, carry out the photographing method of the above embodiments, for example, the method steps S100 to S400 in FIG. 2, S410 to S440 in FIG. 3, S510 to S520 in FIG. 4, S600 in FIG. 5 and S700 in FIG. 6 described above.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, i.e., they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, an embodiment of the present application provides a photographing device, including the photographing apparatus of the above embodiments or the controller of the above embodiments.
Specifically, the photographing device of the embodiments of the present application may be a terminal device, such as a mobile phone, a tablet computer or a wearable device, and the photographing device itself may carry a camera for capturing images.
It should also be understood that, in some other embodiments of the present application, the photographing device of the above embodiments may also not be a terminal device, but a desktop computer or a server capable of realizing the photographing function.
In addition, an embodiment of the present application further provides a computer-readable storage medium storing computer-executable instructions which, when executed, carry out the above photographing method, for example, the method steps S100 to S400 in FIG. 2, S410 to S440 in FIG. 3, S510 to S520 in FIG. 4, S600 in FIG. 5 and S700 in FIG. 6 described above.
The embodiments of the present application include: first acquiring a preview image of the current scene; then performing feature extraction on the preview image to obtain its brightness feature information; next, inputting the brightness feature information into a strategy generation model to obtain an exposure strategy, the strategy generation model having been trained by a neural network on the brightness feature information corresponding to sample images; and finally photographing the current scene based on the above exposure strategy. According to the technical solution of the embodiments of the present application, during the shooting preview, the brightness feature information of the preview image of the current scene is extracted and fed into the trained strategy generation model. Since the strategy generation model is trained by a neural network on the brightness feature information corresponding to sample images, it outputs the exposure strategy corresponding to the preview image of the current scene, and that exposure strategy can then be used for the shooting process when the shutter button is pressed. Because the exposure strategy output by the strategy generation model is more accurate, the embodiments of the present application can improve shooting quality.
Those of ordinary skill in the art will understand that all or some of the steps and systems in the methods disclosed above may be implemented as software, firmware, hardware, or an appropriate combination thereof. Some or all of the physical components may be implemented as software executed by a processor such as a central processing unit, digital signal processor or microprocessor, or as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
The above is a specific description of some implementations of the present application, but the present application is not limited to the above embodiments. Those skilled in the art may also make various equivalent variations or replacements without departing from the spirit of the present application, and these equivalent variations or replacements all fall within the scope defined by the claims of the present application.

Claims (14)

  1. A photographing method, comprising:
    acquiring a preview image of a current scene;
    performing feature extraction on the preview image to obtain brightness feature information of the preview image;
    inputting the brightness feature information into a strategy generation model to obtain an exposure strategy, wherein the strategy generation model is trained by a neural network according to brightness feature information corresponding to sample images; and
    photographing the current scene based on the exposure strategy.
  2. The photographing method according to claim 1, wherein performing feature extraction on the preview image to obtain the brightness feature information of the preview image comprises:
    dividing the preview image to obtain a plurality of sub-regions, and determining a plurality of target sub-regions from the plurality of sub-regions;
    for each of the target sub-regions, calculating a cumulative brightness distribution value according to brightness values of the target sub-region, and obtaining a brightness interval ratio of the target sub-region according to the cumulative brightness distribution value and a preset brightness threshold;
    acquiring a brightness position weight of each of the target sub-regions; and
    calculating the brightness feature information of the preview image according to the brightness position weights and the brightness interval ratios of all the target sub-regions.
  3. The photographing method according to claim 2, wherein the preset brightness threshold comprises a preset highlight threshold and a preset low-brightness threshold;
    correspondingly, obtaining the brightness interval ratio of the target sub-region according to the cumulative brightness distribution value and the preset brightness threshold comprises: obtaining a highlight interval ratio of the target sub-region according to the cumulative brightness distribution value and the preset highlight threshold, and obtaining a low-brightness interval ratio of the target sub-region according to the cumulative brightness distribution value and the preset low-brightness threshold;
    correspondingly, calculating the brightness feature information of the preview image according to the brightness position weights and the brightness interval ratios of all the target sub-regions comprises: calculating highlight-interval feature information of the preview image according to the brightness position weights and the highlight interval ratios of all the target sub-regions, and calculating low-brightness-interval feature information of the preview image according to the brightness position weights and the low-brightness interval ratios of all the target sub-regions.
  4. The photographing method according to claim 2, wherein the brightness position weight of each of the target sub-regions is obtained by:
    acquiring distance information between the target sub-region and a preset position in the preview image; and
    calculating the brightness position weight of each of the target sub-regions according to the distance information.
  5. The photographing method according to claim 3, wherein the photographing method further comprises: calculating a brightness proportion weight according to the brightness interval ratio of the target sub-region;
    对应地,所述根据所有所述目标子区域的所述亮度位置权重和所述亮度区间比例,计算得到所述预览图像的亮度特征信息,包括:Correspondingly, the calculation according to the brightness position weights and the brightness interval ratios of all the target sub-regions to obtain the brightness characteristic information of the preview image includes:
    根据所有所述目标子区域的所述亮度位置权重、所述亮度占比权重和所述亮度区间比例,计算得到所述预览图像的亮度特征信息。The brightness feature information of the preview image is calculated according to the brightness position weights, the brightness proportion weights and the brightness interval ratios of all the target sub-regions.
  6. The photographing method according to claim 5, wherein, when the number of the sub-regions is M*N, the number of the target sub-regions is i*j, where i≤M, j≤N;
    the cumulative brightness distribution value is cdf_{i,j}, i∈[1,M], j∈[1,N]; the preset highlight threshold is [lum3, lum4] and the preset low-brightness threshold is [lum1, lum2], where lum4>lum3≥lum2>lum1;
    the highlight interval ratio is obtained by the following formula: L_{i,j} = cdf_{i,j}(lum4) - cdf_{i,j}(lum3), where L_{i,j} is the highlight interval ratio;
    the low-brightness interval ratio is obtained by the following formula: D_{i,j} = cdf_{i,j}(lum2) - cdf_{i,j}(lum1), where D_{i,j} is the low-brightness interval ratio.
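The interval ratios of claim 6 can be sketched in a few lines: per sub-region, build a cumulative brightness distribution and take CDF differences at the two threshold intervals. The function name `interval_ratios`, the 256-bin 8-bit histogram, and the default threshold values are illustrative assumptions, not values from the application.

```python
import numpy as np

def interval_ratios(block, lum1=16, lum2=64, lum3=192, lum4=255):
    """Highlight/low-brightness interval ratios for one sub-region.

    Implements L = cdf(lum4) - cdf(lum3) and D = cdf(lum2) - cdf(lum1)
    from claim 6; the threshold defaults are placeholders chosen so that
    lum4 > lum3 >= lum2 > lum1 holds, as the claim requires.
    """
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    cdf = np.cumsum(hist) / block.size      # cdf[l] = fraction of pixels <= l
    L = cdf[lum4] - cdf[lum3]               # highlight interval ratio
    D = cdf[lum2] - cdf[lum1]               # low-brightness interval ratio
    return L, D
```

In the patent's notation this would be evaluated once per target sub-region (i, j) to yield L_{i,j} and D_{i,j}.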
  7. The photographing method according to claim 6, wherein the brightness position weight is obtained by the following formula:
    [Formula image PCTCN2022099221-appb-100001]
    where W_d(i,j) is the brightness position weight and δ_1 is an adjustable parameter.
  8. The photographing method according to claim 7, wherein the brightness proportion weight is obtained by the following formula:
    [Formula image PCTCN2022099221-appb-100002]
    where W_a(lum) is the brightness proportion weight, and δ_2 and u are adjustable parameters.
  9. The photographing method according to claim 8, wherein the highlight interval feature information is obtained by the following formula:
    [Formula image PCTCN2022099221-appb-100003]
    where vl_k is the highlight interval feature information;
    the low-brightness interval feature information is obtained by the following formula:
    [Formula image PCTCN2022099221-appb-100004]
    where vd_k is the low-brightness interval feature information.
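The formula images of claims 7-9 are not reproduced in the text, so only the structure stated in the claims is certain: W_d depends on each sub-region's distance to a preset position with parameter δ_1, W_a depends on the brightness proportion with parameters δ_2 and u, and vl_k/vd_k combine the weights with the interval ratios over all target sub-regions. The sketch below assumes Gaussian forms for both weights and a weighted sum for the features; those specific forms are an assumption, not the patent's actual formulas.

```python
import numpy as np

def brightness_features(L, D, centers, preset_pos,
                        delta1=0.5, delta2=0.3, u=0.5):
    """Hedged sketch of claims 7-9 (Gaussian weight forms assumed).

    L, D      : per-sub-region highlight / low-brightness interval ratios
    centers   : (K, 2) sub-region center coordinates
    preset_pos: (2,) preset position in the preview image
    """
    d = np.linalg.norm(centers - preset_pos, axis=-1)   # distance per sub-region
    W_d = np.exp(-d**2 / (2 * delta1**2))               # position weight (assumed form)
    W_a_L = np.exp(-(L - u)**2 / (2 * delta2**2))       # proportion weight (assumed form)
    W_a_D = np.exp(-(D - u)**2 / (2 * delta2**2))
    vl = np.sum(W_d * W_a_L * L)                        # highlight feature vl_k
    vd = np.sum(W_d * W_a_D * D)                        # low-brightness feature vd_k
    return vl, vd
```

The pair (vl_k, vd_k) would then serve as the brightness feature information fed to the strategy generation model.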
  10. The photographing method according to any one of claims 2 to 9, wherein, when the number of the target sub-regions is less than the number of the sub-regions, the target sub-regions are located at the center of the preview image or at peripheral positions of the preview image.
  11. A photographing apparatus, comprising a processor, a memory, a photographing component and a display screen, wherein the processor is respectively connected to the memory, the photographing component and the display screen, the processor comprises a feature extraction unit and a strategy generation unit, and the photographing component comprises an optical camera;
    the photographing component is configured to acquire, through the optical camera, a preview image corresponding to a current scene;
    the display screen is configured to display the preview image corresponding to the current scene;
    the feature extraction unit is configured to acquire the preview image displayed on the display screen and perform feature extraction on the preview image to obtain brightness feature information of the preview image;
    the strategy generation unit is configured to input the brightness feature information into a strategy generation model pre-trained in the processor to obtain an exposure strategy, wherein the strategy generation model is obtained by training a neural network with brightness feature information corresponding to sample images;
    the photographing component is further configured to acquire the exposure strategy and photograph the current scene through the optical camera based on the exposure strategy.
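The apparatus of claim 11 is a pipeline: preview image in, brightness features out, exposure strategy from a pre-trained model, then capture. A minimal sketch of that flow follows; `StrategyGenerator` is a toy stand-in for the trained neural-network model, and its ev-offset rule is purely illustrative.

```python
class StrategyGenerator:
    """Toy stand-in for the pre-trained strategy generation model of claim 11.

    A real implementation would be a neural network trained on brightness
    feature information of sample images.
    """
    def predict(self, features):
        vl, vd = features  # highlight / low-brightness feature information
        # Illustrative rule: raise exposure when shadows dominate highlights.
        return {"ev_offset": float(vd - vl)}

def shoot(preview, extract_features, model):
    """Claim-11 flow: extract brightness features from the preview image,
    obtain an exposure strategy from the model, and return it for capture."""
    features = extract_features(preview)
    strategy = model.predict(features)
    return strategy  # a real device would now capture the scene with this strategy
```

In the apparatus, `extract_features` corresponds to the feature extraction unit and `model.predict` to the strategy generation unit; the photographing component consumes the returned strategy.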
  12. A controller, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the photographing method according to any one of claims 1 to 10.
  13. A photographing device, comprising the photographing apparatus according to claim 11 or the controller according to claim 12.
  14. A computer-readable storage medium, storing computer-executable instructions, wherein the computer-executable instructions are used to execute the photographing method according to any one of claims 1 to 10.
PCT/CN2022/099221 2021-07-22 2022-06-16 Photographing method and apparatus, and controller, device and computer-readable storage medium WO2023000878A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110828028.4A CN115696027A (en) 2021-07-22 2021-07-22 Photographing method, photographing apparatus, controller, device, and computer-readable storage medium
CN202110828028.4 2021-07-22

Publications (1)

Publication Number Publication Date
WO2023000878A1

Family

ID=84979817

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/099221 WO2023000878A1 (en) 2021-07-22 2022-06-16 Photographing method and apparatus, and controller, device and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN115696027A (en)
WO (1) WO2023000878A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116193264B (en) * 2023-04-21 2023-06-23 中国传媒大学 Camera adjusting method and system based on exposure parameters

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0534762A (en) * 1991-07-25 1993-02-12 Sanyo Electric Co Ltd Camera with automatic iris function
JPH05260370A (en) * 1992-01-14 1993-10-08 Sharp Corp Automatic iris circuit
US5331422A (en) * 1991-03-15 1994-07-19 Sharp Kabushiki Kaisha Video camera having an adaptive automatic iris control circuit
JPH06326919A (en) * 1993-05-13 1994-11-25 Sanyo Electric Co Ltd Automatic exposure control device
JPH09281544A (en) * 1996-04-16 1997-10-31 Nikon Corp Photometric device for camera
JP2008060989A (en) * 2006-08-31 2008-03-13 Noritsu Koki Co Ltd Photographed image correcting method and photographed image correcting module
US20170061237A1 (en) * 2015-08-24 2017-03-02 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
CN111447405A (en) * 2019-01-17 2020-07-24 杭州海康威视数字技术股份有限公司 Exposure method and device for video monitoring
CN111654594A (en) * 2020-06-16 2020-09-11 Oppo广东移动通信有限公司 Image capturing method, image capturing apparatus, mobile terminal, and storage medium


Also Published As

Publication number Publication date
CN115696027A (en) 2023-02-03


Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE