CN115134974A - Model training method, illuminance determination method, device, and program product


Publication number
CN115134974A
Authority
CN
China
Prior art keywords
dynamic range
range image
low dynamic
predicted
illumination
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110325567.6A
Other languages
Chinese (zh)
Inventor
魏玮
刘玥
李皓翔
康昊
管理
华刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wormpex Technology Beijing Co Ltd
Original Assignee
Wormpex Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wormpex Technology Beijing Co Ltd filed Critical Wormpex Technology Beijing Co Ltd
Priority to CN202110325567.6A priority Critical patent/CN115134974A/en
Publication of CN115134974A publication Critical patent/CN115134974A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H05ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05BELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10Controlling the light source
    • H05B47/105Controlling the light source in response to determined parameters
    • H05B47/11Controlling the light source in response to determined parameters by determining the brightness or colour temperature of ambient light
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B7/00Control of exposure by setting shutters, diaphragms or filters, separately or conjointly
    • G03B7/08Control effected solely on the basis of the response, to the intensity of the light received by the camera, of a built-in light-sensitive device
    • G03B7/099Arrangement of photoelectric elements in or on the camera
    • G03B7/0993Arrangement of photoelectric elements in or on the camera in the camera
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)

Abstract

The model training method, illuminance determination method, device, and program product relate to image processing technology and include the following steps: acquiring low dynamic range images of a preset environment captured with different exposure values, together with the actual illuminance value of the preset environment at the time each image was captured; and training a preset model according to each low dynamic range image and its corresponding actual illuminance value to obtain an illuminance prediction model. With the model training method, illuminance determination method, device, and program product, a predicted illuminance value can be obtained from a low dynamic range image captured in a store, no dedicated person is required to collect data, and the brightness of the lighting lamps in the store can be adjusted in real time according to the predicted illuminance value.

Description

Model training method, illuminance determination method, apparatus, and program product
Technical Field
The present disclosure relates to image processing technologies, and in particular, to a model training method, an illuminance determination method, an apparatus, and a program product.
Background
Currently, lighting lamps are installed in many stores to provide a suitable lighting environment. Because brightness changes in the external environment affect the lighting environment inside a store, the brightness of the lamps needs to be adjusted so that the lighting environment remains comfortable for users.
In the prior art, a dedicated person measures the ambient illuminance in a store with an illuminance meter, and the brightness of the lighting lamps is then adjusted based on the actual illuminance in the store.
However, this approach requires a dedicated person to collect the store illuminance, and the collection is inefficient, so the brightness of the lighting lamps in the store cannot be adjusted in real time.
Disclosure of Invention
The disclosure provides a model training method, an illuminance determination method, a device, and a program product, which aim to solve the problems in the prior art that a dedicated person is required to collect store illuminance and that the brightness of the lighting lamps in a store cannot be adjusted in real time.
According to a first aspect of the present application, there is provided a model training method, comprising:
acquiring a low dynamic range image of a preset environment acquired based on different exposure values and an actual illumination value of the preset environment when the low dynamic range image is acquired;
training a preset model according to each low dynamic range image and the actual illumination value corresponding to the low dynamic range image to obtain an illumination prediction model; the illumination prediction model is used for acquiring a high dynamic range image corresponding to a low dynamic range image, and the high dynamic range image is used for acquiring a predicted illumination value corresponding to the low dynamic range image.
According to a second aspect of the present application, there is provided an ambient illuminance determination method including:
acquiring a single-frame low dynamic range image obtained by shooting a preset environment;
inputting the single-frame low dynamic range image into an illumination prediction model to obtain a predicted high dynamic range image corresponding to the single-frame low dynamic range image;
determining a predicted illumination value corresponding to the preset environment according to the predicted high dynamic range image;
the illumination prediction model is obtained by training a low dynamic range image of a training environment acquired based on different exposure values and an actual illumination value of the training environment when the low dynamic range image is acquired.
According to a third aspect of the present application, there is provided a model training apparatus comprising:
an acquisition unit, configured to acquire a low dynamic range image of a preset environment acquired based on different exposure values, and an actual illuminance value of the preset environment when acquiring the low dynamic range image;
the processing unit is used for training a preset model according to each low dynamic range image and the actual illumination value corresponding to the low dynamic range image to obtain an illumination prediction model; the illumination prediction model is used to obtain a high dynamic range image corresponding to a low dynamic range image, and the high dynamic range image is used to obtain a predicted illumination value corresponding to the low dynamic range image.
According to a fourth aspect of the present application, there is provided an ambient illuminance determination apparatus including:
an acquisition unit, configured to acquire a single-frame low dynamic range image obtained by shooting a preset environment;
the identification unit is used for inputting the single-frame low dynamic range image into an illumination prediction model to obtain a predicted high dynamic range image corresponding to the single-frame low dynamic range image;
and the illumination value determining unit is used for obtaining a predicted illumination value corresponding to the preset environment according to the predicted high dynamic range image.
The illumination prediction model is obtained by training a low dynamic range image of a training environment acquired based on different exposure values and an actual illumination value of the training environment when the low dynamic range image is acquired.
According to a fifth aspect of the present application, there is provided an electronic device comprising a memory and a processor, wherein:
the memory for storing a computer program;
the processor is configured to read the computer program stored in the memory, and execute the model training method according to the first aspect or the ambient illuminance determination method according to the second aspect according to the computer program in the memory.
According to a sixth aspect of the present application, there is provided a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, implement the model training method according to the first aspect or the ambient illuminance determination method according to the second aspect.
According to a seventh aspect of the present application, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the model training method according to the first aspect or the ambient illuminance determination method according to the second aspect.
The present disclosure provides a model training method, an illuminance determination method, a device, and a program product, including: acquiring low dynamic range images of a preset environment captured with different exposure values, together with the actual illuminance value of the preset environment at the time each image was captured; and training a preset model according to each low dynamic range image and its corresponding actual illuminance value to obtain an illuminance prediction model. The illuminance prediction model is used to obtain a high dynamic range image corresponding to a low dynamic range image, and the high dynamic range image is used to obtain a predicted illuminance value corresponding to the low dynamic range image. With the model training method, illuminance determination method, device, and program product, a predicted illuminance value can be obtained from a low dynamic range image captured in a store, no dedicated person is required to collect data, and the brightness of the lighting lamps in the store can be adjusted in real time according to the predicted illuminance value.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating a model training method according to an exemplary embodiment of the present application;
FIG. 2 is a schematic flow chart diagram illustrating a model training method according to another exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of a model training process shown in an exemplary embodiment of the present application;
fig. 4 is a flowchart illustrating an ambient illuminance determination method according to an exemplary embodiment of the present application;
fig. 5 is a flowchart illustrating an ambient illuminance determination method according to another exemplary embodiment of the present application;
FIG. 6 is a block diagram of a model training apparatus according to an exemplary embodiment of the present application;
FIG. 7 is a block diagram of a model training apparatus according to another exemplary embodiment of the present application;
fig. 8 is a block diagram illustrating an ambient illuminance determination apparatus according to an exemplary embodiment of the present application;
fig. 9 is a block diagram of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
At present, the brightness of the lighting lamps can be adjusted according to the ambient illuminance inside a store so as to provide a suitable lighting environment. Specifically, the indoor ambient illuminance can be measured by a dedicated person with an illuminance meter, and the brightness of the lamps is then adjusted based on the actual indoor illuminance.
However, equipping each store with an illuminance meter and having a dedicated person collect illuminance with it is inefficient. Moreover, the indoor ambient illuminance changes in real time, and manual collection cannot capture the ambient illuminance in real time, so the lamp brightness cannot be adjusted to follow these real-time changes.
To solve this technical problem, the solution provided by the present application includes a model training method in which low dynamic range images and actual illuminance values of a preset environment are collected for training, a high dynamic range image is obtained through the model, and a predicted illuminance value is derived from the high dynamic range image. In use, as long as a low dynamic range image of the preset environment is captured, a high dynamic range image can be obtained through the model, a predicted illuminance value can be derived from it, and the brightness of the lighting lamps can be adjusted according to the predicted illuminance value. By adjusting the frequency at which low dynamic range images of the preset environment are captured, the method provided by the present application can adjust the lamp brightness in real time, without requiring a dedicated person to collect ambient illuminance.
Fig. 1 is a flowchart illustrating a model training method according to an exemplary embodiment of the present application.
As shown in fig. 1, the model training method provided in this embodiment includes:
Step 101, acquiring Low Dynamic Range (LDR) images of a preset environment captured based on different exposure values, and acquiring the actual illuminance value of the preset environment when each LDR image is captured.
The preset environment may be a space in which the lighting environment of the preset environment is affected by a change in brightness of an external environment, for example, a room, a store, a house, or a factory building. The preset environment may be a plurality of environments, such as a plurality of store environments.
Specifically, the illuminance, i.e., the illumination intensity, refers to the luminous flux of the received visible light per unit area.
Specifically, the Dynamic Range is the ratio between the highest and lowest values of the electrical signal, and reflects how much detail can be shown in the highlight and shadow areas of a photo; the larger the dynamic range, the more tonal levels can be represented. An LDR image is an image with a low dynamic range: the dynamic range in the real world can span 10^6 to 10^9, the dynamic range the human eye can capture is about 10^5, while the dynamic range of an LDR image only reaches about 10^2.
Further, the image acquisition device may be used to capture a plurality of LDR images with different exposure values at the same image acquisition position. For example, 20 LDR images with different exposure values may be captured for one image acquisition position.
In practical application, when the image acquisition device is used for acquiring a plurality of LDR images with different exposure values at the same image acquisition position, the illumination measurement instrument can be used for measuring illumination values at the same time, and the obtained result is an actual illumination value corresponding to the acquired LDR image.
The method provided by the present application may be executed by an electronic device with computing capability, for example, a computer or other devices. The electronic equipment can acquire the low dynamic range image of the preset environment acquired based on different exposure values and the actual illumination value of the preset environment when acquiring the low dynamic range image.
For example, an LDR image may be captured by the image capturing device and sent to the electronic device, so that the electronic device can obtain a plurality of LDR images; the illuminance measuring instrument can also collect illuminance when collecting an LDR image, and can send the collected actual illuminance value to the electronic equipment, so that the electronic equipment can obtain the actual illuminance value of the preset environment when collecting the LDR image. For example, a control unit may be provided, and the control unit may control the image collecting device and the illuminance measuring instrument simultaneously, so as to collect the LDR image of the preset environment and collect the actual illuminance value at the same time.
For another example, the LDR image captured by the image capturing device may be imported into an electronic device, the actual illuminance value captured by the illuminance measuring instrument may be imported into the electronic device, and the electronic device may determine the actual illuminance value corresponding to each LDR image according to the capturing time of the LDR image and the capturing time of the actual illuminance value.
And 102, training a preset model according to each low dynamic range image and the actual illumination value corresponding to the low dynamic range image to obtain an illumination prediction model.
The illumination prediction model is used for acquiring a high dynamic range image corresponding to the low dynamic range image, and the high dynamic range image is used for acquiring a predicted illumination value corresponding to the low dynamic range image.
The preset model may be a pre-built model, such as a neural network model.
Specifically, the electronic device may train the neural network model according to each low dynamic range image and an actual illuminance value corresponding to the low dynamic range image to obtain a target model, i.e., an illuminance prediction model.
Further, each LDR image may be used as training data, with the actual illuminance value corresponding to the LDR image as its data label, so that the electronic device can train the preset model on LDR images labeled with actual illuminance values.
In actual application, the preset model can process the input LDR image and identify the predicted illumination value corresponding to the LDR image. The predicted illumination value and the actual illumination value of the LDR image may also be compared, and the parameters in the predetermined model may be adjusted based on the comparison result. Through multiple iterations, the difference between the predicted illumination value determined by the preset model and the actual illumination value can meet the requirement, and then the illumination prediction model meeting the requirement is obtained.
When the preset model processes the input LDR image, the preset model may convert the LDR image into a High Dynamic Range (HDR) image based on internal parameters of the model, and the electronic device determines the predicted illuminance value according to the HDR image converted by the preset model.
The HDR image referred to in the present application is an image with a high dynamic range; the dynamic range of an HDR image can usually reach about 10^5.
In particular, in this embodiment, a standard HDR image corresponding to a plurality of LDR images corresponding to the same image acquisition position may also be generated in advance from the LDR images. Furthermore, the model can be trained by using the standard HDR image as a data label.
Further, in this embodiment, after the LDR image is converted into the HDR image by using the preset model, the electronic device may further compare the identified HDR image with a standard HDR image of the LDR image, and adjust parameters in the preset model based on a comparison result. Through multiple iterations, the difference between the HDR image identified by the preset model and the standard HDR image can meet the requirement, and then the illumination prediction model meeting the requirement is obtained.
In this embodiment, one training data may have two labels, and thus the model may be trained through two constraint conditions, so that the prediction result of the obtained illumination prediction model is more accurate.
In actual application, the trained illuminance prediction model may be deployed in an illuminance recognition device, which may be, for example, a computer. The indoor LDR image can be captured by an image acquisition device, which sends the captured LDR image to the illuminance recognition device. The illuminance recognition device processes the received LDR image with the trained illuminance prediction model to obtain a predicted HDR image, and derives a predicted illuminance value from that predicted HDR image. For example, the LDR image may be input into the illuminance prediction model, which outputs the predicted HDR image corresponding to the input LDR image, and the illuminance recognition device determines the predicted illuminance value from the model's output. In this embodiment, the predicted illuminance value can be obtained from a single LDR image, which improves the efficiency of illuminance recognition.
The image acquisition device and the illumination identification equipment can be connected in a wired or wireless mode, so that the processes of acquiring the LDR image and obtaining the illumination predicted value can be automatically realized without manual acquisition. In addition, the LDR image can be obtained in real time, the illumination value corresponding to the LDR image is determined in real time, and the brightness of the indoor lamp can be adjusted in real time based on the illumination value.
Each method provided herein is performed by a device on which that method is deployed, and such a device is typically implemented in hardware and/or software.
In the model training method, low dynamic range images of a preset environment captured with different exposure values are acquired, together with the actual illuminance value of the preset environment at capture time; the preset model is then trained on each low dynamic range image and its corresponding actual illuminance value to obtain the illuminance prediction model. With this method and device, a predicted illuminance value can be obtained from a single LDR image, and the whole pipeline from LDR capture to illuminance prediction runs automatically without manual collection, which improves the efficiency of illuminance recognition. Moreover, LDR images can be captured in real time, the corresponding illuminance values determined in real time, and the brightness of the indoor lamps adjusted in real time based on those values.
Fig. 2 is a schematic flowchart of a model training method according to another exemplary embodiment of the present application.
As shown in fig. 2, the model training method provided in this embodiment includes:
step 201, acquiring a low dynamic range image of a preset environment acquired based on different exposure values, and acquiring an actual illumination value of the preset environment when acquiring the low dynamic range image.
Step 201 is similar to step 101 in implementation manner and principle, and is not described again.
And step 202, fusing the low dynamic range images with different exposure values to obtain a standard high dynamic range image.
After LDR images of a preset environment acquired based on different exposure values are acquired, the LDR images can be fused to obtain an HDR image.
During fusion, a plurality of LDR images collected at the same position may be specifically subjected to fusion processing.
Specifically, the low dynamic range image of the preset environment in this embodiment may include: and acquiring a low dynamic range image of each point position in a preset environment based on different exposure values.
For example, 8 data collection devices may be used to collect data sets in 8 retail stores. The data acquisition device may be provided with a 360 degree panoramic camera and an illuminometer.
In one embodiment, the 360 degree panoramic camera and the illuminometer may both be mounted at a preset height on the data capture device, for example the eye height of a typical standing user. In one embodiment, the illuminometer may be mounted facing upward. The 360 degree panoramic camera may be a dual-fisheye camera that has been calibrated through a standard camera calibration procedure.
The data acquisition device may be programmed with 20 different camera shutter speeds ranging from 5 milliseconds to 100 milliseconds. A plurality of point locations may be set and the data acquisition device is controlled to acquire LDR images of the point locations in the preset environment based on different exposure values. For example, the 8 retail stores may serve as a preset environment, 8345 locations may be selected in the preset environment as point locations, and the data acquisition device is controlled to collect LDR images and actual illumination values of the point locations every day.
The data acquisition device can send acquired data to an electronic device for executing the method provided by the embodiment.
Correspondingly, the low dynamic range image of the preset environment comprises the low dynamic range image of each point location in the preset environment acquired based on different exposure values, and when the standard high dynamic range image is obtained by fusing the low dynamic range images with different exposure values, the electronic device can perform preset fusion processing on the low dynamic range image with different exposure values corresponding to the same point location to obtain the standard high dynamic range image corresponding to the point location.
For example, a preset fusion processing method may be set in the electronic device, so that the electronic device may perform fusion processing on LDR images with different exposure values corresponding to the same point location to obtain an HDR image corresponding to each point location. For example, 20 LDR images with different exposure values may be included for point 1, and 20 LDR images with different exposure values may be included for point 2. The electronic device can fuse the LDR images of the point 1 to obtain the HDR image of the point 1, and the electronic device can fuse the LDR images of the point 2 to obtain the HDR image of the point 2.
Wherein the preset fusion process may include:
and carrying out fusion processing on the low dynamic range images with different exposure values corresponding to the same point position to obtain a first high dynamic range image.
Specifically, for example, Photosphere software may be used to perform fusion processing on single-frame LDR images with different exposure values at the same point, so as to generate a first HDR image. For example, if 8345 point locations are set, 8345 first HDR images may be generated.
Further, the first high dynamic range image may be calibrated by using a preset factor to obtain a standard high dynamic range image.
Due to the actual manufacturing variations of image acquisition devices, such as cameras, the camera response curves across the cameras are not the same. Therefore, the first HDR image generated by directly fusing the LDR images needs to be calibrated using a scaling factor, resulting in a standard HDR image. Therefore, the calibrated standard HDR image is used for model training, and the obtained recognition result of the model is more accurate.
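As an illustrative sketch only (the patent itself mentions Photosphere software for fusion), these two steps could be approximated with OpenCV's Debevec response calibration and merge; the scale_factor here is a hypothetical per-camera constant obtained by calibrating against an illuminance meter, not a value given by the patent:

```python
import cv2
import numpy as np

def fuse_ldr_to_hdr(ldr_images, exposure_times, scale_factor=1.0):
    """Fuse LDR images of one point location (different exposures) into a
    calibrated HDR radiance map. scale_factor is a hypothetical per-camera
    constant compensating for response-curve differences across cameras."""
    times = np.asarray(exposure_times, dtype=np.float32)
    # Recover the camera response curve from the exposure bracket
    calibrate = cv2.createCalibrateDebevec()
    response = calibrate.process(ldr_images, times)
    # Merge the bracket into a "first" HDR radiance map
    merge = cv2.createMergeDebevec()
    first_hdr = merge.process(ldr_images, times, response)
    # Apply the preset factor to obtain the "standard" HDR image
    return first_hdr * scale_factor
```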
Step 203, inputting the low dynamic range image into a preset model to obtain a predicted high dynamic range image corresponding to the low dynamic range image.
Wherein, when training the model, the electronic device may input the LDR images into a preset model, thereby generating a predicted HDR image corresponding to each LDR image based on preset model internal parameters.
For example, the electronic device may extract image features of the single-frame LDR image according to parameters inside the preset model, and then generate a predicted HDR image of the single-frame LDR image according to the image features.
Step 204, determining a predicted illumination value according to the predicted high dynamic range image.
Specifically, after the electronic device generates an HDR image corresponding to the LDR image based on the preset model, the electronic device may further determine a predicted luminance value according to the predicted HDR image.
In one embodiment, a preset model may be used to convert the LDR image into a predicted HDR image, and the electronic device may determine the predicted luminance value from the predicted HDR image determined by the preset model. In such an embodiment, the electronic device may train the model based on the determined predicted and actual illumination values.
Further, when determining the predicted luminance value according to the predicted HDR image, the electronic device may determine a luminance value corresponding to each pixel point according to pixel information in the predicted HDR image.
In practical application, after the predicted HDR image is generated, the three channel values of each pixel in the HDR image may be obtained, specifically the R, G, and B channel values. The brightness value L of each pixel can then be determined from its three channel values using the following formula:
L=179×(0.2126·R+0.7152·G+0.0722·B)
wherein, R is the value of the pixel point in the R channel, G is the value of the pixel point in the G channel, and B is the value of the pixel point in the B channel.
Specifically, the predicted illumination value for predicting the high dynamic range image may be determined according to the brightness value of each pixel point.
For example, the sum of the brightness values of all pixels may be calculated as the predicted illuminance value of the predicted HDR image; alternatively, the average brightness value over all pixels may be used as the predicted illuminance value.
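For illustration only, a minimal sketch (not from the patent) of the per-pixel brightness formula above and the simple mean aggregation, assuming the predicted HDR image is a floating-point array in RGB channel order:

```python
import numpy as np

def pixel_luminance(hdr):
    """Per-pixel brightness L = 179 * (0.2126 R + 0.7152 G + 0.0722 B).
    hdr: float array of shape (H, W, 3), assumed to be in RGB order."""
    r, g, b = hdr[..., 0], hdr[..., 1], hdr[..., 2]
    return 179.0 * (0.2126 * r + 0.7152 * g + 0.0722 * b)

def mean_predicted_illuminance(hdr):
    # One of the aggregation choices mentioned above: the pixel average
    return float(pixel_luminance(hdr).mean())
```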
Furthermore, the brightness of each pixel point in the designated area of the predicted high dynamic range image can be integrated to obtain the predicted illumination value of the predicted high dynamic range image.
For example, if the illuminometer is placed facing vertically upward, the upper ("northern") hemisphere region of the predicted HDR panorama may be used as the designated region, and the brightness of the pixels in that region is integrated. Specifically, the predicted illuminance value may be determined using the following equation:
E = ∫₀^{2π} ∫₀^{π/2} L(θ, φ) · cos θ · sin θ dθ dφ

where the predicted illuminance value E characterizes the lighting condition at the position of the image acquisition device in the preset environment, L(θ, φ) is the brightness value of the pixel at position (θ, φ) in the predicted HDR image, and θ and φ are the polar coordinates in the panoramic image.
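As an illustrative sketch under stated assumptions (an equirectangular panorama whose top half covers the upper hemisphere, with θ the polar angle measured from the zenith and φ the azimuth), the integral can be discretized as follows:

```python
import numpy as np

def hemisphere_illuminance(luminance):
    """Numerically integrate E = ∫∫ L(θ, φ)·cos θ·sin θ dθ dφ over the upper
    hemisphere. luminance: (H, W) brightness map of an equirectangular
    panorama; the top H/2 rows are assumed to cover θ in [0, π/2]."""
    h, w = luminance.shape
    top = luminance[: h // 2]                   # upper-hemisphere pixels only
    d_theta = (np.pi / 2) / top.shape[0]
    d_phi = (2 * np.pi) / w
    # Polar angle at the center of each row, measured from the zenith
    theta = (np.arange(top.shape[0]) + 0.5) * d_theta
    weights = np.cos(theta) * np.sin(theta)     # cosine and solid-angle terms
    return float((top * weights[:, None]).sum() * d_theta * d_phi)
```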
And step 205, optimizing parameters in a preset model according to the standard high dynamic range image, the predicted high dynamic range image, the actual illumination value corresponding to the low dynamic range image and the predicted illumination value to obtain an illumination prediction model.
The standard HDR image is obtained by fusing multiple LDR images captured with different exposure values at the same point location; the predicted HDR image is obtained by running the preset model on a single-frame LDR image from that point location; the actual illuminance value is the value measured by the illuminometer when the LDR images were captured at that point; and the predicted illuminance value is determined from the predicted HDR image.
Specifically, a standard high dynamic range image of a low dynamic range image and an actual illuminance value may be used as a label of the low dynamic range image; and optimizing parameters in the preset model according to the label of the low dynamic range image, the predicted high dynamic range image of the low dynamic range image determined based on the preset model and the predicted illumination value determined based on the predicted high dynamic range image to obtain an illumination prediction model.
The predicted HDR image is produced by the preset model from a single-frame LDR image, whereas the standard HDR image is obtained by fusing multiple LDR frames, so the standard HDR image can be regarded as an accurate reference. The standard HDR image can therefore be used as a label for the LDR image, specifically to constrain the predicted HDR image when training the preset model.
For example, 20 frames of LDR images with different exposure values are collected for point a, and these LDR images may be fused to obtain a standard HDR image, and then the standard HDR image may be used as a label of the 20 frames of LDR images.
Further, the predicted luminance value is determined from the predicted HDR image, and the actual luminance value is measured using a luminance meter, and thus, the actual luminance value can be considered to be accurate. The actual illumination value can be used as a label of an LDR image, the predicted illumination value is restrained, and a preset model is trained.
For example, 20 frames of LDR images with different exposure values are acquired for point a, and an actual illumination value may also be acquired when the image is acquired, and then the actual illumination value may be used as a label of the 20 frames of LDR images.
Specifically, the preset model is optimized by training it with the standard high dynamic range image as the label for the predicted high dynamic range image, and with the actual illuminance value as the label for the predicted illuminance value.
In practical application, the first loss can be determined according to the standard high dynamic range image and the prediction high dynamic range image;
determining a second loss according to the actual illumination value and the predicted illumination value corresponding to the low dynamic range image;
and optimizing parameters in a preset model according to the first loss and the second loss to obtain an illumination prediction model.
A loss function may be preset, and the loss function may be used to determine the first loss according to the standard high dynamic range image and the prediction high dynamic range image. The second loss may also be determined using the loss function, and the actual and predicted illumination values corresponding to the low dynamic range image.
Specifically, gradients can be backpropagated according to the first loss and the second loss to optimize the parameters of the preset model. Through multiple iterations, the first loss and/or the second loss can be made to meet a preset requirement; once the requirement is met, training can be considered complete and the illuminance prediction model is obtained.
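As a minimal sketch (assuming a PyTorch model, L1 losses, a balancing weight lam, and a differentiable helper illuminance_from_hdr such as the hemisphere integral above; none of these choices are specified by the patent), one optimization step could look like this:

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, ldr, standard_hdr, actual_lux, lam=1.0):
    """One optimization step with the two constraints described above.
    Assumed, not given by the patent: PyTorch, L1 losses, the weight lam,
    and a differentiable illuminance_from_hdr helper."""
    predicted_hdr = model(ldr)                           # LDR -> predicted HDR
    predicted_lux = illuminance_from_hdr(predicted_hdr)  # predicted HDR -> lux
    loss_hdr = F.l1_loss(predicted_hdr, standard_hdr)    # first loss
    loss_lux = F.l1_loss(predicted_lux, actual_lux)      # second loss
    loss = loss_hdr + lam * loss_lux
    optimizer.zero_grad()
    loss.backward()                                      # gradient backpropagation
    optimizer.step()
    return loss.item()
```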
Fig. 3 is a schematic diagram of a model training process according to an exemplary embodiment of the present application.
This embodiment is described using the multi-frame LDR images of one point location in the preset environment.
As shown in fig. 3, a multi-frame LDR image 31 acquired at point a of the preset environment based on different exposure values may be acquired. A standard HDR image 32 may be generated from the multi-frame LDR image 31. When acquiring a multi-frame LDR image 31, the actual illumination value 33 may also be acquired.
Inputting any frame image 311 in the multi-frame LDR image 31 into the preset model 34 can obtain the predicted HDR image 35, and the electronic device can determine the predicted luminance value 36 according to the predicted HDR image 35. Thereafter, the electronic device may compare the standard HDR image 32 to the predicted HDR image 35, and may also compare the actual luminance value 33 to the predicted luminance value 36, thereby training the preset model 34 based on the two constraints.
The model training method further comprises the following steps:
and acquiring an exposure value when the low dynamic range image is acquired.
When a single-frame LDR image is captured, the exposure value used to capture it can also be recorded. For most image capture devices, the exposure value is stored as part of the Exchangeable image file format (EXIF) metadata, so it can be read directly from the EXIF data.
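A small sketch of reading the exposure time, assuming Pillow is available and the camera writes the standard ExposureTime EXIF tag (tag names can vary by device):

```python
from PIL import Image
from PIL.ExifTags import TAGS

def read_exposure_time(path):
    """Read the exposure time (in seconds) from an image's EXIF metadata.
    Returns None if the tag is absent."""
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "ExposureTime":
            return float(value)
    return None
```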
And fusing the low dynamic range images with different exposure values to obtain a standard high dynamic range image. The implementation and principle of this step are similar to those of step 202, and are not described again.
And inputting the low dynamic range image and the exposure value corresponding to the low dynamic range image into a preset model to obtain a predicted high dynamic range image corresponding to the low dynamic range image.
When the model is trained, the electronic device may input the LDR images and their corresponding exposure values into a preset model, so as to generate predicted HDR images corresponding to the respective LDR images based on internal parameters of the preset model.
For example, the electronic device may extract image features of a single-frame LDR image according to parameters inside a preset model, and then generate a predicted HDR image of the single-frame LDR image by combining an exposure value and the image features of the LDR image.
A predicted illumination value is determined from the predicted high dynamic range image.
And optimizing parameters in a preset model according to the standard high dynamic range image, the predicted high dynamic range image, the actual illumination value corresponding to the low dynamic range image and the predicted illumination value to obtain an illumination prediction model.
Using the exposure values of the LDR images as training data may help the preset model better handle over-exposed and under-exposed regions, producing more accurate HDR predictions.
The implementation and principle of the above two steps are similar to those of steps 204 and 205, and training information of exposure values is added on the basis of the above embodiment, which is not described again.
Fig. 4 is a flowchart illustrating an ambient illuminance determination method according to an exemplary embodiment of the present application.
As shown in fig. 4, the ambient illuminance determination method provided by this embodiment includes:
step 401, acquiring a single-frame low dynamic range image obtained by shooting a preset environment.
The method provided by the present application may be executed by an electronic device with computing capability, such as a computer.
Specifically, the LDR image may be captured by an image acquisition device and sent to the electronic device that executes the method provided by the present application, so that the electronic device obtains a single-frame LDR image of the preset environment.
The single-frame LDR image is a still image; a frame is the smallest unit of a video or animation, i.e., a single still picture.
Step 402, inputting a single-frame low dynamic range image into an illumination prediction model to obtain a predicted HDR image corresponding to a preset environment; the illumination prediction model is obtained by training the low dynamic range image of the training environment acquired based on different exposure values and the actual illumination value of the training environment when acquiring the low dynamic range image.
Specifically, a pre-trained illuminance prediction model may be set in the electronic device, and the illuminance prediction model may be obtained by training according to any one of the embodiments shown in fig. 1 and fig. 2.
The acquired single frame LDR image may be input into an illumination prediction model that is capable of generating a predicted HDR image corresponding to the LDR image.
Step 403, determining a predicted luminance value corresponding to the preset environment according to the predicted HDR image.
Further, the electronic device can process the predicted HDR image to determine a predicted illuminance value. For example, the brightness value of each pixel can be determined from the pixel information of the predicted high dynamic range image, and the predicted illuminance value of the predicted high dynamic range image is then determined from those brightness values.
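For illustration, a minimal end-to-end inference sketch under the same assumptions as the training sketches above (a PyTorch model and the hypothetical pixel_luminance and hemisphere_illuminance helpers):

```python
import torch

def predict_illuminance(model, ldr_tensor):
    """Single-frame inference: LDR -> predicted HDR -> predicted illuminance.
    ldr_tensor is assumed to have shape (1, 3, H, W); reuses the hypothetical
    pixel_luminance / hemisphere_illuminance helpers sketched earlier."""
    model.eval()
    with torch.no_grad():
        predicted_hdr = model(ldr_tensor)                   # step 402
    hdr = predicted_hdr.squeeze(0).permute(1, 2, 0).cpu().numpy()
    return hemisphere_illuminance(pixel_luminance(hdr))     # step 403
```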
Fig. 5 is a flowchart illustrating an ambient illuminance determination method according to another exemplary embodiment of the present application.
As shown in fig. 5, the ambient illuminance determination method provided by this embodiment includes:
step 501, acquiring a single-frame low dynamic range image obtained by shooting a preset environment, and acquiring an exposure value of the single-frame low dynamic range image.
The LDR image of the preset environment is captured by an image acquisition device; for example, a single-frame LDR image in a store can be captured by a panoramic camera, such as a dual-fisheye camera. The image acquisition device may send the captured LDR image to the electronic device that executes the method provided by this embodiment, so that the electronic device obtains the single-frame LDR image.
Specifically, the image capture device may further send an exposure value used when capturing the LDR image to the electronic device. Wherein the exposure value may be obtained from the EXIF file of the camera.
Step 502, inputting a single-frame low dynamic range image and an exposure value when the single-frame low dynamic range image is collected into an illumination prediction model to obtain a prediction HDR image corresponding to a preset environment; the illumination prediction model is obtained by training a low dynamic range image of a training environment acquired based on different exposure values, and an actual illumination value and an exposure value of the training environment when the low dynamic range image is acquired.
Specifically, a pre-trained illuminance prediction model may be set in the electronic device, and the illuminance prediction model may be obtained by training through the embodiment shown in fig. 2. The training environment may be, for example, the preset environment in fig. 2.
For example, the predicted high dynamic range image may be displayed by an image display device, such as a computer or a camera. Therefore, the user can know the condition of the preset environment in real time.
Step 503, determining a predicted luminance value corresponding to the preset environment according to the predicted HDR image.
Optionally, the method provided by the present application may further include:
and step 504, adjusting the brightness of the illuminating lamp arranged in the preset environment according to the predicted illumination value.
The predicted illuminance value can be transmitted to an automatic lamp-brightness adjustment module to adjust the brightness of the lighting lamps. This can be implemented by a computer, or by a chip together with a program: the predicted illuminance value is passed to the automatic brightness adjustment module in the computer or chip, the program derives a brightness adjustment signal, and the signal is sent to the lighting lamps to adjust their brightness.
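A hedged sketch of such a module; the target illuminance, proportional gain, and 0-100% dimming interface are illustrative assumptions, not specified by the patent:

```python
def brightness_adjustment_signal(predicted_lux, target_lux=500.0,
                                 current_level=50.0, gain=0.1):
    """Proportional controller: nudge the dimming level (0-100%) toward a
    target illuminance. All parameter values here are illustrative."""
    error = target_lux - predicted_lux
    new_level = current_level + gain * error
    return max(0.0, min(100.0, new_level))
```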
Fig. 6 is a block diagram of a model training apparatus according to an exemplary embodiment of the present application.
As shown in fig. 6, the present application provides a model training apparatus 600, including:
an obtaining unit 610, configured to obtain a low dynamic range image of a preset environment acquired based on different exposure values, and an actual illuminance value of the preset environment when the low dynamic range image is acquired;
the processing unit 620 is configured to train the preset model according to each low dynamic range image and the actual illuminance value corresponding to the low dynamic range image, so as to obtain an illuminance prediction model; the illumination prediction model is used to obtain a high dynamic range image corresponding to the low dynamic range image, and the high dynamic range image is used to obtain a predicted illumination value corresponding to the low dynamic range image.
The principle, implementation and technical effect of the model training device provided by the application are similar to those of fig. 1, and are not repeated.
Fig. 7 is a block diagram of a model training apparatus according to another exemplary embodiment of the present application.
As shown in fig. 7, on the basis of the foregoing embodiment, in the model training apparatus 700 provided in the present application, the processing unit 620 includes:
the fusion module 621 is configured to fuse the low dynamic range images with different exposure values to obtain a standard high dynamic range image;
an identifying module 622, configured to input the low dynamic range image into a preset model, so as to obtain a predicted high dynamic range image corresponding to the low dynamic range image;
a determining module 623 for determining a predicted illumination value from the predicted high dynamic range image;
the training module 624 is configured to optimize parameters in the preset model according to the standard high dynamic range image, the predicted high dynamic range image, the actual illuminance value corresponding to the low dynamic range image, and the predicted illuminance value, so as to obtain an illuminance prediction model.
In the model training apparatus 700 provided in the present application, the low dynamic range images of the preset environment acquired by the acquiring unit 610 based on different exposure values include: low dynamic range images of each point location in the preset environment acquired based on different exposure values;
correspondingly, the fusion module 621 is specifically configured to perform preset fusion processing on the low dynamic range image with different exposure values corresponding to the same point location to obtain a standard high dynamic range image corresponding to the point location.
The fusion module 621 is specifically configured to perform fusion processing on each low dynamic range image with different exposure values corresponding to the same point location to obtain a first high dynamic range image; and calibrating the first high dynamic range image by using a preset factor to obtain a standard high dynamic range image.
The determining module 623 is specifically configured to:
determining the brightness value corresponding to each pixel point according to the pixel information of the predicted high dynamic range image;
and determining a prediction illumination value for predicting the high dynamic range image according to the brightness value of each pixel point.
The determining module 623 is specifically configured to:
and integrating the brightness of each pixel point in the appointed region of the predicted high dynamic range image to obtain a predicted illumination value of the predicted high dynamic range image.
Training module 624 is specifically configured to:
determining a first loss according to the standard high dynamic range image and the prediction high dynamic range image; determining a second loss according to the actual illumination value and the predicted illumination value corresponding to the low dynamic range image; and optimizing parameters in a preset model according to the first loss and the second loss to obtain an illumination prediction model.
In the model training apparatus 700 provided in the present application,
the obtaining unit 610 is further configured to obtain an exposure value when acquiring a low dynamic range image;
the processing unit 620 is specifically configured to:
fusing the low dynamic range images with different exposure values to obtain a standard high dynamic range image;
inputting the low dynamic range image and the exposure value corresponding to the low dynamic range image into a preset model to obtain a predicted high dynamic range image corresponding to the low dynamic range image;
determining a predicted illumination value according to the predicted high dynamic range image;
and optimizing parameters in a preset model according to the standard high dynamic range image, the predicted high dynamic range image, the actual illumination value corresponding to the low dynamic range image and the predicted illumination value to obtain an illumination prediction model.
Fig. 8 is a block diagram of an ambient illuminance determination apparatus according to an exemplary embodiment of the present application.
As shown in fig. 8, the present application provides an ambient illuminance determination apparatus 800, including:
the obtaining unit 810 is configured to obtain a single-frame low dynamic range image obtained by shooting a preset environment.
The recognition unit 820 is configured to input the single-frame low dynamic range image into the illuminance prediction model to obtain a predicted HDR image corresponding to a preset environment. The illumination prediction model is obtained by training the low dynamic range image of the training environment acquired based on different exposure values and the actual illumination value of the training environment when acquiring the low dynamic range image.
A luminance value determining unit 830 for determining a predicted luminance value corresponding to a preset environment from the predicted HDR image.
The obtaining unit 810 is further configured to obtain an exposure value when a single-frame low dynamic range image is acquired.
The recognition unit 820 is specifically configured to input the single-frame low dynamic range image and the exposure value when the single-frame low dynamic range image is collected into the illuminance prediction model to obtain a predicted HDR image corresponding to the preset environment. Wherein, the exposure value when acquiring the low dynamic range image is also used when training the illumination prediction model.
Optionally, the apparatus further comprises:
an adjusting unit 840 for adjusting the brightness of the illumination lamp set in the preset environment according to the predicted illumination value.
Fig. 9 is a block diagram of an electronic device according to an exemplary embodiment of the present application.
As shown in fig. 9, the electronic device provided in this embodiment includes:
a memory 901;
a processor 902; and
a computer program;
wherein a computer program is stored in the memory 901 and configured to be executed by the processor 902 to implement any one of the model training methods or the ambient illuminance determination methods as described above.
The present embodiment also provides a computer-readable storage medium having a computer program stored thereon; when the computer program is executed by a processor, it implements any one of the model training methods or ambient illuminance determination methods described above.
The present embodiment also provides a computer program product, which includes a computer program, and when the computer program is executed by a processor, the computer program implements any one of the above-mentioned model training method or the above-mentioned ambient illuminance determination method.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (17)

1. A method of model training, comprising:
acquiring a low dynamic range image of a preset environment acquired based on different exposure values and an actual illumination value of the preset environment when the low dynamic range image is acquired;
training a preset model according to each low dynamic range image and the actual illumination value corresponding to the low dynamic range image to obtain an illumination prediction model;
the illumination prediction model is used for acquiring a high dynamic range image corresponding to a low dynamic range image, and the high dynamic range image is used for acquiring a predicted illumination value corresponding to the low dynamic range image.
2. The method of claim 1, wherein the training a preset model according to each low dynamic range image and the actual illumination value corresponding to the low dynamic range image to obtain an illumination prediction model comprises:
fusing the low dynamic range images with different exposure values to obtain a standard high dynamic range image;
inputting the low dynamic range image into a preset model to obtain a predicted high dynamic range image corresponding to the low dynamic range image;
determining a predicted illumination value according to the predicted high dynamic range image;
and optimizing parameters in the preset model according to the standard high dynamic range image, the predicted high dynamic range image, the actual illumination value corresponding to the low dynamic range image and the predicted illumination value to obtain the illumination prediction model.
3. The method of claim 2, wherein the low dynamic range images of the preset environment comprise: low dynamic range images of each point location in the preset environment captured at different exposure values;
and wherein fusing the low dynamic range images captured at different exposure values to obtain a standard high dynamic range image comprises:
performing preset fusion processing on the low dynamic range images captured at different exposure values for the same point location, to obtain a standard high dynamic range image corresponding to that point location.
4. The method of claim 3, wherein the preset fusion processing comprises:
fusing the low dynamic range images captured at different exposure values for the same point location to obtain a first high dynamic range image; and
calibrating the first high dynamic range image with a preset factor to obtain the standard high dynamic range image.
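As a non-authoritative sketch of the fusion in claim 4: OpenCV's Debevec merge combines differently exposed frames into a relative-radiance HDR image, which is then scaled by a calibration factor. The factor value here is a placeholder standing in for the patent's "preset factor", which is not disclosed.

```python
import cv2
import numpy as np

def fuse_ldr_stack(ldr_images, exposure_times, calibration_factor=1.0):
    """Fuse LDR frames of one point location into a calibrated HDR image.

    ldr_images:         list of uint8 BGR frames of the same scene
    exposure_times:     their exposure times in seconds
    calibration_factor: scales relative radiance to absolute units
                        (a placeholder for the patent's "preset factor")
    """
    times = np.asarray(exposure_times, dtype=np.float32)
    merge = cv2.createMergeDebevec()
    first_hdr = merge.process(ldr_images, times=times)  # relative radiance map
    return first_hdr * calibration_factor               # calibrated "standard" HDR
```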
5. The method of claim 2, wherein determining a predicted illuminance value according to the predicted high dynamic range image comprises:
determining a luminance value for each pixel according to the pixel information of the predicted high dynamic range image; and
determining the predicted illuminance value of the predicted high dynamic range image according to the luminance values of the pixels.
6. The method of claim 5, wherein determining the predicted illuminance value of the predicted high dynamic range image according to the luminance values of the pixels comprises:
integrating the luminance of each pixel within a designated region of the predicted high dynamic range image to obtain the predicted illuminance value of the predicted high dynamic range image.
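One plausible reading of claims 5-6, sketched below under the assumption that the HDR image stores linear RGB radiance (convert from BGR first if it came from OpenCV): convert each pixel to luminance with Rec. 709 weights, then average (a discrete integral) over a designated region. The region format and the output scale are assumptions, not specified by the patent.

```python
import numpy as np

def illuminance_from_hdr(hdr, region=None):
    """Predicted illuminance from a linear-RGB HDR image of shape (H, W, 3).

    region: optional (y0, y1, x0, x1) designated region; defaults to the
            whole image. The physical scale of the result depends on how
            the HDR image was calibrated.
    """
    if region is not None:
        y0, y1, x0, x1 = region
        hdr = hdr[y0:y1, x0:x1]
    # Rec. 709 luminance weights for linear RGB
    luminance = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    return float(luminance.mean())  # discrete "integration" over the region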
7. The method of any one of claims 2-6, wherein optimizing parameters in the preset model according to the standard high dynamic range image, the predicted high dynamic range image, the actual illuminance value corresponding to the low dynamic range image, and the predicted illuminance value, to obtain the illumination prediction model, comprises:
taking the standard high dynamic range image and the actual illuminance value of the low dynamic range image as labels of the low dynamic range image; and
optimizing parameters in the preset model according to the labels of the low dynamic range image, the predicted high dynamic range image of the low dynamic range image determined by the preset model, and the predicted illuminance value determined from the predicted high dynamic range image, to obtain the illumination prediction model.
8. The method of claim 7, wherein optimizing parameters in the preset model according to the labels of the low dynamic range image, the predicted high dynamic range image of the low dynamic range image determined by the preset model, and the predicted illuminance value determined from the predicted high dynamic range image, to obtain the illumination prediction model, comprises:
determining a first loss according to the standard high dynamic range image and the predicted high dynamic range image;
determining a second loss according to the actual illuminance value corresponding to the low dynamic range image and the predicted illuminance value; and
optimizing parameters in the preset model according to the first loss and the second loss, to obtain the illumination prediction model.
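A minimal PyTorch sketch of the two-loss optimization in claim 8, assuming a differentiable illuminance computation (Rec. 709 luminance averaged per image). The loss types (L1/MSE) and the weights are assumptions; the patent does not specify them.

```python
import torch
import torch.nn.functional as F

def training_step(preset_model, optimizer, ldr, standard_hdr, actual_lux,
                  w_first=1.0, w_second=0.1):
    """One optimization step combining claim 8's first (image) and second
    (illuminance) losses. ldr and standard_hdr are (N, 3, H, W) tensors;
    actual_lux is (N,)."""
    predicted_hdr = preset_model(ldr)
    # Differentiable illuminance: Rec. 709 luminance averaged over each image.
    weights = torch.tensor([0.2126, 0.7152, 0.0722], device=ldr.device)
    predicted_lux = (predicted_hdr * weights.view(1, 3, 1, 1)).sum(1).mean(dim=(1, 2))

    first_loss = F.l1_loss(predicted_hdr, standard_hdr)  # HDR image loss
    second_loss = F.mse_loss(predicted_lux, actual_lux)  # illuminance loss
    loss = w_first * first_loss + w_second * second_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```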
9. The method of any one of claims 2-6, further comprising:
acquiring the exposure value at which the low dynamic range image was captured;
wherein inputting the low dynamic range image into the preset model to obtain the predicted high dynamic range image corresponding to the low dynamic range image comprises:
inputting the low dynamic range image and its corresponding exposure value into the preset model to obtain the predicted high dynamic range image corresponding to the low dynamic range image.
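Claim 9 feeds the exposure value alongside the image but does not say how. One common way, assumed here purely for illustration, is to broadcast the exposure value as an extra input plane, so the model's first layer must accept four channels instead of three:

```python
import torch

def with_exposure_channel(ldr, exposure_value):
    """Append a constant exposure-value plane, giving an (N, 4, H, W) input.

    ldr:            (N, 3, H, W) image batch
    exposure_value: (N,) exposure value at capture time (how to normalize
                    it is an assumption left to the user)
    """
    n, _, h, w = ldr.shape
    ev_plane = exposure_value.view(n, 1, 1, 1).expand(n, 1, h, w)
    return torch.cat([ldr, ev_plane], dim=1)
```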
10. An ambient illuminance determination method, comprising:
acquiring a single-frame low dynamic range image captured of a preset environment;
inputting the single-frame low dynamic range image into an illumination prediction model to obtain a predicted high dynamic range image corresponding to the single-frame low dynamic range image;
determining a predicted illuminance value for the preset environment according to the predicted high dynamic range image;
wherein the illumination prediction model is trained with low dynamic range images of a training environment captured at different exposure values and the actual illuminance value of the training environment at the time each low dynamic range image was captured.
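Sketching the inference path of claim 10 with the hypothetical helpers above: a single LDR frame goes through the trained model to produce an HDR image, from which the illuminance is computed. The `illuminance_from_hdr` helper is the one sketched after claim 6; the preprocessing (BGR-to-RGB, scaling to [0, 1]) is an assumption.

```python
import cv2
import torch

def predict_illuminance(illumination_model, frame_bgr, illuminance_from_hdr):
    """Single LDR frame -> predicted HDR -> predicted illuminance (sketch)."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    x = torch.from_numpy(rgb).float().permute(2, 0, 1).unsqueeze(0) / 255.0
    with torch.no_grad():
        predicted_hdr = illumination_model(x)[0].permute(1, 2, 0).cpu().numpy()
    return illuminance_from_hdr(predicted_hdr)
```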
11. The method of claim 10, further comprising:
acquiring the exposure value at which the single-frame low dynamic range image was captured;
wherein inputting the single-frame low dynamic range image into the illumination prediction model comprises:
inputting the single-frame low dynamic range image and the exposure value at which it was captured into the illumination prediction model;
wherein the exposure values at which the training low dynamic range images were captured are also used in training the illumination prediction model.
12. The method of claim 10 or 11, further comprising:
adjusting the brightness of an illuminating lamp arranged in the preset environment according to the predicted illuminance value.
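Claim 12's closed-loop use might, for example, nudge a dimmable lamp toward a target illuminance. The proportional controller, the gain, and the `lamp.brightness` attribute below are entirely hypothetical; the patent does not describe a control law.

```python
def adjust_lamp(lamp, predicted_lux, target_lux, gain=0.05):
    """Proportional nudge of a dimmable lamp toward a target illuminance.

    lamp is assumed to expose a `brightness` attribute in [0.0, 1.0];
    both the attribute and the gain are illustrative assumptions.
    """
    relative_error = (target_lux - predicted_lux) / max(target_lux, 1e-6)
    lamp.brightness = min(1.0, max(0.0, lamp.brightness + gain * relative_error))
```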
13. A model training apparatus, comprising:
an acquisition unit, configured to acquire low dynamic range images of a preset environment captured at different exposure values, and an actual illuminance value of the preset environment at the time each low dynamic range image was captured; and
a processing unit, configured to train a preset model according to each low dynamic range image and the actual illuminance value corresponding to that low dynamic range image, to obtain an illumination prediction model; wherein the illumination prediction model is used to obtain a high dynamic range image corresponding to a low dynamic range image, and the high dynamic range image is used to obtain a predicted illuminance value corresponding to the low dynamic range image.
14. An ambient illuminance determination device, comprising:
an acquisition unit, configured to acquire a single-frame low dynamic range image captured of a preset environment;
an identification unit, configured to input the single-frame low dynamic range image into an illumination prediction model to obtain a predicted high dynamic range image corresponding to the single-frame low dynamic range image; and
an illuminance value determination unit, configured to obtain a predicted illuminance value for the preset environment according to the predicted high dynamic range image;
wherein the illumination prediction model is trained with low dynamic range images of a training environment captured at different exposure values and the actual illuminance value of the training environment at the time each low dynamic range image was captured.
15. An electronic device, comprising a memory and a processor, wherein:
the memory is configured to store a computer program; and
the processor is configured to read the computer program stored in the memory and, according to that program, execute the method of any one of claims 1-9 or 10-12.
16. A computer-readable storage medium having computer-executable instructions stored thereon which, when executed by a processor, perform the method of any one of claims 1-9 or 10-12.
17. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, carries out the method of any one of claims 1-9 or 10-12.
CN202110325567.6A 2021-03-26 2021-03-26 Model training method, illuminance determination method, device, and program product Pending CN115134974A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110325567.6A CN115134974A (en) 2021-03-26 2021-03-26 Model training method, illuminance determination method, device, and program product

Publications (1)

Publication Number Publication Date
CN115134974A true CN115134974A (en) 2022-09-30

Family

ID=83374173

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110325567.6A Pending CN115134974A (en) 2021-03-26 2021-03-26 Model training method, illuminance determination method, device, and program product

Country Status (1)

Country Link
CN (1) CN115134974A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination