WO2021115419A1 - Image processing method, terminal, and storage medium - Google Patents

Image processing method, terminal, and storage medium

Info

Publication number
WO2021115419A1
Authority
WO
WIPO (PCT)
Prior art keywords
infrared
classification model
terminal
image
characteristic value
Application number
PCT/CN2020/135630
Other languages
English (en)
French (fr)
Inventor
王琳
Original Assignee
Oppo广东移动通信有限公司
Application filed by Oppo广东移动通信有限公司
Publication of WO2021115419A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; scene-specific elements
    • G06V20/35 — Categorising the entire scene, e.g. birthday party or wedding scene
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/10 — Image acquisition
    • G06V10/12 — Details of acquisition arrangements; constructional details thereof
    • G06V10/14 — Optical characteristics of the device performing the acquisition or of the illumination arrangements
    • G06V10/143 — Sensing or illuminating at different wavelengths

Definitions

  • the embodiments of the present application relate to the field of image processing technology, and in particular, to an image processing method, terminal, and storage medium.
  • scene prediction will become one of the important functions required by the terminal for image processing.
  • when the terminal is performing scene prediction, it can either deploy additional auxiliary equipment to collect specific data and then identify the scene, or use image processing methods to distinguish the scene.
  • the embodiments of the present application provide an image processing method, terminal, and storage medium, which can reduce the complexity of prediction, thereby improving prediction efficiency, and at the same time, improving the accuracy of scene prediction, thereby improving image processing effects.
  • an embodiment of the present application provides an image processing method, which is applied to a first terminal, and the method includes:
  • the first infrared information, the second infrared information, and the visible light component corresponding to the current image are detected by the color temperature sensor; wherein the first infrared information and the second infrared information are respectively obtained by the color temperature sensor using two different transceiver bands;
  • based on a preset classification model, a scene prediction result is obtained according to the brightness parameter corresponding to the current image, the first infrared characteristic value, and the second infrared characteristic value, so as to perform image processing according to the scene prediction result; wherein the preset classification model is used to classify multiple scenes according to differences in spectral energy.
  • an embodiment of the present application provides an image processing method, which is applied to a second terminal, and the method includes:
  • the pre-stored image library is divided to obtain training data and test data; wherein the pre-stored image library stores multiple images of different scenes, and different scenes correspond to different spectral energies;
  • a preset classification model is obtained; wherein the preset classification model is used to classify multiple scenes according to differences in spectral energy.
  • an embodiment of the present application provides a first terminal, and the first terminal includes: a detection part, a generation part, and a first acquisition part,
  • the detection part is configured to detect the first infrared information, the second infrared information, and the visible light component corresponding to the current image through the color temperature sensor; wherein the first infrared information and the second infrared information are respectively obtained by the color temperature sensor using two different transmit and receive bands;
  • the generating part is configured to generate a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information, and the visible light component;
  • the first obtaining part is configured to obtain a scene prediction result based on a preset classification model according to the brightness parameter corresponding to the current image, the first infrared characteristic value, and the second infrared characteristic value, so as to perform image processing according to the scene prediction result; wherein the preset classification model is used to classify multiple scenes according to differences in spectral energy.
  • an embodiment of the present application provides a second terminal, where the second terminal includes: a division part, a second acquisition part, and a processing part,
  • the dividing part is configured to divide the pre-stored image library to obtain training data and test data; wherein, the pre-stored image library stores multiple images of different scenes, and different scenes correspond to different spectral energies;
  • the second acquisition part is configured to use the training data to train a preset loss function to obtain an initial classification model, and obtain a preset classification model according to the test data and the initial classification model; wherein the preset classification model is used to classify multiple scenes according to differences in spectral energy.
  • an embodiment of the present application provides a first terminal.
  • the first terminal includes a first processor and a first memory storing instructions executable by the first processor. When the instructions are executed by the first processor, the image processing method described above is implemented.
  • an embodiment of the present application provides a second terminal.
  • the second terminal includes a second processor and a second memory storing instructions executable by the second processor. When the instructions are executed by the second processor, the image processing method described above is implemented.
  • an embodiment of the present application provides a computer-readable storage medium with a program stored thereon and applied to a first terminal and a second terminal.
  • when the program is executed by a processor, the above-mentioned image processing method is implemented.
  • the embodiments of the present application provide an image processing method, terminal, and storage medium.
  • the first terminal detects the first infrared information, the second infrared information, and the visible light component corresponding to the current image through a color temperature sensor, where the first infrared information and the second infrared information are obtained by the color temperature sensor using two different transmit and receive bands; generates the first infrared characteristic value and the second infrared characteristic value according to the first infrared information, the second infrared information, and the visible light component; and, based on the preset classification model, obtains the scene prediction result according to the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image, so as to perform image processing according to the scene prediction result; here, the preset classification model is used to classify multiple scenes according to differences in spectral energy.
  • the second terminal divides the pre-stored image library to obtain training data and test data, where the pre-stored image library stores multiple images of different scenes and different scenes correspond to different spectral energies; uses the training data to train a preset loss function to obtain the initial classification model; and obtains the preset classification model according to the test data and the initial classification model.
  • the image processing method proposed in the embodiment of this application can first use the color temperature sensor to collect the visible light component and two different pieces of infrared information from the spectrum corresponding to the current image, then use the visible light component and the two pieces of infrared information to determine the corresponding two infrared feature values, and, combined with the brightness parameter corresponding to the current image, realize scene prediction of the current image based on a preset classification model, where the preset classification model is obtained through training and testing based on the infrared feature data and brightness feature data of the images in the pre-stored image library.
  • that is, the terminal uses the infrared and brightness features of images to train the preset classification model, and then, based on the preset classification model, predicts the scene of the current image from the infrared and brightness features of the current image. Image processing can then be performed according to the scene prediction result, which reduces the complexity of prediction, thereby improving prediction efficiency, while at the same time improving the accuracy of scene prediction, thereby improving the image processing effect.
  • Figure 1 is the first schematic diagram of the implementation process of the image processing method
  • Figure 2 is the first schematic diagram of the position of the color temperature sensor
  • Figure 3 is the second schematic diagram of the position of the color temperature sensor
  • Figure 4 is a schematic diagram of the current setting of the color temperature sensor
  • Figure 5 is the third schematic diagram of the position of the color temperature sensor
  • Figure 6 is the fourth schematic diagram of the position of the color temperature sensor
  • Figure 7 is a schematic diagram of the spectral response curves of the color temperature sensor
  • Figure 8 is a schematic diagram of the different detection channels
  • Figure 9 is a schematic diagram of the time-domain signal before time-frequency transformation
  • Figure 10 is a schematic diagram of the frequency-domain signal after time-frequency transformation
  • Figure 11 is the second schematic diagram of the implementation process of the image processing method
  • Figure 12 is a schematic diagram of the spectral energy distribution of a fluorescent lamp
  • Figure 13 is a schematic diagram of the spectral energy distribution of sunlight
  • Figure 14 is a schematic diagram of the spectral energy distribution of an incandescent lamp
  • Figure 15 is the first schematic diagram of the composition structure of the first terminal
  • Figure 16 is the second schematic diagram of the composition structure of the first terminal
  • Figure 17 is the first schematic diagram of the composition structure of the second terminal
  • Figure 18 is the second schematic diagram of the composition structure of the second terminal.
  • there are many ways for the terminal to predict scenes. Specifically, there are methods based on external devices, such as wireless network (Wireless Fidelity, Wi-Fi), light sensing, and infrared devices, and methods based on the image itself. The methods based on the image itself can be divided into traditional threshold classification methods and machine learning methods.
  • for different scenes, the way the terminal performs image processing may differ.
  • AE: automatic exposure
  • AWB: automatic white balance
  • a good scene prediction method can help the AWB algorithm easily obtain a faithful color restoration effect; whether for low-brightness outdoor scenes or high-brightness indoor scenes, it can reduce the difficulty of restoration for the AWB algorithm itself.
  • for the AE algorithm, if the scene corresponding to the current image can be accurately determined as outdoor, there is no need to consider the anti-flicker issue at all, which provides more flexibility.
  • when using image processing methods for scene prediction, on the one hand, feature extraction needs to rely on full-size images (such as 4000×3000) and apply multi-scale filtering to extract a large number of structural features, while the image signal processor (ISP) of portable terminals such as mobile phones can usually only provide small-size images (such as 120×90). In this case, the accuracy of the features obtained by applying a filtering method designed for full-size images is greatly reduced, thereby reducing the accuracy of scene prediction.
  • on the other hand, image processing methods extract high-dimensional, structurally related features from the current image, and the number of features is usually large; it is difficult to perform real-time processing on portable terminals such as mobile phones, thereby reducing prediction efficiency.
  • in addition, a scene recognition algorithm based on YUV data sits after the demosaic algorithm on the ISP and tends to see the final scene; due to the resulting deviation in the time domain, its output cannot be readily used by the ISP front-end algorithms such as AE, AWB, and automatic focus (AF).
  • in other words, scene prediction methods based on image processing have high computational complexity, reduced prediction efficiency, and poor scene prediction accuracy.
  • the embodiment of the present application proposes an image processing method, which can first use the color temperature sensor to collect the visible light component and two different pieces of infrared information from the spectrum corresponding to the current image, then use the visible light component and the two pieces of infrared information to determine the corresponding two infrared feature values, and, combined with the brightness parameter corresponding to the current image, realize scene prediction of the current image based on a preset classification model, where the preset classification model is obtained through training and testing based on the infrared feature data and brightness feature data of the images in the pre-stored image library. That is to say, in this application, the terminal uses the infrared feature and brightness feature of images to train the preset classification model, and then, based on the preset classification model, predicts the scene of the current image according to the infrared feature and brightness feature of the current image. Image processing can then be performed according to the scene prediction result, which reduces the complexity of prediction, thereby improving prediction efficiency, while at the same time improving the accuracy of scene prediction, thereby improving the image processing effect.
  • Figure 1 is a schematic diagram of the implementation process of the image processing method.
  • the method for the first terminal to perform image processing may include the following steps:
  • Step 101: Detect the first infrared information, the second infrared information, and the visible light component corresponding to the current image by the color temperature sensor; wherein the first infrared information and the second infrared information are respectively acquired by the color temperature sensor using two different transmit and receive bands.
  • the first terminal may first obtain the first infrared information, the second infrared information, and the visible light component through the detection of the configured color temperature sensor.
  • the first infrared information and the second infrared information may be different infrared data respectively obtained by the color temperature sensor using two different transmitting and receiving bands.
  • the first terminal can be any device with communication and storage functions, such as a tablet computer, mobile phone, e-reader, remote control, personal computer (PC), notebook computer, in-vehicle device, Internet TV, or wearable device.
  • the first terminal may be a device that uses a preset classification model to perform image processing, where the first terminal may also be a device that learns and trains the preset classification model at the same time.
  • the first terminal may be provided with a photographing device for image collection.
  • the first terminal may be provided with at least one front camera and at least one rear camera.
  • the current image may be obtained by shooting by the first terminal through a set shooting device.
  • the first terminal may also be provided with a color temperature sensor.
  • the first terminal may be provided with a color temperature sensor on the side of the front camera, or with a color temperature sensor on the side of the rear camera.
  • the terminal is provided with a front camera on the front cover and a rear camera on the back cover; therefore, the color temperature sensor can be provided in the first area of the front cover, where the first area represents the area adjacent to the front camera.
  • the color temperature sensor can also be arranged in the second area of the back cover; wherein, the second area represents the area adjacent to the rear camera.
  • Figure 2 is the first schematic diagram of the position of the color temperature sensor
  • Figure 3 is the second schematic diagram of the position of the color temperature sensor.
  • a color temperature sensor is arranged on the left side of the front camera of the first terminal
  • a color temperature sensor is arranged on the lower side of the rear camera of the first terminal.
  • a common color temperature sensor placement is to set the color temperature sensor in the bangs (notch) area of a full screen.
  • Figure 4 is a schematic diagram of the current color temperature sensor setting. As shown in Figure 4, the terminal places the color temperature sensor in the bangs area, below the ink coating. However, a color temperature sensor arranged in the bangs area requires opening a large hole in the ink coating of the terminal, which has a great impact on the appearance of the industrial design (ID).
  • the top of the first terminal may be provided with a gap, and therefore, the first terminal may have the color temperature sensor disposed in the gap at the top.
  • Figure 5 is the third schematic diagram of the position of the color temperature sensor
  • Figure 6 is the fourth schematic diagram of the position of the color temperature sensor. As shown in Figures 5 and 6, with the color temperature sensor set in the gap on the top of the first terminal, whether viewed from the front of the first terminal (Figure 5) or the back (Figure 6), the color temperature sensor does not affect the appearance of the first terminal, and a color temperature sensor arranged in the gap does not require the first terminal to open a large hole in the ink coating.
  • the first terminal can detect the environmental parameters corresponding to the current image through the configured color temperature sensor.
  • the color temperature sensor can detect the red R, green G, and blue B components corresponding to the current image, the visible light C, the full-spectrum wide band WB, the correlated colour temperature (CCT), and the flicker frequencies (FD) of the two channels, FD1 and FD2.
  • Figure 7 is a schematic diagram of the spectral response of the color temperature sensor. As shown in Figure 7, the spectral response curves corresponding to R, G, B, C, WB, FD1, and FD2 detected by the color temperature sensor change differently as the wavelength changes.
  • the first infrared information and the second infrared information are not the same.
  • the first infrared information can be used to measure the intensity of the infrared band from 800 nm to 900 nm, and the second infrared information can be used to measure the intensity of the infrared band from 950 nm to 1000 nm.
  • the color temperature sensor configured in the first terminal can detect the infrared light in the environment corresponding to the current image through different transmitting and receiving bands, so that the first infrared information and the second infrared information can be obtained.
  • Figure 8 is a schematic diagram of the different detection channels. As shown in Figure 8, the first terminal can use the two frequencies of 50 Hz and 60 Hz, respectively, to perform infrared band detection.
  • the first terminal may obtain, through the color temperature sensor, the first time-domain information detected by the first infrared channel, that is, the first infrared information; at the same time, the first terminal may also obtain, through the color temperature sensor, the second time-domain information detected by the second infrared channel, that is, the second infrared information.
  • the first terminal can also obtain the components of the visible light band through the color temperature sensor, that is, obtain the visible light components.
  • Step 102: Generate a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information, and the visible light component.
  • after the first terminal uses the color temperature sensor to detect the first infrared information, the second infrared information, and the visible light component, it can directly generate the first infrared characteristic value and the second infrared characteristic value corresponding to the current image according to the first infrared information, the second infrared information, and the visible light component.
  • when the first terminal generates the first infrared characteristic value and the second infrared characteristic value, it may first perform time-frequency transform processing on the first infrared information to obtain the first DC component corresponding to the first infrared information; at the same time, it can perform time-frequency transform processing on the second infrared information to obtain the second DC component corresponding to the second infrared information.
  • Figure 9 is a schematic diagram of the time-domain signal before time-frequency transformation, and Figure 10 is a schematic diagram of the frequency-domain signal after time-frequency transformation. As shown in Figures 9 and 10, the time-domain infrared signal is converted into the corresponding DC component in the frequency domain.
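  • The transform above can be sketched as follows. The patent does not name the transform; this sketch assumes a discrete Fourier transform, whose 0th bin is simply the sum of the samples, so dividing by the sample count recovers the DC level of each infrared channel. The sample rate, flicker frequencies, and signal levels are hypothetical.

```python
import math

def dc_component(samples):
    """DC (0 Hz) term of a discrete Fourier transform: the 0th DFT
    bin X[0] equals the sum of the samples (e^0 = 1 for every term),
    so dividing by N yields the mean (DC) level of the signal."""
    return sum(samples) / len(samples)

# Hypothetical readings from the two infrared channels: a DC level
# plus 50 Hz / 60 Hz flicker ripple, sampled at 1 kHz for 0.2 s.
t = [k / 1000.0 for k in range(200)]
ir1 = [0.8 + 0.05 * math.sin(2 * math.pi * 50 * tk) for tk in t]
ir2 = [0.3 + 0.05 * math.sin(2 * math.pi * 60 * tk) for tk in t]

dc1, dc2 = dc_component(ir1), dc_component(ir2)
```

Because each window spans an integer number of flicker periods, the ripple cancels and only the DC level of each channel remains.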
  • after the first terminal performs time-frequency transform processing on the first infrared information and the second infrared information respectively to obtain the first DC component and the second DC component, it can use the first DC component, the second DC component, and the visible light component to further generate the first infrared characteristic value and the second infrared characteristic value.
  • specifically, the first terminal may use the second DC component and the visible light component to calculate the first infrared characteristic value; correspondingly, the first terminal may also use the first DC component and the second DC component to calculate the second infrared characteristic value.
  • the first infrared feature can be used to measure the intensity of the infrared band from 800 nm to 900 nm, and the second infrared feature can be used to measure the intensity of the infrared band from 950 nm to 1000 nm.
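  • The feature construction above can be sketched as follows. The patent states only which inputs each feature is computed from (the second DC component and the visible light component for the first, the two DC components for the second) and gives no formula; simple ratios are assumed here for illustration.

```python
def infrared_features(dc1, dc2, visible):
    """Hypothetical feature construction from the two infrared DC
    components and the visible light component; the ratio forms
    below are assumptions, not the patent's exact formulas."""
    ir_feature_1 = dc2 / visible   # infrared energy relative to visible energy
    ir_feature_2 = dc1 / dc2       # ratio between the two infrared bands
    return ir_feature_1, ir_feature_2

# Using the hypothetical DC levels 0.8 and 0.3 with visible component 2.0:
f1, f2 = infrared_features(0.8, 0.3, 2.0)
```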
  • Step 103: Based on the preset classification model, obtain the scene prediction result according to the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image, so as to perform image processing according to the scene prediction result; wherein the preset classification model is used to classify multiple scenes according to differences in spectral energy.
  • the first terminal may, based on the preset classification model, use the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image to obtain the scene prediction result corresponding to the current image, and then perform image processing on the current image according to the scene prediction result.
  • the preset classification model may be used to classify multiple scenes according to differences in spectral energy to obtain the type of the scene.
  • the preset classification model may be a classifier obtained by training of the first terminal based on infrared features and brightness features. That is, the first terminal can use the preset classification model to distinguish between outdoor scenes and indoor scenes according to the difference in spectral energy.
  • the present application can directly use the infrared band information detected by the color temperature sensor to obtain distinctive feature information. That is to say, in this application, the first terminal may use the infrared information obtained by the color temperature sensor as the feature information for scene prediction based on the preset classification model according to the difference in spectral energy.
  • the preset classification model used for scene prediction may be a typical classification model such as a logistic regression model, a Bayesian classifier, ensemble learning, a decision tree, or a Support Vector Machine (SVM) model.
  • the first terminal can train the preset classification model based on parameters such as the infrared feature data and brightness feature data corresponding to the pre-stored image library, so that the trained preset classification model can output the classification parameter corresponding to the current image from the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image.
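  • The training step above can be sketched with logistic regression, one of the classifiers the patent lists. The feature vectors, labels, and hyperparameters below are hypothetical toy data, not values from the patent; they merely illustrate mapping [brightness, ir_feature_1, ir_feature_2] to a classification parameter.

```python
import math

def train_logistic(samples, labels, lr=0.5, epochs=500):
    """Minimal logistic-regression trainer via gradient descent on the
    log-loss. Each sample is [brightness, ir_feature_1, ir_feature_2];
    labels are 0 (indoor) or 1 (outdoor). Returns weights and bias."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - y                        # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Output the classification parameter (probability of 'outdoor')."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy training data (hypothetical): outdoor scenes show stronger
# infrared energy and brightness than indoor scenes.
train_x = [[0.9, 0.8, 0.7], [0.8, 0.9, 0.6],   # outdoor
           [0.2, 0.1, 0.3], [0.3, 0.2, 0.2]]   # indoor
train_y = [1, 1, 0, 0]
w, b = train_logistic(train_x, train_y)
```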
  • before the first terminal obtains the scene prediction result based on the preset classification model and the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image, and performs image processing according to the scene prediction result, it needs to obtain the brightness parameter corresponding to the current image. Specifically, the first terminal can read the corresponding attribute parameters from the attribute information corresponding to the current image, and then use the attribute parameters to determine the brightness parameter corresponding to the current image.
  • the attribute parameter may be a specific parameter corresponding to the image obtained by the photographing device when the first terminal photographs the current image.
  • the attribute parameters may include aperture value parameters, shutter speed parameters, and sensitivity parameters.
  • the aperture value parameter Av is a quantitative expression of the aperture value F_number; the aperture is usually expressed as an F value, and the aperture value parameter Av can be expressed as log(F_number). The shutter speed parameter Tv is a quantitative expression of the shutter speed; the shutter speed is usually expressed as a fraction 1/Shutter_Speed, and the shutter speed parameter Tv can be expressed as log(1/Shutter_Speed). The sensitivity parameter Sv is a quantitative expression of the sensitivity (ISO), and can be expressed as log(ISO).
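  • The three expressions above can be computed directly. The patent writes log(·) without specifying a base; base 2 (the APEX convention for exposure values) is assumed here, and the sample exposure settings are hypothetical.

```python
import math

def exposure_parameters(f_number, exposure_time, iso):
    """Quantitative exposure parameters as defined in the text:
    Av = log(F_number), Tv = log(1/Shutter_Speed), Sv = log(ISO).
    Base-2 logarithms are an assumption (APEX convention)."""
    av = math.log2(f_number)             # aperture value parameter
    tv = math.log2(1.0 / exposure_time)  # shutter speed parameter
    sv = math.log2(iso)                  # sensitivity parameter
    return av, tv, sv

# Hypothetical reading: f/2.0, 1/125 s exposure, ISO 400.
av, tv, sv = exposure_parameters(2.0, 1 / 125, 400)
```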
  • the AE algorithm usually adjusts the brightness of the image by adjusting the aperture size, the shutter speed, and the sensitivity.
  • the Av value under outdoor natural light is greater than the indoor Av value
  • the Tv value under outdoor natural light is greater than the indoor Tv value
  • the Sv value under outdoor natural light is less than the indoor Sv value.
  • the attribute information is set for images taken by the photographing device configured on the first terminal, and is used to store and record the attribute information and shooting data of the image.
  • the attribute information includes the attribute parameters and shooting data corresponding to the current image.
  • specifically, the first terminal reads the Av, Tv, Sv, and other attribute parameters corresponding to the current image from the pre-stored exchangeable image file format (Exif) data.
  • the aperture value parameter, shutter speed parameter, and sensitivity parameter can effectively reflect the brightness of the scene where the current image is located, and therefore the brightness parameter can be further determined based on the aperture value parameter, shutter speed parameter, and sensitivity parameter to predict the scene corresponding to the current image.
  • the first terminal may first normalize the attribute parameter, and then obtain the brightness parameter.
  • normalization is a dimensionless processing method that turns the absolute values of a physical system into relative values. Specifically, normalization is an effective way to simplify calculations and reduce magnitudes.
  • it should be noted that, because the preset classification model is obtained by the first terminal based on parameters such as the infrared feature data and brightness feature data corresponding to the pre-stored image library, the first terminal needs to first obtain the brightness parameter corresponding to the current image when using the preset classification model to perform scene prediction on the current image.
  • further, since the aperture value parameter, the shutter speed parameter, and the sensitivity parameter are the specific attribute parameters of the shooting device when the first terminal captures the current image, the values of the aperture value parameter, shutter speed parameter, and sensitivity parameter differ greatly between different scenes.
  • for example, the Av value under outdoor natural light is greater than the indoor Av value, the Tv value under outdoor natural light is greater than the indoor Tv value, and the Sv value under outdoor natural light is less than the indoor Sv value.
  • therefore, when the first terminal performs scene prediction on the current image, it can first normalize the aperture value parameter, shutter speed parameter, and sensitivity parameter corresponding to the current image, and then input the normalized aperture value parameter, the normalized shutter speed parameter, and the normalized sensitivity parameter into the preset classification model as the brightness feature information.
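  • The normalization step above can be sketched with min-max scaling. The patent does not specify the normalization scheme or the parameter ranges; the ranges below are hypothetical device limits chosen for illustration.

```python
def normalize(value, lo, hi):
    """Dimensionless min-max normalization of an attribute parameter
    to the range [0, 1]. The bounds are assumed device limits."""
    return (value - lo) / (hi - lo)

# Hypothetical ranges for a phone camera:
# Av in [0, 4], Tv in [-2, 13], Sv in [5, 13].
brightness_features = [
    normalize(1.0, 0.0, 4.0),    # normalized aperture value parameter Av
    normalize(7.0, -2.0, 13.0),  # normalized shutter speed parameter Tv
    normalize(8.6, 5.0, 13.0),   # normalized sensitivity parameter Sv
]
```

The resulting vector can then be fed to the classification model alongside the two infrared feature values.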
  • Because the preset classification model is learned and trained on infrared feature data and brightness feature data, when the first terminal uses it for scene prediction it needs not only the brightness parameter corresponding to the current image but also the infrared parameters of the current image. The first terminal therefore combines the first infrared characteristic value and the second infrared characteristic value, which characterize the infrared parameters, with the brightness parameter to obtain the classification parameter corresponding to the current image; the classification parameter is used to predict the scene.
  • When the first terminal obtains the scene prediction result based on the preset classification model according to the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image, the classification parameter may first be obtained from the preset classification model, and the scene prediction result corresponding to the current image is then determined according to the classification parameter.
  • After outputting the classification parameter corresponding to the current image based on the preset classification model, the first terminal can directly use the classification parameter to determine the scene prediction result corresponding to the current image. Specifically, the first terminal may use the classification parameter to perform scene prediction and obtain the scene prediction result, which can be either an indoor scene or an outdoor scene.
  • When the first terminal uses the classification parameter to perform scene prediction, it may consider the scene prediction result to be an indoor scene when the classification parameter belongs to the first preset value range, and an outdoor scene when the classification parameter belongs to the second preset value range.
  • The terminal may be provided with a corresponding first preset value range and second preset value range, where the first preset value range and the second preset value range do not overlap.
  • the first preset value range may be set to (-20, 0)
  • the second preset value range may be set to (0, 33).
  • The settings of the first preset value range and the second preset value range correspond to the preset classification model; that is, for different preset classification models, the first and second preset value ranges set by the terminal may also differ. Therefore, this application does not specifically limit the values of the first preset value range and the second preset value range.
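  • Using the example ranges quoted above ((-20, 0) for indoor and (0, 33) for outdoor), the mapping from classification parameter to scene prediction result can be sketched as follows; treating values outside both ranges as undecided is an assumption:

```python
INDOOR_RANGE = (-20.0, 0.0)   # first preset value range (example from the text)
OUTDOOR_RANGE = (0.0, 33.0)   # second preset value range (example from the text)

def predict_scene(classification_param):
    """Map the model's classification parameter onto a scene label.

    Returns 'indoor', 'outdoor', or None when the value falls in neither
    of the two non-overlapping ranges.
    """
    lo, hi = INDOOR_RANGE
    if lo < classification_param < hi:
        return "indoor"
    lo, hi = OUTDOOR_RANGE
    if lo < classification_param < hi:
        return "outdoor"
    return None
```

Because the two ranges do not overlap, every classification parameter maps to at most one scene label.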
  • the first terminal may use the scene prediction result of the current image to perform further processing on the current image. Specifically, the first terminal may use the scene prediction result to perform white balance processing and brightness adjustment processing on the current image.
  • If the scene prediction result is an outdoor scene, the color temperature and the color deviation value (duv) can be set directly, so that a more ideal white balance effect and a higher-quality white-balanced image are obtained. For example, the AWB algorithm can simply set the color temperature to about 5000 K and the color deviation value to 0.001–0.005 to obtain an ideal white balance effect. Better processing results can also be obtained for outdoor scenes lacking a sky reference under low brightness, and for scenes with large areas of pure color.
  • If the scene prediction result is an outdoor scene, when using the AE algorithm to adjust the brightness of the current image there is no need to consider stroboscopic (flicker) effects; the exposure time can be directly reduced to suppress motion blur, so that the adjusted image avoids the blur problem.
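  • A minimal sketch of how the scene prediction result could gate the white-balance and exposure adjustments; the 5000 K color temperature and the 0.001–0.005 duv range come from the text above, while the exposure-halving factor and function names are illustrative assumptions:

```python
def postprocess_params(scene, exposure_time_s):
    """Derive white-balance and exposure settings from the scene prediction.

    For outdoor scenes the color temperature and color deviation (duv) are set
    directly, and the exposure time is reduced to suppress motion blur without
    worrying about flicker. Indoor handling is left to the normal AWB/AE path.
    """
    if scene == "outdoor":
        return {
            "color_temperature_k": 5000,             # value given in the text
            "duv": 0.003,                            # within the 0.001-0.005 range
            "exposure_time_s": exposure_time_s / 2,  # assumed reduction factor
        }
    return {"exposure_time_s": exposure_time_s}      # standard pipeline handles indoor
```

For example, `postprocess_params("outdoor", 1 / 30)` fixes the white-balance targets and halves the exposure time.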
  • In summary, a first terminal detects the first infrared information, the second infrared information, and the visible light component corresponding to the current image through a color temperature sensor, where the first infrared information and the second infrared information are obtained by the color temperature sensor using two different transceiver bands; generates the first infrared characteristic value and the second infrared characteristic value according to the first infrared information, the second infrared information, and the visible light component; and, based on the preset classification model, obtains a scene prediction result according to the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image, so as to perform image processing according to the scene prediction result, where the preset classification model is used to classify multiple scenes according to differences in spectral energy.
  • The image processing method proposed in the embodiment of this application can first use the color temperature sensor to collect the visible light component and two different pieces of infrared information from the spectrum corresponding to the current image, then use them to determine the two corresponding infrared feature values, and, combined with the brightness parameter corresponding to the current image, perform scene prediction on the current image based on a preset classification model, where the preset classification model is obtained through training and testing on the infrared feature data and brightness feature data of the images in the pre-stored image library. That is, the terminal trains the preset classification model with the infrared and brightness features of images, and then uses the infrared and brightness features of the current image to predict its scene based on that model. Image processing can then be performed according to the scene prediction result, which reduces the complexity of prediction and thereby improves prediction efficiency, while also improving the accuracy of scene prediction and thus the image processing effect.
  • When the first terminal obtains the scene prediction result based on the preset classification model according to the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image, it may input the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value into the preset classification model and output the classification parameter. That is, after the first terminal generates the first and second infrared characteristic values from the first infrared information, the second infrared information, and the visible light component, and obtains the brightness parameter, it inputs the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value into the preset classification model, so that the classification parameter corresponding to the current image can be output.
  • The first terminal may use the normalized aperture value parameter, the normalized shutter speed parameter, and the normalized sensitivity parameter as the brightness feature corresponding to the current image, and the first infrared characteristic value and the second infrared characteristic value as the infrared feature corresponding to the current image. That is, the first terminal inputs the brightness feature and the infrared feature corresponding to the current image into the preset classification model, which can then output the classification parameter that characterizes the scene type of the current image.
  • When the first terminal generates the first infrared characteristic value and the second infrared characteristic value according to the first infrared information, the second infrared information, and the visible light component, it may first perform time-frequency conversion processing on the first infrared information and the second infrared information respectively, so as to obtain the first direct current component corresponding to the first infrared information and the second direct current component corresponding to the second infrared information; it may then use the first direct current component, the second direct current component, and the visible light component to generate the first infrared characteristic value and the second infrared characteristic value.
  • When the first terminal generates the first infrared characteristic value IR1, it can calculate it from the second direct current component Dc(FD2) and the visible light component C according to formula (1); when the first terminal generates the second infrared characteristic value IR2, it can calculate it from the first direct current component Dc(FD1) and the second direct current component Dc(FD2) according to formula (2). Here the Dc operator denotes taking the direct current component of the corresponding channel, FD1DC is Dc(FD1), and FD2DC is Dc(FD2).
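  • Formulas (1) and (2) themselves are not reproduced in this text, so the sketch below assumes simple ratio forms consistent with the stated inputs (IR1 from Dc(FD2) and C; IR2 from Dc(FD1) and Dc(FD2)); the actual expressions may differ:

```python
def dc_component(samples):
    """Approximate the DC (zero-frequency) component of a sampled infrared
    channel as its mean value -- the 0 Hz term of a Fourier transform."""
    return sum(samples) / len(samples)

def infrared_features(fd1_samples, fd2_samples, visible_c):
    """Compute the two infrared characteristic values.

    The ratio forms below are assumptions; only the inputs of formulas (1)
    and (2) are stated in the text, not the formulas themselves.
    """
    fd1_dc = dc_component(fd1_samples)  # Dc(FD1), a.k.a. FD1DC
    fd2_dc = dc_component(fd2_samples)  # Dc(FD2), a.k.a. FD2DC
    ir1 = fd2_dc / visible_c            # assumed form of formula (1)
    ir2 = fd1_dc / fd2_dc               # assumed form of formula (2)
    return ir1, ir2
```

IR1 relates infrared energy to the visible light component, and IR2 relates the two infrared bands to each other.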
  • The image processing method proposed in this embodiment of the application may first use a color temperature sensor to collect the visible light component and two different pieces of infrared information from the spectrum corresponding to the current image, then use them to determine the two corresponding infrared feature values, and, combined with the brightness parameter corresponding to the current image, perform scene prediction on the current image based on a preset classification model, where the preset classification model is obtained through training and testing on the infrared feature data and brightness feature data of the images in the pre-stored image library. That is, the terminal trains the preset classification model with the infrared and brightness features of images, and then uses the infrared and brightness features of the current image to predict its scene based on that model. Image processing can then be performed according to the scene prediction result, which reduces the complexity of prediction and thereby improves prediction efficiency, while also improving the accuracy of scene prediction and thus the image processing effect.
  • FIG. 11 is a second schematic diagram of the implementation process of the image processing method.
  • the method for the second terminal to perform image processing may include the following steps:
  • Step 201 Divide the pre-stored image library to obtain training data and test data; wherein, the pre-stored image library stores multiple images of different scenes, and different scenes correspond to different spectral energies.
  • the second terminal may first perform division processing on the pre-stored image library, so as to obtain training data and test data.
  • the pre-stored image library may store multiple images of different scenes, where the multiple images of different scenes in the pre-stored image library correspond to different spectral energies.
  • The second terminal may be any device with communication and storage functions, such as a tablet computer, mobile phone, e-reader, remote control, personal computer (PC), notebook computer, in-vehicle device, Internet TV, or wearable device.
  • the second terminal may be a device for learning and training a preset classification model, where the second terminal may also be a device for performing image processing using the preset classification model. That is, in this application, the first terminal and the second terminal may be the same device.
  • the pre-stored image library can be used to train and test the preset classification model.
  • the pre-stored image library may include multiple images of indoor scenes and multiple images of outdoor scenes.
  • the terminal can randomly divide images of different scenes in the pre-stored image library, so as to obtain training data and test data.
  • the training data and the test data are completely different, that is, the data corresponding to an image in the pre-stored image library can only be training data or test data.
  • the terminal when the terminal obtains training data and test data by dividing the pre-stored image library, it may first divide the images of different scenes in the pre-stored image library into training images and test images.
  • When the second terminal divides the pre-stored image library, it needs to follow the principle that training images and test images do not overlap; that is, any image in the pre-stored image library can be used as either a training image or a test image, but not both.
  • For example, 1024 images of indoor scenes and 1134 images of outdoor scenes are stored in the pre-stored image library of the second terminal. When training the preset classification model, the second terminal can randomly extract 80% of the images from the pre-stored image library as training images and use the remaining 20% as test images.
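  • The random 80%/20% split described above can be sketched as follows; the seeded shuffle is an implementation choice, not part of the text:

```python
import random

def split_library(images, train_frac=0.8, seed=0):
    """Randomly partition a pre-stored image library into disjoint
    training and test sets (80%/20% by default)."""
    rng = random.Random(seed)
    shuffled = images[:]          # copy so the library order is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# e.g. 1024 indoor + 1134 outdoor images, as in the example above
library = [f"img_{i}" for i in range(1024 + 1134)]
train, test = split_library(library)
```

Each image lands in exactly one of the two sets, honoring the no-overlap principle stated above.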
  • After the second terminal divides the pre-stored image library into training images and test images, it can generate the training data from the first infrared feature data and the first brightness feature data corresponding to the training images, and generate the test data from the second infrared feature data and the second brightness feature data corresponding to the test images. That is, the training data includes the infrared information and brightness information corresponding to the training images, namely the first infrared feature data and the first brightness feature data; likewise, the test data includes the infrared information and brightness information corresponding to the test images, namely the second infrared feature data and the second brightness feature data.
  • the first infrared characteristic data may include two different infrared direct current components corresponding to the training image.
  • the second infrared characteristic data may include two different infrared direct current components corresponding to the test image.
  • the first brightness characteristic data may include an aperture value parameter, a shutter speed parameter, and a sensitivity parameter corresponding to the training image.
  • the second brightness characteristic data may include the aperture value parameter, the shutter speed parameter, and the sensitivity parameter corresponding to the test image.
  • It can be seen that the second terminal needs five pieces of feature information when training the preset classification model: the two different infrared direct current components, and the three brightness-characterizing parameters, namely the aperture value parameter, the shutter speed parameter, and the sensitivity parameter.
  • Step 202 Use the training data to train a preset loss function to obtain an initial classification model.
  • the second terminal after the second terminal performs division processing on the pre-stored image library to obtain training data and test data, it may first use the training data to train the preset loss function, so as to obtain the initial classification model.
  • The second terminal may use a typical classification model such as a logistic regression model, a Bayesian classifier, ensemble learning, a decision tree, or an SVM model to train the preset classification model.
  • Specifically, the preset loss function may be the hinge loss (Hinge Loss) shown in formula (3), where y represents the hinge loss.
  • When the second terminal uses the training data to train the preset loss function, it can train the preset loss function according to the first infrared feature data and the first brightness feature data, so that the initial classification model can be obtained. Because the training data includes five pieces of feature information, the second terminal can choose a linear kernel to train the initial classification model; specifically, the step size is 0.01 and the gamma is 60000.
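  • A pure-Python sketch of hinge-loss training for a linear classifier (sub-gradient descent). The 0.01 step size mirrors the text; the epoch count, the toy two-feature data standing in for the five-feature training set, and the omission of the unexplained gamma = 60000 setting are all simplifications:

```python
def hinge_loss(w, b, x, label):
    """Hinge loss for one sample: max(0, 1 - label * (w.x + b)), label in {-1, +1}."""
    margin = label * (sum(wi * xi for wi, xi in zip(w, x)) + b)
    return max(0.0, 1.0 - margin)

def train_linear_svm(data, labels, step=0.01, epochs=500):
    """Plain sub-gradient descent on the hinge loss for a linear model.

    step=0.01 mirrors the step size mentioned in the text; the epoch count
    is an assumption made for this sketch.
    """
    dim = len(data[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, label in zip(data, labels):
            if hinge_loss(w, b, x, label) > 0.0:   # violated margin -> update
                w = [wi + step * label * xi for wi, xi in zip(w, x)]
                b += step * label
    return w, b

# toy 2-feature stand-in: indoor (-1) has low IR energy, outdoor (+1) has high
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = [-1, -1, 1, 1]
w, b = train_linear_svm(X, y)
```

Because the data is linearly separable, the margin updates eventually stop and the sign of `w.x + b` recovers the labels.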
  • Step 203 Obtain a preset classification model according to the test data and the initial classification model; wherein, the preset classification model is used to classify multiple scenes according to different spectral energy.
  • the second terminal uses the training data to train the preset loss function to obtain the initial classification model, it may continue to obtain the preset classification model based on the test data and the initial classification model.
  • the preset classification model may be used to classify multiple scenes according to different spectral energy to obtain the type of the scene.
  • The preset classification model may be a classifier obtained through training based on infrared features and brightness features; that is, the first terminal can use the preset classification model to distinguish outdoor scenes from indoor scenes according to differences in spectral energy.
  • the second terminal after the second terminal completes the training of the initial classification model based on the training data, it can test the initial classification model according to the test data, so as to obtain the preset classification model.
  • When the second terminal obtains the preset classification model based on the test data and the initial classification model, it may first test the initial classification model using the second infrared feature data and the second brightness feature data to obtain a test result, then correct the initial classification model according to the test result, and finally obtain the preset classification model. The test result may be an accuracy parameter: when the second terminal tests the initial classification model with the test data, it obtains the accuracy parameter corresponding to the test data, and if the accuracy parameter is less than a preset accuracy threshold, the second terminal can adjust the initial classification model according to the test data, so that the preset classification model can be obtained.
  • Specifically, the second terminal can feed the test data to the trained initial classification model for testing to verify the accuracy of the model and obtain the accuracy parameter corresponding to the test data; based on the accuracy parameter, it can then feed the wrongly judged test data back into the initial classification model for fine-tuning, thereby improving the generalization of the initial classification model and finally obtaining the preset classification model.
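  • The test-and-refine step might look like the sketch below; the 0.95 accuracy threshold and the callback-style `fine_tune` hook are assumptions standing in for the adjustment the text describes:

```python
def accuracy(model_predict, samples, labels):
    """Fraction of test samples the model classifies correctly."""
    correct = sum(1 for x, t in zip(samples, labels) if model_predict(x) == t)
    return correct / len(samples)

def refine(model_predict, fine_tune, samples, labels, threshold=0.95):
    """If accuracy is below the preset threshold, feed the wrongly judged
    test samples back for fine-tuning, as described above."""
    if accuracy(model_predict, samples, labels) < threshold:
        wrong = [(x, t) for x, t in zip(samples, labels) if model_predict(x) != t]
        fine_tune(wrong)  # adjust the initial model on the misclassified data
    return accuracy(model_predict, samples, labels)
```

The returned accuracy after refinement is what the test-result statistics (Table 1) would report per round.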
  • The second terminal may continuously train different rounds of preset classification models based on the pre-stored image library. Specifically, for different rounds of training, the training data and test data obtained by the second terminal's division differ, and the final results also differ. Table 1 shows the test result statistics: as shown in Table 1, different preset classification models trained on different training and test data have different prediction accuracies for each scene and different overall accuracies.
  • After the second terminal has carried out different rounds of preset classification model training based on the pre-stored image library and obtained different preset classification models, it can select the preset classification model with the best accuracy for image processing.
  • FIG. 12 is a schematic diagram of the spectral energy distribution of fluorescent lamps, FIG. 13 is a schematic diagram of the spectral energy distribution of daylight, and FIG. 14 is a schematic diagram of the spectral energy distribution of incandescent lamps. As shown in FIGS. 12, 13, and 14, it can be seen from the spectral energy distributions of different light sources such as fluorescent lamps, daylight, and incandescent lamps that the energy in the 800 nm–900 nm infrared band is very weak in an indoor fluorescent-lamp scene, while there is still quite strong energy in the 800 nm–900 nm infrared band under daylight. Therefore, the present application can directly use the infrared band information detected by the color temperature sensor to obtain distinctive feature information. That is to say, in this application, the second terminal can use the infrared information obtained by the color temperature sensor as feature information for training the preset classification model; correspondingly, the terminal can use the infrared information obtained by the color temperature sensor as feature information for scene prediction based on the preset classification model.
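  • The discriminative cue illustrated by FIGS. 12–14 can be made concrete as the share of spectral energy in the 800–900 nm band; the coarse spectra below are invented numbers for illustration only, not measured data:

```python
def ir_band_share(spectrum, lo_nm=800, hi_nm=900):
    """Fraction of total spectral energy falling in [lo_nm, hi_nm].

    `spectrum` maps wavelength (nm) to relative energy.
    """
    total = sum(spectrum.values())
    band = sum(e for wl, e in spectrum.items() if lo_nm <= wl <= hi_nm)
    return band / total if total else 0.0

# invented coarse spectra: fluorescent light has almost no 800-900 nm energy,
# daylight retains substantial energy in that band
fluorescent = {450: 8, 550: 10, 610: 9, 850: 0.1}
daylight = {450: 7, 550: 9, 610: 8, 850: 5}
```

The large gap between the two band shares is exactly what makes the infrared channel a distinctive feature for indoor/outdoor classification.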
  • It should be noted that when generating the preset classification model, the terminal trains it based on parameters such as the infrared feature data and brightness feature data corresponding to the pre-stored image library; correspondingly, the terminal may use the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image to determine the scene type of the current image. That is, whether for training or for prediction, the feature information of an image required by the terminal includes both its infrared features and its brightness features.
  • The second terminal divides the pre-stored image library to obtain training data and test data, where the pre-stored image library stores multiple images of different scenes and different scenes correspond to different spectral energies; uses the training data to train the preset loss function to obtain the initial classification model; and obtains the preset classification model according to the test data and the initial classification model.
  • The image processing method proposed in the embodiment of this application can first use the color temperature sensor to collect the visible light component and two different pieces of infrared information from the spectrum corresponding to the current image, then use them to determine the two corresponding infrared feature values, and, combined with the brightness parameter corresponding to the current image, perform scene prediction on the current image based on a preset classification model, where the preset classification model is obtained through training and testing on the infrared feature data and brightness feature data of the images in the pre-stored image library. That is, the terminal trains the preset classification model with the infrared and brightness features of images, and then uses the infrared and brightness features of the current image to predict its scene based on that model. Image processing can then be performed according to the scene prediction result, which reduces the complexity of prediction and thereby improves prediction efficiency, while also improving the accuracy of scene prediction and thus the image processing effect.
  • FIG. 15 is a first schematic diagram of the composition structure of the first terminal. As shown in FIG. 15, the first terminal 1 proposed in the embodiment of the present application may include a detection part 11, a generating part 12, a first acquisition part 13, and a processing part 14.
  • The detection part 11 is configured to detect the first infrared information, the second infrared information, and the visible light component corresponding to the current image through the color temperature sensor, where the first infrared information and the second infrared information are obtained by the color temperature sensor using two different transceiver bands respectively;
  • the generating part 12 is configured to generate a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information, and the visible light component;
  • The first obtaining part 13 is configured to obtain a scene prediction result based on a preset classification model according to the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image, so as to perform image processing according to the scene prediction result, where the preset classification model is used to classify multiple scenes according to differences in spectral energy.
  • Further, the first acquiring part 13 is specifically configured to input the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value into the preset classification model and output the classification parameter; determine that the scene prediction result is an indoor scene when the classification parameter belongs to the first preset value range; and determine that the scene prediction result is an outdoor scene when the classification parameter belongs to the second preset value range, where the first preset value range and the second preset value range do not overlap.
  • Further, the generating part 12 is specifically configured to perform time-frequency conversion processing on the first infrared information to obtain a first direct current component; perform time-frequency conversion processing on the second infrared information to obtain a second direct current component; determine the first infrared characteristic value based on the second direct current component and the visible light component; and determine the second infrared characteristic value based on the first direct current component and the second direct current component.
  • Further, the first acquisition part 13 is also configured to read the attribute parameter corresponding to the current image before the scene prediction result is obtained based on the preset classification model according to the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value, and to normalize the attribute parameter to obtain the brightness parameter.
  • the attribute parameter includes an aperture value parameter, a shutter speed parameter, and a sensitivity parameter.
  • the processing part 14 is specifically configured to use the scene prediction result to perform automatic white balance processing on the current image to obtain a white balanced image.
  • the processing part 14 is further specifically configured to use the scene prediction result to adjust the brightness of the current image to obtain an adjusted image.
  • The first terminal is provided with a front camera on the front cover and a rear camera on the rear cover; the color temperature sensor is provided in a first area of the front cover, where the first area is the area adjacent to the front camera, or the color temperature sensor is arranged in a second area of the back cover, where the second area is the area adjacent to the rear camera.
  • a gap is provided on the top of the first terminal, and the color temperature sensor is provided in the gap.
  • FIG. 16 is a second schematic diagram of the composition structure of the first terminal. As shown in FIG. 16, the first terminal 1 proposed in the embodiment of the present application may further include a first processor 15 and a first memory 16 storing instructions executable by the first processor 15; furthermore, the first terminal 1 may further include a first communication interface 17 and a first bus 18 for connecting the first processor 15, the first memory 16, and the first communication interface 17.
  • In the embodiment of the present application, the above-mentioned first processor 15 may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor. It is understandable that, for different devices, the electronic device used to implement the above processor functions may also be something else, which is not specifically limited in the embodiment of the present application.
  • the first terminal 1 may also include a first memory 16, which may be connected to the first processor 15, where the first memory 16 is used to store executable program code, and the program code includes computer operation instructions.
  • the memory 16 may include a high-speed RAM memory, and may also include a non-volatile memory, for example, at least two disk memories.
  • The first bus 18 is used to connect the first communication interface 17, the first processor 15, and the first memory 16, and to enable mutual communication among these devices.
  • the first memory 16 is used to store instructions and data.
  • In the embodiment of the present application, the above-mentioned first processor 15 is configured to detect the first infrared information, the second infrared information, and the visible light component corresponding to the current image through the color temperature sensor, where the first infrared information and the second infrared information are obtained by the color temperature sensor using two different transceiver bands; generate the first infrared characteristic value and the second infrared characteristic value according to the first infrared information, the second infrared information, and the visible light component; and, based on a preset classification model, obtain a scene prediction result according to the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image, so as to perform image processing according to the scene prediction result, where the preset classification model is used to classify multiple scenes according to differences in spectral energy.
  • In addition, the aforementioned first memory 16 may be a volatile memory, such as random-access memory (RAM); or a non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or a combination of the above types of memory, and it provides instructions and data to the first processor 15.
  • the functional modules in this embodiment may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware or in the form of a software function module.
  • if the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • on this understanding, the technical solution of this embodiment, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer) to execute all or part of the method described in this embodiment.
  • the aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • An embodiment of the present application proposes a first terminal, which detects the first infrared information, the second infrared information, and the visible light component corresponding to the current image through a color temperature sensor, where the first infrared information and the second infrared information are acquired by the color temperature sensor using two different transmitting and receiving bands; generates the first infrared characteristic value and the second infrared characteristic value according to the first infrared information, the second infrared information, and the visible light component; and obtains, based on a preset classification model, a scene prediction result according to the brightness parameter corresponding to the current image, the first infrared characteristic value, and the second infrared characteristic value, so as to perform image processing according to the scene prediction result; the preset classification model is used to classify multiple scenes according to differences in spectral energy.
  • the image processing method proposed in the embodiments of this application first uses the color temperature sensor to collect the visible light component and two different pieces of infrared information from the spectrum corresponding to the current image, then uses them to determine two corresponding infrared characteristic values, and, combined with the brightness parameter corresponding to the current image, performs scene prediction for the current image based on a preset classification model, where the preset classification model is obtained through training and testing on the infrared feature data and brightness feature data of the images in a pre-stored image library.
  • in other words, the terminal trains the preset classification model with the infrared and brightness features of images and then, based on that model, predicts the scene of the current image from its infrared and brightness features, so that image processing can be performed according to the scene prediction result; this reduces the complexity of prediction, improves prediction efficiency, and at the same time improves the accuracy of scene prediction, thereby improving the image processing effect.
  • FIG. 17 is a first schematic diagram of the composition structure of the second terminal.
  • the second terminal 2 proposed in the embodiment of the present application may include a dividing part 21 and a second acquisition part 22.
  • the dividing part 21 is configured to divide the pre-stored image library to obtain training data and test data; wherein, the pre-stored image library stores multiple images of different scenes, and different scenes correspond to different spectral energies;
  • the second acquisition part 22 is configured to train a preset loss function using the training data to obtain an initial classification model; and obtain a preset classification model according to the test data and the initial classification model; wherein The preset classification model is used to classify multiple scenes according to different spectral energies.
  • the dividing part 21 is specifically configured to divide the multiple images into training images and test images; generate the training data according to the first infrared feature data and the first brightness feature data corresponding to the training images; and generate the test data according to the second infrared feature data and the second brightness feature data corresponding to the test images.
  • the second acquiring part 22 is specifically configured to train the preset loss function according to the first infrared characteristic data and the first brightness characteristic data to obtain the The initial classification model.
  • the second acquisition part 22 is also specifically configured to test the initial classification model using the second infrared feature data and the second brightness feature data to obtain a test result, and to correct the initial classification model according to the test result to obtain the preset classification model.
  • FIG. 18 is a second schematic diagram of the composition structure of the second terminal.
  • the second terminal 2 proposed in the embodiment of the present application may further include a second processor 23 and a second memory 24 storing instructions executable by the second processor 23.
  • the second terminal 2 may further include a second communication interface 25 and a second bus 26 for connecting the second processor 23, the second memory 24, and the second communication interface 25.
  • the second memory 24 may be connected to the second processor 23, where the second memory 24 is used to store executable program code.
  • the code includes computer operation instructions.
  • the second memory 24 may include a high-speed RAM memory, or may also include a non-volatile memory, for example, at least two disk memories.
  • the second bus 26 is used to connect the second communication interface 25, the second processor 23, and the second memory 24, and to carry the communication among these devices.
  • the second memory 24 is used to store instructions and data.
  • the above-mentioned second processor 23 is configured to: divide the pre-stored image library to obtain training data and test data, where the pre-stored image library stores multiple images of different scenes and different scenes correspond to different spectral energies; train the preset loss function using the training data to obtain the initial classification model; and obtain the preset classification model according to the test data and the initial classification model; wherein
  • the preset classification model is used to classify multiple scenes according to differences in spectral energy.
  • An embodiment of the present application proposes a second terminal, which divides a pre-stored image library to obtain training data and test data, where the pre-stored image library stores multiple images of different scenes and different scenes correspond to different spectral energies; trains the preset loss function using the training data to obtain the initial classification model; and obtains the preset classification model according to the test data and the initial classification model.
  • the image processing method proposed in the embodiments of this application first uses the color temperature sensor to collect the visible light component and two different pieces of infrared information from the spectrum corresponding to the current image, then uses them to determine two corresponding infrared characteristic values, and, combined with the brightness parameter corresponding to the current image, performs scene prediction for the current image based on a preset classification model, where the preset classification model is obtained through training and testing on the infrared feature data and brightness feature data of the images in a pre-stored image library.
  • in other words, the terminal trains the preset classification model with the infrared and brightness features of images and then, based on that model, predicts the scene of the current image from its infrared and brightness features, so that image processing can be performed according to the scene prediction result; this reduces the complexity of prediction, improves prediction efficiency, and at the same time improves the accuracy of scene prediction, thereby improving the image processing effect.
  • the embodiment of the present application provides a computer-readable storage medium on which a program is stored, and when the program is executed by a processor, the above-mentioned image processing method is realized.
  • specifically, the program instructions corresponding to an image processing method in this embodiment may be stored on storage media such as optical disks, hard disks, or USB flash drives; when the program instructions are read and executed by an electronic device, they include the following steps:
  • the first infrared information, the second infrared information, and the visible light component corresponding to the current image are detected by the color temperature sensor; wherein the first infrared information and the second infrared information are obtained by the color temperature sensor using two different transceiver bands respectively of;
  • the first infrared characteristic value and the second infrared characteristic value are generated according to the first infrared information, the second infrared information, and the visible light component; based on the preset classification model, a scene prediction result is obtained according to the brightness parameter corresponding to the current image, the first infrared characteristic value, and the second infrared characteristic value, so as to perform image processing according to the scene prediction result; wherein, the preset classification model is used to classify multiple scenes according to differences in spectral energy.
  • the method further includes the following steps:
  • the pre-stored image library is divided to obtain training data and test data; wherein, the pre-stored image library stores multiple images of different scenes, and different scenes correspond to different spectral energies;
  • the preset loss function is trained using the training data to obtain an initial classification model, and a preset classification model is obtained according to the test data and the initial classification model; wherein, the preset classification model is used to classify multiple scenes according to different spectral energies.
  • this application can be provided as a method, a system, or a computer program product. Therefore, this application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, optical storage, and the like) containing computer-usable program code.
  • these computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device.
  • the instruction device realizes the functions specified in one or more flows of the schematic diagram and/or one or more blocks of the block diagram.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing.
  • the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more flows of the schematic diagram and/or one or more blocks of the block diagram.
  • the embodiments of the present application provide an image processing method, terminal, and storage medium.
  • the first terminal detects the first infrared information, the second infrared information, and the visible light component corresponding to the current image through a color temperature sensor, where the first infrared information and the second infrared information are acquired by the color temperature sensor using two different transmitting and receiving bands; generates the first infrared characteristic value and the second infrared characteristic value according to the first infrared information, the second infrared information, and the visible light component; and obtains, based on a preset classification model, a scene prediction result according to the brightness parameter corresponding to the current image, the first infrared characteristic value, and the second infrared characteristic value, so as to perform image processing according to the scene prediction result; the preset classification model is used to classify multiple scenes according to differences in spectral energy.
  • the second terminal divides the pre-stored image library to obtain training data and test data, where the pre-stored image library stores multiple images of different scenes and different scenes correspond to different spectral energies; trains the preset loss function using the training data to obtain the initial classification model; and obtains the preset classification model according to the test data and the initial classification model.
  • the image processing method proposed in the embodiments of this application first uses the color temperature sensor to collect the visible light component and two different pieces of infrared information from the spectrum corresponding to the current image, then uses them to determine two corresponding infrared characteristic values, and, combined with the brightness parameter corresponding to the current image, performs scene prediction for the current image based on a preset classification model, where the preset classification model is obtained through training and testing on the infrared feature data and brightness feature data of the images in a pre-stored image library.
  • in other words, the terminal trains the preset classification model with the infrared and brightness features of images and then, based on that model, predicts the scene of the current image from its infrared and brightness features, so that image processing can be performed according to the scene prediction result; this reduces the complexity of prediction, improves prediction efficiency, and at the same time improves the accuracy of scene prediction, thereby improving the image processing effect.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

An image processing method, a terminal, and a storage medium. The image processing method includes: detecting, through a color temperature sensor, first infrared information, second infrared information, and a visible light component corresponding to a current image, where the first infrared information and the second infrared information are acquired by the color temperature sensor using two different transmitting and receiving bands (101); generating a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information, and the visible light component (102); and obtaining, based on a preset classification model, a scene prediction result according to a brightness parameter corresponding to the current image, the first infrared characteristic value, and the second infrared characteristic value, so as to perform image processing according to the scene prediction result, where the preset classification model is used to classify multiple scenes according to differences in spectral energy (103).

Description

Image processing method, terminal, and storage medium
This application is based on, and claims priority to, the earlier Chinese patent application with application number 201911271535.1, filed on December 12, 2019, and entitled "图像处理方法、终端及存储介质" (Image Processing Method, Terminal, and Storage Medium); the entire contents of that earlier Chinese patent application are incorporated herein by reference.
Technical Field
The embodiments of this application relate to the field of image processing technology, and in particular to an image processing method, a terminal, and a storage medium.
Background
When performing image processing, if the scene of the current image can be determined, such as an indoor scene or an outdoor scene, it helps to achieve better image restoration. In other words, scene prediction is one of the important functions a terminal needs for image processing. At present, when performing scene prediction, a terminal may deploy additional auxiliary devices to collect specific data and then recognize the scene, or it may distinguish scenes by means of image processing methods.
However, scene prediction with additional auxiliary devices is costly to deploy and requires complex preparation, which greatly limits the universality, ease of use, and convenience of scene prediction; current image-processing-based scene prediction methods have high computational complexity, which reduces prediction efficiency, and their scene prediction accuracy is poor, which degrades the image processing effect.
Summary
The embodiments of this application provide an image processing method, a terminal, and a storage medium, which can reduce the complexity of prediction and thereby improve prediction efficiency, while improving the accuracy of scene prediction and thus the image processing effect.
The technical solutions of the embodiments of this application are implemented as follows:
In a first aspect, an embodiment of this application provides an image processing method applied to a first terminal, the method including:
detecting, through a color temperature sensor, first infrared information, second infrared information, and a visible light component corresponding to a current image, where the first infrared information and the second infrared information are acquired by the color temperature sensor using two different transmitting and receiving bands;
generating a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information, and the visible light component;
obtaining, based on a preset classification model, a scene prediction result according to a brightness parameter corresponding to the current image, the first infrared characteristic value, and the second infrared characteristic value, so as to perform image processing according to the scene prediction result, where the preset classification model is used to classify multiple scenes according to differences in spectral energy.
In a second aspect, an embodiment of this application provides an image processing method applied to a second terminal, the method including:
dividing a pre-stored image library to obtain training data and test data, where the pre-stored image library stores multiple images of different scenes, and different scenes correspond to different spectral energies;
training a preset loss function with the training data to obtain an initial classification model;
obtaining a preset classification model according to the test data and the initial classification model, where the preset classification model is used to classify multiple scenes according to differences in spectral energy.
In a third aspect, an embodiment of this application provides a first terminal, the first terminal including a detection part, a generating part, and a first acquisition part;
the detection part is configured to detect, through a color temperature sensor, first infrared information, second infrared information, and a visible light component corresponding to a current image, where the first infrared information and the second infrared information are acquired by the color temperature sensor using two different transmitting and receiving bands;
the generating part is configured to generate a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information, and the visible light component;
the first acquisition part is configured to obtain, based on a preset classification model, a scene prediction result according to a brightness parameter corresponding to the current image, the first infrared characteristic value, and the second infrared characteristic value, so as to perform image processing according to the scene prediction result, where the preset classification model is used to classify multiple scenes according to differences in spectral energy.
In a fourth aspect, an embodiment of this application provides a second terminal, the second terminal including a dividing part, a second acquisition part, and a processing part;
the dividing part is configured to divide a pre-stored image library to obtain training data and test data, where the pre-stored image library stores multiple images of different scenes, and different scenes correspond to different spectral energies;
the second acquisition part is configured to train a preset loss function with the training data to obtain an initial classification model, and to obtain a preset classification model according to the test data and the initial classification model, where the preset classification model is used to classify multiple scenes according to differences in spectral energy.
In a fifth aspect, an embodiment of this application provides a first terminal including a first processor and a first memory storing instructions executable by the first processor; when the instructions are executed by the first processor, the image processing method described above is implemented.
In a sixth aspect, an embodiment of this application provides a second terminal including a second processor and a second memory storing instructions executable by the second processor; when the instructions are executed by the second processor, the image processing method described above is implemented.
In a seventh aspect, an embodiment of this application provides a computer-readable storage medium on which a program is stored, applied to the first terminal and the second terminal; when the program is executed by a processor, the image processing method described above is implemented.
The embodiments of this application provide an image processing method, a terminal, and a storage medium. The first terminal detects, through a color temperature sensor, first infrared information, second infrared information, and a visible light component corresponding to the current image, where the first infrared information and the second infrared information are acquired by the color temperature sensor using two different transmitting and receiving bands; generates a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information, and the visible light component; and obtains, based on a preset classification model, a scene prediction result according to the brightness parameter corresponding to the current image, the first infrared characteristic value, and the second infrared characteristic value, so as to perform image processing according to the scene prediction result, where the preset classification model is used to classify multiple scenes according to differences in spectral energy. The second terminal divides a pre-stored image library to obtain training data and test data, where the pre-stored image library stores multiple images of different scenes and different scenes correspond to different spectral energies; trains a preset loss function with the training data to obtain an initial classification model; and obtains the preset classification model according to the test data and the initial classification model. It can thus be seen that the image processing method proposed in the embodiments of this application first uses the color temperature sensor to collect the visible light component and two different pieces of infrared information from the spectrum corresponding to the current image, then uses them to determine two corresponding infrared characteristic values, and, combined with the brightness parameter corresponding to the current image, performs scene prediction for the current image based on the preset classification model, where the preset classification model is obtained by training and testing on the infrared feature data and brightness feature data of the images in the pre-stored image library. In other words, in this application, the terminal trains the preset classification model with the infrared and brightness features of images, then predicts the scene of the current image based on that model from its infrared and brightness features, and can then perform image processing according to the scene prediction result; this reduces the complexity of prediction, improves prediction efficiency, and at the same time improves the accuracy of scene prediction, thereby improving the image processing effect.
Brief Description of the Drawings
FIG. 1 is a first schematic flowchart of the implementation of the image processing method;
FIG. 2 is a first schematic diagram of the position of the color temperature sensor;
FIG. 3 is a second schematic diagram of the position of the color temperature sensor;
FIG. 4 is a schematic diagram of a current arrangement of a color temperature sensor;
FIG. 5 is a third schematic diagram of the position of the color temperature sensor;
FIG. 6 is a fourth schematic diagram of the position of the color temperature sensor;
FIG. 7 is a schematic diagram of the spectral response curves of the color temperature sensor;
FIG. 8 is a schematic diagram of different detection channels;
FIG. 9 is a schematic diagram of the time-domain signal before time-frequency transformation;
FIG. 10 is a schematic diagram of the frequency-domain signal after time-frequency transformation;
FIG. 11 is a second schematic flowchart of the implementation of the image processing method;
FIG. 12 is a schematic diagram of the spectral energy distribution of a fluorescent lamp;
FIG. 13 is a schematic diagram of the spectral energy distribution of daylight;
FIG. 14 is a schematic diagram of the spectral energy distribution of an incandescent lamp;
FIG. 15 is a first schematic diagram of the composition structure of the first terminal;
FIG. 16 is a second schematic diagram of the composition structure of the first terminal;
FIG. 17 is a first schematic diagram of the composition structure of the second terminal;
FIG. 18 is a second schematic diagram of the composition structure of the second terminal.
Detailed Description
The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the accompanying drawings. It should be understood that the specific embodiments described here are only used to explain the related application, not to limit it. It should also be noted that, for ease of description, only the parts relevant to the application are shown in the drawings.
There are many schemes for a terminal to perform scene prediction. Specifically, there are methods based on external devices, such as wireless network (Wireless Fidelity, Wi-Fi), light-sensing, and infrared devices, as well as methods based on the image itself. The latter can be further divided into traditional threshold-based classification methods and machine-learning-based methods.
In different scenes, the way a terminal performs image processing may differ. For example, in an indoor scene, automatic exposure (AE) needs to always consider enabling an anti-flicker strategy against the mains frequency, while for a low-brightness outdoor scene, a more suitable automatic white balance (AWB) algorithm needs to be selected to restore the image. For example, in the AWB algorithm, if the current light source can be determined to be an outdoor light source, the AWB color temperature can simply be set to the D55 position and the picture will get a good color restoration effect.
It can be seen that a good scene prediction method can help the AWB algorithm easily achieve a good restoration effect; whether for a low-brightness outdoor scene or a high-brightness indoor scene, it can reduce the restoration difficulty of the AWB algorithm itself. Correspondingly, in the AE algorithm, if the scene corresponding to the current image can be accurately determined to be outdoor, the anti-flicker problem does not need to be considered at all, which provides more flexibility.
At present, when image processing methods are used for scene prediction, on the one hand, feature extraction relies on full-size images (e.g., 4000×3000) and applies multi-scale filtering to extract a large number of structural features, whereas the image signal processing (ISP) of portable terminals such as mobile phones usually only provides small-size images (e.g., 120×90); the accuracy of features obtained with filtering methods designed for full-size images is then greatly reduced, which lowers the accuracy of scene prediction. On the other hand, image processing methods extract high-dimensional, structure-related features from the current image; the number of features is usually large, and real-time processing is difficult on portable terminals such as mobile phones, which reduces the efficiency of scene prediction.
Further, in terms of actual effect, the prediction accuracy of complex structural features decreases when facing irregularly segmented sky, solid-color scenes, or indoor man-made structures.
Scene recognition algorithms based on YUV data sit after the demosaic algorithm on the ISP and lean toward the scene as finally seen; due to a deviation in the time domain, they cannot be well used by the AE, AWB, and automatic focus (AF) at the ISP front end.
In summary, in the prior art, image-processing-based scene prediction methods have high computational complexity, which reduces prediction efficiency, and their scene prediction accuracy is poor. To address these drawbacks, the embodiments of this application propose an image processing method that first uses a color temperature sensor to collect the visible light component and two different pieces of infrared information from the spectrum corresponding to the current image, then uses them to determine two corresponding infrared characteristic values, and, combined with the brightness parameter corresponding to the current image, performs scene prediction for the current image based on a preset classification model, where the preset classification model is obtained by training and testing on the infrared feature data and brightness feature data of the images in a pre-stored image library. In other words, in this application, the terminal trains the preset classification model with the infrared and brightness features of images, then predicts the scene of the current image based on that model from its infrared and brightness features, and can then perform image processing according to the scene prediction result; this reduces the complexity of prediction, improves prediction efficiency, and at the same time improves the accuracy of scene prediction, thereby improving the image processing effect.
The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the accompanying drawings.
An embodiment of this application provides an image processing method. FIG. 1 is a first schematic flowchart of the implementation of the image processing method. As shown in FIG. 1, in the embodiment of this application, the method for the first terminal to perform image processing may include the following steps:
Step 101: Detect, through a color temperature sensor, first infrared information, second infrared information, and a visible light component corresponding to the current image, where the first infrared information and the second infrared information are acquired by the color temperature sensor using two different transmitting and receiving bands.
In the embodiment of this application, the first terminal may first obtain the first infrared information, the second infrared information, and the visible light component through the configured color temperature sensor. The first infrared information and the second infrared information may be different infrared data acquired by the color temperature sensor using two different transmitting and receiving bands.
It should be noted that, in the embodiment of this application, the first terminal may be any device with communication and storage functions, for example: a tablet computer, a mobile phone, an e-reader, a remote control, a personal computer (PC), a notebook computer, an in-vehicle device, a network television, a wearable device, and the like.
Specifically, the first terminal may be a device that performs image processing using the preset classification model; at the same time, the first terminal may also be the device that trains the preset classification model.
Further, in the embodiment of this application, the first terminal may be provided with a shooting device for image collection; specifically, the first terminal may be provided with at least one front camera and at least one rear camera.
It can be understood that, in the embodiment of this application, the current image may be captured by the shooting device provided on the first terminal.
It should be noted that, in the embodiment of this application, the first terminal may also be provided with a color temperature sensor. Specifically, the first terminal may place the color temperature sensor on the side of the front camera or on the side of the rear camera. Specifically, the terminal is provided with a front camera on the front cover and a rear camera on the back cover; accordingly, the color temperature sensor may be arranged in a first region of the front cover, where the first region denotes the region adjacent to the front camera; alternatively, the color temperature sensor may be arranged in a second region of the back cover, where the second region denotes the region adjacent to the rear camera.
Exemplarily, FIG. 2 is a first schematic diagram of the position of the color temperature sensor, and FIG. 3 is a second. As shown in FIG. 2, a color temperature sensor is arranged on the left side of the front camera of the first terminal; as shown in FIG. 3, a color temperature sensor is arranged on the lower side of the rear camera of the first terminal.
At present, common arrangements place the color temperature sensor in the notch region of a full screen. Specifically, FIG. 4 is a schematic diagram of a current arrangement of a color temperature sensor; as shown in FIG. 4, the terminal places the color temperature sensor under the ink of the notch region. However, a color temperature sensor arranged in the notch region requires the terminal to enlarge the ink hole, which has a considerable impact on the industrial design (ID) appearance.
In contrast, in the embodiment of this application, the top of the first terminal may be provided with a slit, so the first terminal may arrange the color temperature sensor in the slit at the top. FIG. 5 is a third schematic diagram of the position of the color temperature sensor, and FIG. 6 is a fourth. As shown in FIGS. 5 and 6, the color temperature sensor is arranged in the slit at the top of the first terminal; whether on the front of the first terminal (FIG. 5) or the back (FIG. 6), the color temperature sensor does not affect the appearance of the first terminal, and a sensor arranged in the slit does not require the first terminal to enlarge the ink hole.
Further, in the embodiment of this application, the first terminal may detect, through the configured color temperature sensor, environmental parameters corresponding to the current image. Specifically, the color temperature sensor can detect parameters corresponding to the current image such as red R, green G, blue B, visible light C, full spectrum WB (wide band), correlated colour temperature (cct), and the flicker frequency (Flicker Frequency, FD) of two channels, namely FD1 and FD2.
FIG. 7 is a schematic diagram of the spectral response curves of the color temperature sensor. As shown in FIG. 7, as the wavelength changes, the spectral response curves corresponding to R, G, B, C, WB, FD1, and FD2 detected by the color temperature sensor change differently.
It should be noted that the first infrared information and the second infrared information are different. Specifically, the first infrared information may be used to measure the intensity of the 800 nm–900 nm infrared band, and the second infrared information may be used to measure the intensity of the 950 nm–1000 nm infrared band.
Further, the color temperature sensor configured on the first terminal can detect the infrared light in the environment corresponding to the current image through different transmitting and receiving bands, thereby obtaining the first infrared information and the second infrared information. FIG. 8 is a schematic diagram of the different detection channels; as shown in FIG. 8, the first terminal may detect the infrared band at the two frequencies of 50 Hz and 60 Hz, respectively.
It should be noted that the first terminal may obtain, through the color temperature sensor, the first time-domain information detected by the first infrared channel, i.e., the first infrared information; at the same time, the first terminal may also obtain, through the color temperature sensor, the second time-domain information detected by the second infrared channel, i.e., the second infrared information.
Correspondingly, the first terminal may also obtain the component of the visible light band through the color temperature sensor, i.e., the visible light component.
Step 102: Generate a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information, and the visible light component.
In the embodiment of this application, after detecting the first infrared information, the second infrared information, and the visible light component with the color temperature sensor, the first terminal may directly generate, from them, the first infrared characteristic value and the second infrared characteristic value corresponding to the current image.
It should be noted that, when generating the first and second infrared characteristic values, the first terminal may first perform time-frequency transformation on the first infrared information to obtain a first direct-current (DC) component corresponding to it, and perform time-frequency transformation on the second infrared information to obtain a second DC component corresponding to it. FIG. 9 is a schematic diagram of the time-domain signal before time-frequency transformation, and FIG. 10 is a schematic diagram of the frequency-domain signal after time-frequency transformation; as shown in FIGS. 9 and 10, after time-frequency transformation, the time-domain infrared information can be converted into the corresponding DC component.
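The DC component referred to here is the zero-frequency bin of the channel's time-frequency transform, which for a real sampled signal is proportional to the mean of the time-domain samples. A minimal sketch in Python (the sample values and variable names are illustrative, not from the patent):

```python
def dc_component(samples):
    """Zero-frequency (DC) bin of the DFT, normalized by the number
    of samples, i.e. the mean level of the time-domain signal."""
    # X[0] = sum(x[n]); dividing by N gives the average level.
    return sum(samples) / len(samples)

# Illustrative time-domain readings from the two infrared channels.
fd1_samples = [0.8, 1.2, 1.0, 0.9, 1.1, 1.0]
fd2_samples = [0.4, 0.6, 0.5, 0.45, 0.55, 0.5]

dc_fd1 = dc_component(fd1_samples)  # Dc(FD1)
dc_fd2 = dc_component(fd2_samples)  # Dc(FD2)
```

The oscillating part of each channel (the 50 Hz/60 Hz flicker) averages out, leaving only the ambient infrared level that the later feature formulas consume.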
Further, after performing time-frequency transformation on the first infrared information and the second infrared information respectively to obtain the first DC component and the second DC component, the first terminal can use the first DC component, the second DC component, and the visible light component to further generate the first infrared characteristic value and the second infrared characteristic value.
It should be noted that the first terminal may calculate the first infrared characteristic value using the second DC component and the visible light component, and may calculate the second infrared characteristic value using the first DC component and the second DC component.
Further, the first infrared characteristic may be used to measure the intensity of the 800 nm–900 nm infrared band, and the second infrared characteristic may be used to measure the intensity of the 950 nm–1000 nm infrared band.
Step 103: Based on a preset classification model, obtain a scene prediction result according to the brightness parameter corresponding to the current image, the first infrared characteristic value, and the second infrared characteristic value, so as to perform image processing according to the scene prediction result; the preset classification model is used to classify multiple scenes according to differences in spectral energy.
In the embodiment of this application, after generating the first and second infrared characteristic values, the first terminal can, based on the preset classification model, use the brightness parameter corresponding to the current image together with the first and second infrared characteristic values to obtain the scene prediction result corresponding to the current image, and then perform image processing on the current image according to that result.
It should be noted that the preset classification model may be used to classify multiple scenes according to differences in spectral energy to obtain the scene type. Specifically, the preset classification model may be a classifier trained by the first terminal based on infrared features and brightness features. That is, the first terminal can use the preset classification model to distinguish outdoor scenes from indoor scenes according to differences in spectral energy.
It can be understood that, from the spectral energy distributions of different light sources such as fluorescent lamps, daylight, and incandescent lamps, it can be seen that under an indoor fluorescent lamp the energy in the 800 nm–900 nm infrared band is very weak, whereas under daylight there is still quite strong energy in the 800 nm–900 nm band, which begins to attenuate sharply after 950 nm; by contrast, the energy of an incandescent lamp grows increasingly strong across the 800 nm–1000 nm infrared band. Therefore, this application can directly use the infrared band information detected by the color temperature sensor to obtain discriminative feature information. That is, the first terminal can, according to differences in spectral energy, use the infrared information obtained by the color temperature sensor as the feature information for scene prediction based on the preset classification model.
Further, the preset classification model used for scene prediction may be a typical classification model such as a logistic regression model, a Bayesian classifier, ensemble learning, a decision tree, or a support vector machine (SVM) model.
Exemplarily, the first terminal may train the preset classification model based on parameters such as the infrared feature data and brightness feature data corresponding to the pre-stored image library, so that the trained model can output, based on the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image, the classification parameter corresponding to the current image.
Further, before obtaining the scene prediction result based on the preset classification model according to the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image, so as to perform image processing according to the scene prediction result, the first terminal needs to first acquire the brightness parameter corresponding to the current image. Specifically, the first terminal may read the corresponding attribute parameters from the attribute information corresponding to the current image, and then use the attribute parameters to determine the brightness parameter corresponding to the current image.
It should be noted that the attribute parameters may be the specific parameters corresponding to the image obtained by the shooting device when the first terminal captured the current image. Specifically, the attribute parameters may include an aperture value parameter, a shutter speed parameter, and a sensitivity parameter.
Specifically, the aperture value parameter Av is a quantified expression of the aperture value F_number; the aperture is usually expressed as an F value, and Av may be expressed as log(F_number). The shutter speed parameter Tv is a quantified expression of the shutter speed, which is usually expressed as the fraction 1/Shutter_Speed; Tv may be expressed as log(1/Shutter_Speed). The sensitivity parameter Sv is a quantified expression of the sensitivity (ISO); Sv may be expressed as log(ISO).
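The three attribute parameters above are logarithmic quantizations of aperture, shutter speed, and ISO. A sketch following the stated definitions Av = log(F_number), Tv = log(1/Shutter_Speed), Sv = log(ISO); the patent does not name the logarithm base, so base 2 (as in the APEX convention) is assumed here, and the example values are illustrative:

```python
import math

def exposure_params(f_number, shutter_speed, iso):
    """Quantized attribute parameters per the description:
    Av = log(F_number), Tv = log(1/Shutter_Speed), Sv = log(ISO)."""
    av = math.log2(f_number)
    tv = math.log2(1.0 / shutter_speed)
    sv = math.log2(iso)
    return av, tv, sv

# e.g. f/2.0, 1/100 s exposure, ISO 400 (illustrative values)
av, tv, sv = exposure_params(2.0, 1.0 / 100.0, 400)
```

Under outdoor daylight, Av and Tv come out larger and Sv smaller than indoors, which is exactly the separation the classifier exploits.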
Further, the AE algorithm usually adjusts the brightness of the image by adjusting the aperture size, the shutter speed, and the sensitivity. Under outdoor natural light, the Av value is greater than the indoor Av value and the Tv value is greater than the indoor Tv value, while the Sv value under outdoor natural light is smaller than the indoor Sv value.
Specifically, the attribute information is set by the first terminal for the images captured by its shooting device and is used to store the attribute information and shooting data of the recorded image. In other words, the attribute information includes the attribute information and shooting data corresponding to the current image.
For example, the first terminal reads the attribute parameters such as Av, Tv, and Sv corresponding to the current image from a pre-stored exchangeable image file format (Exif).
It can be understood that the aperture value parameter, the shutter speed parameter, and the sensitivity parameter can effectively reflect the brightness of the scene in which the current image is located; therefore, the brightness parameter can be further determined from them in order to predict the scene corresponding to the current image.
It should be noted that, after acquiring the attribute parameters corresponding to the current image, the first terminal may first normalize the attribute parameters and then obtain the brightness parameter.
It should be noted that normalization is a dimensionless processing technique that turns the absolute values of a physical system into some relative-value relationship; normalization has become an effective way to simplify calculation and reduce magnitudes.
Further, since the preset classification model is trained by the first terminal based on parameters such as the infrared feature data and brightness feature data corresponding to the pre-stored image library, the first terminal needs to first acquire the brightness parameter corresponding to the current image when using the preset classification model to predict the scene of the current image.
It should be noted that, although the aperture value parameter, the shutter speed parameter, and the sensitivity parameter are the specific attribute parameters of the shooting device when the first terminal captured the current image, their values differ considerably across scenes; for example, the Av and Tv values under outdoor natural light are greater than the indoor values, while the outdoor Sv value is smaller than the indoor one. Therefore, when predicting the scene of the current image, the first terminal may first normalize the aperture value parameter, the shutter speed parameter, and the sensitivity parameter corresponding to the current image respectively, and then input the normalized parameters into the preset classification model as the brightness feature information.
It should be noted that, since the preset classification model is trained on infrared feature data and brightness feature data, when the first terminal uses it for scene prediction it needs, in addition to the brightness parameter corresponding to the current image, the infrared parameters corresponding to the current image; therefore, on the basis of the brightness parameter, the first terminal combines the first and second infrared characteristic values that represent the infrared parameters, so as to obtain the classification parameter corresponding to the current image, where the classification parameter is used to predict the scene.
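The patent states that the attribute parameters are normalized but does not specify the method; min–max scaling against ranges observed in the pre-stored image library is one common choice. A sketch under that assumption (the range bounds are illustrative, not from the patent):

```python
def min_max_normalize(value, lo, hi):
    """Scale a physical quantity into [0, 1] relative to the
    range [lo, hi] observed across the pre-stored image library."""
    return (value - lo) / (hi - lo)

# Illustrative ranges for Av, Tv, Sv.
av_n = min_max_normalize(1.0, 0.0, 4.0)   # normalized aperture feature
tv_n = min_max_normalize(6.6, 0.0, 12.0)  # normalized shutter feature
sv_n = min_max_normalize(8.6, 4.0, 12.0)  # normalized ISO feature
```

Whatever scheme is used, the same bounds must be applied at training time and at prediction time so that the features remain comparable.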
It can be understood that, when obtaining the scene prediction result based on the preset classification model according to the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value, the first terminal may first obtain the classification parameter based on the preset classification model, and then determine the scene prediction result corresponding to the current image according to the classification parameter.
It should be noted that, after the preset classification model outputs the classification parameter corresponding to the current image, the first terminal may directly use the classification parameter to determine the scene prediction result. Specifically, the first terminal may perform scene prediction using the classification parameter to obtain the scene prediction result, which may be an indoor scene or an outdoor scene.
It can be understood that, when performing scene prediction with the classification parameter, the first terminal may regard the scene prediction result as an indoor scene when the classification parameter falls within a first preset numerical range, and as an outdoor scene when it falls within a second preset numerical range.
Exemplarily, corresponding to the preset classification model, the terminal may set a first preset numerical range and a second preset numerical range, where the two ranges do not overlap; for example, the first preset numerical range may be set to (-20, 0), and the second preset numerical range may be set to (0, 33).
It should be noted that the settings of the first and second preset numerical ranges correspond to the preset classification model; that is, for different preset classification models, the ranges set by the terminal may also differ, so this application does not specifically limit their values.
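The decision step above can be sketched as a pair of range checks, using the example ranges (-20, 0) and (0, 33) given in the text; as noted, the actual ranges depend on the trained model:

```python
INDOOR_RANGE = (-20.0, 0.0)   # first preset numerical range (example)
OUTDOOR_RANGE = (0.0, 33.0)   # second preset numerical range (example)

def predict_scene(classification_param):
    """Map the model's classification parameter to a scene label.
    Values outside both ranges yield None (undetermined)."""
    lo, hi = INDOOR_RANGE
    if lo < classification_param < hi:
        return "indoor"
    lo, hi = OUTDOOR_RANGE
    if lo < classification_param < hi:
        return "outdoor"
    return None
```

Because the two ranges must not overlap, at most one branch can fire for any classification parameter.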
Further, after determining the scene prediction result corresponding to the current image according to the classification parameter, the first terminal may further process the current image using the scene prediction result. Specifically, the first terminal may use the scene prediction result to perform white balance processing, brightness adjustment processing, and the like on the current image.
Exemplarily, if the scene prediction result is an outdoor scene, then when performing automatic white balance on the current image using that result, the color temperature and the color deviation value duv can be set directly, and a fairly ideal white balance effect can be obtained, yielding a white-balanced image of better quality. For example, for a large-area solid-color scene, when AWB processing uses the scene prediction result, the processing parameters are R/G=1.000 and B/G=1.008; without the scene prediction result, the processing parameters are R/G=0.9712 and B/G=1.0594.
That is to say, an accurate scene prediction result is very important for the application of the AWB algorithm: when the scene is determined to be outdoor, the AWB algorithm can rather simply set the color temperature to 5000~5000K and the color deviation value to 0.001~0.005 to obtain a fairly ideal white balance effect. For low-brightness outdoor scenes lacking a sky reference, and for large-area solid-color scenes, an even more ideal processing effect can be obtained.
Exemplarily, if the scene prediction result is an outdoor scene, then when using that result to adjust the brightness of the current image through the AE algorithm, the influence of flicker no longer needs to be considered; instead, motion blur is suppressed directly by reducing the exposure time, so that the adjusted image solves the blur problem. For example, for an outdoor scene, brightness adjustment using the scene prediction result can directly reduce the exposure time to suppress motion blur; by contrast, without the scene prediction result, the influence of flicker would have to be considered.
In the image processing method proposed by this embodiment of the application, the first terminal detects, through a color temperature sensor, the first infrared information, the second infrared information, and the visible light component corresponding to the current image, where the first infrared information and the second infrared information are acquired by the color temperature sensor using two different transmitting and receiving bands; generates the first infrared characteristic value and the second infrared characteristic value according to the first infrared information, the second infrared information, and the visible light component; and obtains, based on the preset classification model, the scene prediction result according to the brightness parameter corresponding to the current image, the first infrared characteristic value, and the second infrared characteristic value, so as to perform image processing according to the scene prediction result, where the preset classification model is used to classify multiple scenes according to differences in spectral energy. It can thus be seen that the image processing method proposed in this embodiment first uses the color temperature sensor to collect the visible light component and two different pieces of infrared information from the spectrum corresponding to the current image, then uses them to determine two corresponding infrared characteristic values, and, combined with the brightness parameter corresponding to the current image, performs scene prediction for the current image based on the preset classification model, where the preset classification model is obtained by training and testing on the infrared feature data and brightness feature data of the images in the pre-stored image library. In other words, the terminal trains the preset classification model with the infrared and brightness features of images, then predicts the scene of the current image based on that model from its infrared and brightness features, and can then perform image processing according to the scene prediction result; this reduces the complexity of prediction, improves prediction efficiency, and at the same time improves the accuracy of scene prediction, thereby improving the image processing effect.
Based on the above embodiments, when the first terminal obtains the scene prediction result based on the preset classification model according to the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image, so as to perform image processing according to the scene prediction result, it may input the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value into the preset classification model and output the classification parameter.
In the embodiment of this application, after generating the first and second infrared characteristic values and obtaining the brightness parameter, the first terminal may input the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value into the preset classification model, thereby outputting the classification parameter corresponding to the current image.
Further, the first terminal may take brightness parameters such as the normalized aperture value parameter, the normalized shutter speed parameter, and the normalized sensitivity parameter as the brightness features corresponding to the current image, and the first and second infrared characteristic values as its infrared features; that is, after the first terminal inputs the brightness features and infrared features corresponding to the current image into the preset classification model, it can output the classification parameter representing the scene type of the current image.
Further, when generating the first and second infrared characteristic values according to the first infrared information, the second infrared information, and the visible light component, the first terminal may specifically first perform time-frequency transformation on the first infrared information and the second infrared information respectively, thereby obtaining the first DC component corresponding to the first infrared information and the second DC component corresponding to the second infrared information; it can then use the first DC component, the second DC component, and the visible light component to further generate the first and second infrared characteristic values.
It should be noted that, when generating the first infrared characteristic value, the first terminal may calculate it from the second DC component Dc(FD2) and the visible light component C according to the following formula (1):
IR1 = (Dc(FD2) - C) / Dc(FD2)               (1)
When generating the second infrared characteristic value, the first terminal may calculate it from the first DC component Dc(FD1) and the second DC component Dc(FD2) according to the following formula (2):
IR2 = (Dc(FD1) - Dc(FD2)) / Dc(FD2)              (2)
where the Dc operator denotes taking the DC component of the corresponding channel, i.e., FD1DC is Dc(FD1) and FD2DC is Dc(FD2).
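Formulas (1) and (2) translate directly into code; a sketch in Python with illustrative input values:

```python
def infrared_features(dc_fd1, dc_fd2, c):
    """Compute the two infrared characteristic values from the DC
    components of the two infrared channels and the visible light
    component C, per formulas (1) and (2):
        IR1 = (Dc(FD2) - C) / Dc(FD2)
        IR2 = (Dc(FD1) - Dc(FD2)) / Dc(FD2)
    """
    ir1 = (dc_fd2 - c) / dc_fd2
    ir2 = (dc_fd1 - dc_fd2) / dc_fd2
    return ir1, ir2

# Illustrative values: Dc(FD1)=1.0, Dc(FD2)=0.5, C=0.4
ir1, ir2 = infrared_features(1.0, 0.5, 0.4)
```

Both values are ratios, so they are insensitive to the overall light level and capture only how the infrared bands relate to each other and to the visible component.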
The image processing method proposed in this embodiment of the application first uses the color temperature sensor to collect the visible light component and two different pieces of infrared information from the spectrum corresponding to the current image, then uses them to determine two corresponding infrared characteristic values, and, combined with the brightness parameter corresponding to the current image, performs scene prediction for the current image based on the preset classification model, where the preset classification model is obtained by training and testing on the infrared feature data and brightness feature data of the images in the pre-stored image library. In other words, the terminal trains the preset classification model with the infrared and brightness features of images, then predicts the scene of the current image based on that model from its infrared and brightness features, and can then perform image processing according to the scene prediction result; this reduces the complexity of prediction, improves prediction efficiency, and at the same time improves the accuracy of scene prediction, thereby improving the image processing effect.
A further embodiment of this application proposes an image processing method. FIG. 11 is a second schematic flowchart of the implementation of the image processing method. As shown in FIG. 11, the method for the second terminal to perform image processing may include the following steps:
Step 201: Divide a pre-stored image library to obtain training data and test data, where the pre-stored image library stores multiple images of different scenes, and different scenes correspond to different spectral energies.
In the embodiment of this application, the second terminal may first divide the pre-stored image library, thereby obtaining training data and test data. Specifically, the pre-stored image library may store multiple images of different scenes, and these images correspond to different spectral energies.
Further, the second terminal may be any device with communication and storage functions, for example: a tablet computer, a mobile phone, an e-reader, a remote control, a personal computer (PC), a notebook computer, an in-vehicle device, a network television, a wearable device, and the like.
Specifically, the second terminal may be a device that trains the preset classification model; at the same time, the second terminal may also be a device that performs image processing using the preset classification model. That is, in this application, the first terminal and the second terminal may be the same device.
It should be noted that the pre-stored image library may be used to train and test the preset classification model.
Further, the pre-stored image library may include multiple images of indoor scenes and multiple images of outdoor scenes. Further, the terminal may randomly divide the images of different scenes in the pre-stored image library, thereby obtaining training data and test data, where the training data and the test data are completely different; that is, the data corresponding to any one image in the pre-stored image library can only be either training data or test data.
It should be noted that, when obtaining training data and test data by dividing the pre-stored image library, the terminal may first divide the images of different scenes into training images and test images. Specifically, the second terminal needs to follow the principle that training images and test images do not overlap; that is, any image in the pre-stored image library can only be either a training image or a test image.
Exemplarily, the pre-stored image library stored by the second terminal contains 1024 images of indoor scenes and 1134 images of outdoor scenes; when training the preset classification model, the second terminal may randomly take 80% of the images from the library as training images and 20% as test images.
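The random 80/20 division described above can be sketched as follows (the seed and label tuples are illustrative):

```python
import random

def split_library(images, train_ratio=0.8, seed=0):
    """Randomly divide the pre-stored image library into disjoint
    training and test images: each image lands in exactly one set."""
    rng = random.Random(seed)
    shuffled = images[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# 1024 indoor + 1134 outdoor images, as in the example above.
library = ([("indoor", i) for i in range(1024)]
           + [("outdoor", i) for i in range(1134)])
train_images, test_images = split_library(library)
```

Shuffling before the cut keeps both scene classes represented in both halves, and taking a slice guarantees the two sets are disjoint, matching the non-overlap principle stated above.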
Further, after dividing the pre-stored image library into training images and test images, the second terminal may generate the training data according to the first infrared feature data and the first brightness feature data corresponding to the training images, and may also generate the test data according to the second infrared feature data and the second brightness feature data corresponding to the test images.
It should be noted that, when training the preset classification model, the second terminal needs to combine the brightness information and infrared information of the images; therefore, the training data includes the infrared and brightness information corresponding to the training images, i.e., the first infrared feature data and the first brightness feature data, while the test data includes the infrared and brightness information corresponding to the test images, i.e., the second infrared feature data and the second brightness feature data.
Further, the first infrared feature data may include two different infrared DC components corresponding to the training images; correspondingly, the second infrared feature data may include two different infrared DC components corresponding to the test images.
Further, the first brightness feature data may include the aperture value parameter, the shutter speed parameter, and the sensitivity parameter corresponding to the training images; correspondingly, the second brightness feature data may include the aperture value parameter, the shutter speed parameter, and the sensitivity parameter corresponding to the test images.
It can thus be seen that the second terminal needs five pieces of feature information when training the preset classification model, specifically including two different infrared DC components and three brightness-representing parameters: the aperture value parameter, the shutter speed parameter, and the sensitivity parameter.
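The five features per image described above can be assembled into a single vector; a sketch using illustrative values (the helper name is not from the patent):

```python
def feature_vector(dc_fd1, dc_fd2, av_n, tv_n, sv_n):
    """Five features per image: the two infrared DC components and
    the three normalized brightness parameters (Av, Tv, Sv)."""
    return [dc_fd1, dc_fd2, av_n, tv_n, sv_n]

# Illustrative sample: IR DC components plus normalized Av/Tv/Sv.
x = feature_vector(1.0, 0.5, 0.25, 0.55, 0.575)
```

One such vector per library image, paired with its indoor/outdoor label, forms the training and test data fed to the classifier.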
Step 202: Train a preset loss function with the training data to obtain an initial classification model.
In the embodiment of this application, after dividing the pre-stored image library to obtain training data and test data, the second terminal may first train the preset loss function with the training data, thereby obtaining the initial classification model.
It should be noted that the second terminal may train the preset classification model using typical classification models such as a logistic regression model, a Bayesian classifier, ensemble learning, a decision tree, or an SVM model. Exemplarily, when the second terminal trains with an SVM model, the preset loss function may be the hinge loss shown in the following formula (3):
y = max(0, 1 - x)               (3)
where y denotes the hinge loss and x denotes the functional margin, i.e., x = y(w×x+b). Specifically, when training based on formula (3), the second terminal needs to compute w and b under the premise of minimizing the hinge loss, so that the preset classification model can be obtained.
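Formula (3) is the standard hinge loss, with the functional margin x = y(w×x+b); a per-sample sketch in Python (variable names illustrative):

```python
def hinge_loss(w, b, features, label):
    """Hinge loss for one sample: max(0, 1 - y * (w·x + b)),
    with label y in {-1, +1} (e.g. indoor = -1, outdoor = +1)."""
    score = sum(wi * xi for wi, xi in zip(w, features)) + b
    margin = label * score  # functional margin
    return max(0.0, 1.0 - margin)

# A correctly classified sample with margin >= 1 incurs zero loss.
loss_ok = hinge_loss([1.0, 0.0], 0.0, [2.0, 3.0], +1)
# A misclassified sample is penalized linearly in the margin deficit.
loss_bad = hinge_loss([1.0, 0.0], 0.0, [2.0, 3.0], -1)
```

Training then searches for the w and b that minimize the average of this loss over the training data (typically with a regularization term), which is what "computing w and b under the premise of minimizing the hinge loss" refers to.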
It can be understood that, when training the preset loss function with the training data, the second terminal may train it according to the first infrared feature data and the first brightness feature data, thereby obtaining the initial classification model.
Further, when training the initial classification model with the training data, since the training data includes five pieces of feature information, the second terminal may, in choosing the training parameters, opt to train the initial classification model with a linear kernel; specifically, the step size is 0.01 and gamma is 60000.
Step 203: Obtain a preset classification model according to the test data and the initial classification model, where the preset classification model is used to classify multiple scenes according to differences in spectral energy.
In the embodiment of this application, after training the preset loss function with the training data to obtain the initial classification model, the second terminal may continue to obtain the preset classification model according to the test data and the initial classification model.
It should be noted that the preset classification model may be used to classify multiple scenes according to differences in spectral energy to obtain the scene type. Specifically, it may be a classifier trained based on infrared features and brightness features; that is, the first terminal can use the preset classification model to distinguish outdoor scenes from indoor scenes according to differences in spectral energy.
Further, after completing the training of the initial classification model based on the training data, the second terminal may test the initial classification model with the test data, thereby obtaining the preset classification model.
It should be noted that, when obtaining the preset classification model according to the test data and the initial classification model, the second terminal may first test the initial classification model with the second infrared feature data and the second brightness feature data to obtain a test result, and may then correct the initial classification model according to the test result to finally obtain the preset classification model.
It can be understood that the test result may be an accuracy parameter. Specifically, when testing the initial classification model with the test data, the second terminal may obtain the accuracy parameter corresponding to the test data according to the test data and the initial classification model; if the accuracy parameter is smaller than a preset accuracy threshold, the second terminal may adjust the initial classification model according to the test data, thereby obtaining the preset classification model.
It can thus be seen that the second terminal may feed the test data into the trained initial classification model for testing, verify the accuracy of the model to obtain the accuracy parameter corresponding to the test data, and then, according to the accuracy parameter, feed the misjudged test data back into the initial classification model for fine-tuning, thereby improving the generalization of the initial classification model and finally obtaining the preset classification model.
Exemplarily, the second terminal may continuously train the preset classification model for different numbers of rounds based on the pre-stored image library. Specifically, for different rounds of training, the training data and test data obtained by the second terminal's division are different, and the final results also differ. Table 1 shows the test result statistics; as shown in Table 1, the different preset classification models trained on different training and test data also differ in scene prediction accuracy and overall accuracy.
Table 1

Test round   Indoor accuracy   Outdoor accuracy   Overall accuracy
1            96.41%            96.36%             96.38%
2            95.52%            96.87%             96.19%
3            96.21%            96.73%             96.47%
4            96.79%            96.12%             96.45%
5            96.33%            96.38%             96.35%
Average      96.25%            96.49%             96.36%
It can thus be seen that, after continuously training the preset classification model for different numbers of rounds based on the pre-stored image library and obtaining different preset classification models, the second terminal may select the preset classification model with the better accuracy for image processing.
In this application, light in the 380 nm–780 nm range of the spectrum can be perceived by the human eye; we call it the visible light band. The region beyond 800 nm is usually called the infrared band and cannot be perceived by the human eye. FIG. 12 is a schematic diagram of the spectral energy distribution of a fluorescent lamp, FIG. 13 of daylight, and FIG. 14 of an incandescent lamp. As shown in FIGS. 12, 13, and 14, from the spectral energy distributions of different light sources such as fluorescent lamps, daylight, and incandescent lamps, it can be seen that under an indoor fluorescent lamp the energy of the 800 nm–900 nm infrared band is very weak, whereas under daylight there is still quite strong energy in the 800 nm–900 nm band, which begins to attenuate sharply after 950 nm; by contrast, the energy of an incandescent lamp grows increasingly strong across the 800 nm–1000 nm infrared band. Therefore, this application can directly use the infrared band information detected by the color temperature sensor to obtain discriminative feature information. That is, the second terminal may use the infrared information obtained by the color temperature sensor as the feature information for training the preset classification model, and correspondingly, as the feature information for scene prediction based on the preset classification model.
It should be noted that, when generating the preset classification model, the terminal trains it based on parameters such as the infrared feature data and brightness feature data corresponding to the pre-stored image library; therefore, when performing image processing based on the trained preset classification model, the terminal can use the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image to determine the scene type of the current image.
That is to say, whether during the generation of the preset classification model or during its use, the feature information of the images that the terminal needs includes both the corresponding infrared features and the corresponding brightness features.
In the image processing method proposed by this embodiment of the application, the second terminal divides the pre-stored image library to obtain training data and test data, where the pre-stored image library stores multiple images of different scenes and different scenes correspond to different spectral energies; trains the preset loss function with the training data to obtain the initial classification model; and obtains the preset classification model according to the test data and the initial classification model. It can thus be seen that the image processing method proposed in this embodiment first uses the color temperature sensor to collect the visible light component and two different pieces of infrared information from the spectrum corresponding to the current image, then uses them to determine two corresponding infrared characteristic values, and, combined with the brightness parameter corresponding to the current image, performs scene prediction for the current image based on the preset classification model, where the preset classification model is obtained by training and testing on the infrared feature data and brightness feature data of the images in the pre-stored image library. In other words, the terminal trains the preset classification model with the infrared and brightness features of images, then predicts the scene of the current image based on that model from its infrared and brightness features, and can then perform image processing according to the scene prediction result; this reduces the complexity of prediction, improves prediction efficiency, and at the same time improves the accuracy of scene prediction, thereby improving the image processing effect.
Based on the foregoing embodiments, in a further embodiment of the present application, Fig. 15 is a first schematic diagram of the composition of the first terminal. As shown in Fig. 15, the first terminal 1 proposed in the embodiments of the present application may include a detection part 11, a generation part 12, a first acquisition part 13 and a processing part 14.
The detection part 11 is configured to detect, through a color temperature sensor, first infrared information, second infrared information and a visible-light component corresponding to a current image, wherein the first infrared information and the second infrared information are respectively acquired by the color temperature sensor using two different transceiving bands;
The generation part 12 is configured to generate a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information and the visible-light component;
The first acquisition part 13 is configured to obtain, based on a preset classification model, a scene prediction result according to a luminance parameter corresponding to the current image, the first infrared characteristic value and the second infrared characteristic value, so as to perform image processing according to the scene prediction result; wherein the preset classification model is used to classify multiple scenes according to differences in spectral energy.
Further, in the embodiments of the present application, the first acquisition part 13 is specifically configured to input the luminance parameter, the first infrared characteristic value and the second infrared characteristic value into the preset classification model and output a classification parameter; when the classification parameter falls within a first preset value range, determine that the scene prediction result is an indoor scene; and when the classification parameter falls within a second preset value range, determine that the scene prediction result is an outdoor scene; wherein the first preset value range and the second preset value range do not overlap.
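The mapping from the classification parameter to a scene label via two non-overlapping value ranges can be written as a simple post-processing step; the concrete bounds below are illustrative assumptions, not values from the application.

```python
def predict_scene(classification_parameter,
                  indoor_range=(0.0, 0.5),
                  outdoor_range=(0.5, 1.0)):
    """Map the model's output value to a scene label using two non-overlapping
    preset value ranges (the bounds here are hypothetical defaults)."""
    low_in, high_in = indoor_range
    low_out, high_out = outdoor_range
    if low_in <= classification_parameter < high_in:
        return "indoor"
    if low_out <= classification_parameter <= high_out:
        return "outdoor"
    return "unknown"
```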
Further, in the embodiments of the present application, the generation part 12 is specifically configured to perform time-frequency transform processing on the first infrared information to obtain a first direct-current component; perform time-frequency transform processing on the second infrared information to obtain a second direct-current component; determine the first infrared characteristic value according to the second direct-current component and the visible-light component; and determine the second infrared characteristic value according to the first direct-current component and the second direct-current component.
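A minimal sketch of this path, assuming the direct-current component is taken as the zero-frequency term of a discrete Fourier transform over a sampling window, and assuming the two characteristic values are simple ratios of the DC components and the visible-light component; the ratio form is an assumption consistent with, but not fixed by, the description.

```python
import numpy as np

def dc_component(samples):
    """DC (zero-frequency) term of the DFT of a sampled infrared signal,
    normalized by the window length - i.e. the mean channel intensity."""
    samples = np.asarray(samples, dtype=float)
    return float(np.abs(np.fft.fft(samples)[0]) / len(samples))

def infrared_characteristic_values(ir1_samples, ir2_samples, visible, eps=1e-6):
    """First characteristic value from the second DC component and the visible
    component; second characteristic value from the two DC components."""
    dc1 = dc_component(ir1_samples)
    dc2 = dc_component(ir2_samples)
    value1 = dc2 / (visible + eps)
    value2 = dc1 / (dc2 + eps)
    return value1, value2
```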
Further, in the embodiments of the present application, the first acquisition part 13 is further configured to, before the scene prediction result is obtained based on the preset classification model according to the luminance parameter, the first infrared characteristic value and the second infrared characteristic value corresponding to the current image for image processing, read an attribute parameter corresponding to the current image, and normalize the attribute parameter to obtain the luminance parameter.
Further, in the embodiments of the present application, the attribute parameter includes an aperture value parameter, a shutter speed parameter and a sensitivity parameter.
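One way to normalize the three attribute parameters into a single luminance parameter is through the standard exposure-value relation; the squashing into [0, 1] by an assumed 20-EV span is an illustrative choice, not the application's actual normalization.

```python
import math

def brightness_parameter(aperture_f, shutter_s, iso):
    """Collapse aperture value, shutter speed and sensitivity into one
    normalized brightness value using EV = log2(N^2 / t) - log2(ISO / 100),
    then clamp EV / 20 into [0, 1] (the 20-EV span is an assumption)."""
    ev = math.log2(aperture_f ** 2 / shutter_s) - math.log2(iso / 100)
    return min(max(ev / 20.0, 0.0), 1.0)
```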
Further, in the embodiments of the present application, the processing part 14 is specifically configured to perform automatic white balance processing on the current image using the scene prediction result to obtain a white-balanced image.
Further, in the embodiments of the present application, the processing part 14 is further specifically configured to perform luminance adjustment on the current image using the scene prediction result to obtain an adjusted image.
Further, in the embodiments of the present application, the first terminal is provided with a front camera on its front cover and a rear camera on its rear cover; the color temperature sensor is disposed in a first region of the front cover, wherein the first region denotes a region adjacent to the front camera; or, the color temperature sensor is disposed in a second region of the rear cover, wherein the second region denotes a region adjacent to the rear camera.
Further, in the embodiments of the present application, a slit is provided at the top of the first terminal, and the color temperature sensor is disposed in the slit.
Fig. 16 is a second schematic diagram of the composition of the first terminal. As shown in Fig. 16, the first terminal 1 proposed in the embodiments of the present application may further include a first processor 15 and a first memory 16 storing instructions executable by the first processor 15; further, the first terminal 1 may also include a first communication interface 17 and a first bus 18 for connecting the first processor 15, the first memory 16 and the first communication interface 17.
In the embodiments of the present application, the first processor 15 may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller and a microprocessor. It can be understood that, for different devices, the electronic component used to implement the above processor function may also be something else, which is not specifically limited in the embodiments of the present application. The first terminal 1 may further include a first memory 16, which may be connected to the first processor 15, wherein the first memory 16 is used to store executable program code including computer operation instructions; the first memory 16 may comprise high-speed RAM and may also comprise non-volatile memory, for example at least two disk memories.
In the embodiments of the present application, the first bus 18 is used to connect the first communication interface 17, the first processor 15 and the first memory 16 and to provide mutual communication among these components.
In the embodiments of the present application, the first memory 16 is used to store instructions and data.
Further, in the embodiments of the present application, the first processor 15 is configured to detect, through a color temperature sensor, first infrared information, second infrared information and a visible-light component corresponding to a current image, wherein the first infrared information and the second infrared information are respectively acquired by the color temperature sensor using two different transceiving bands; generate a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information and the visible-light component; and obtain, based on a preset classification model, a scene prediction result according to a luminance parameter corresponding to the current image, the first infrared characteristic value and the second infrared characteristic value, so as to perform image processing according to the scene prediction result; wherein the preset classification model is used to classify multiple scenes according to differences in spectral energy.
In practical applications, the first memory 16 may be a volatile memory, such as a Random-Access Memory (RAM); or a non-volatile memory, such as a Read-Only Memory (ROM), a flash memory, a Hard Disk Drive (HDD) or a Solid-State Drive (SSD); or a combination of the above kinds of memories, and it provides instructions and data to the first processor 15.
In addition, the functional modules in this embodiment may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented either in the form of hardware or in the form of a software functional module.
If the integrated unit is implemented in the form of a software functional module and is not sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this embodiment, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) or a processor to execute all or part of the steps of the method of this embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disc.
The embodiments of the present application provide a first terminal. The first terminal detects, through a color temperature sensor, first infrared information, second infrared information and a visible-light component corresponding to a current image, wherein the first infrared information and the second infrared information are respectively acquired by the color temperature sensor using two different transceiving bands; generates a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information and the visible-light component; and obtains, based on a preset classification model, a scene prediction result according to a luminance parameter corresponding to the current image, the first infrared characteristic value and the second infrared characteristic value, so as to perform image processing according to the scene prediction result, wherein the preset classification model is used to classify multiple scenes according to differences in spectral energy. It can thus be seen that, with the image processing method proposed in the embodiments of the present application, a color temperature sensor may first be used to collect the visible-light component and two different pieces of infrared information in the spectrum corresponding to the current image; the two corresponding infrared characteristic values are then determined from the visible-light component and the two pieces of infrared information and, combined with the luminance parameter corresponding to the current image, scene prediction for the current image is performed based on the preset classification model, the preset classification model being obtained through training and testing based on the infrared feature data and luminance feature data of the images in the pre-stored image library. That is, in the present application, the terminal trains the preset classification model with the infrared and luminance features of images and then, based on the preset classification model, predicts the scene of the current image from its infrared and luminance features, so that image processing can be performed according to the scene prediction result. This reduces the complexity of prediction and thus improves prediction efficiency, while also improving the precision of scene prediction and hence the image processing effect.
Based on the foregoing embodiments, in a further embodiment of the present application, Fig. 17 is a first schematic diagram of the composition of the second terminal. As shown in Fig. 17, the second terminal 2 proposed in the embodiments of the present application may include a partitioning part 21 and a second acquisition part 22.
The partitioning part 21 is configured to partition a pre-stored image library to obtain training data and test data; wherein the pre-stored image library stores multiple images of different scenes, and different scenes correspond to different spectral energies;
The second acquisition part 22 is configured to train a preset loss function with the training data to obtain an initial classification model, and to obtain a preset classification model according to the test data and the initial classification model; wherein the preset classification model is used to classify multiple scenes according to differences in spectral energy.
Further, in the embodiments of the present application, the partitioning part 21 is specifically configured to divide the multiple images into training images and test images; generate the training data according to first infrared feature data and first luminance feature data corresponding to the training images; and generate the test data according to second infrared feature data and second luminance feature data corresponding to the test images.
Further, in the embodiments of the present application, the second acquisition part 22 is specifically configured to train the preset loss function according to the first infrared feature data and the first luminance feature data to obtain the initial classification model.
Further, in the embodiments of the present application, the second acquisition part 22 is further specifically configured to test the initial classification model with the second infrared feature data and the second luminance feature data to obtain a test result, and to correct the initial classification model according to the test result to obtain the preset classification model.
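The partitioning step can be sketched as follows; the 80/20 split ratio, the fixed shuffle seed and the flat list-of-samples layout are illustrative assumptions, not details fixed by the application.

```python
import random

def split_image_library(samples, test_fraction=0.2, seed=0):
    """Shuffle the pre-stored library and split it into training data and
    test data; each element of `samples` stands for one image's feature
    record (infrared features, luminance features, scene label)."""
    rng = random.Random(seed)          # seeded so the split is reproducible
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]
```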
Fig. 18 is a second schematic diagram of the composition of the second terminal. As shown in Fig. 18, the second terminal 2 proposed in the embodiments of the present application may further include a second processor 23 and a second memory 24 storing instructions executable by the second processor 23; further, the second terminal 2 may also include a second communication interface 25 and a second bus 26 for connecting the second processor 23, the second memory 24 and the second communication interface 25.
In the embodiments of the present application, the second terminal 2 may further include a second memory 24, which may be connected to the second processor 23, wherein the second memory 24 is used to store executable program code including computer operation instructions; the second memory 24 may comprise high-speed RAM and may also comprise non-volatile memory, for example at least two disk memories.
In the embodiments of the present application, the second bus 26 is used to connect the second communication interface 25, the second processor 23 and the second memory 24 and to provide mutual communication among these components.
In the embodiments of the present application, the second memory 24 is used to store instructions and data.
Further, in the embodiments of the present application, the second processor 23 is configured to partition a pre-stored image library to obtain training data and test data, wherein the pre-stored image library stores multiple images of different scenes, and different scenes correspond to different spectral energies; train a preset loss function with the training data to obtain an initial classification model; and obtain a preset classification model according to the test data and the initial classification model; wherein the preset classification model is used to classify multiple scenes according to differences in spectral energy.
The embodiments of the present application provide a second terminal. The second terminal partitions a pre-stored image library to obtain training data and test data, wherein the pre-stored image library stores multiple images of different scenes, and different scenes correspond to different spectral energies; trains a preset loss function with the training data to obtain an initial classification model; and obtains a preset classification model according to the test data and the initial classification model. It can thus be seen that, with the image processing method proposed in the embodiments of the present application, a color temperature sensor may first be used to collect the visible-light component and two different pieces of infrared information in the spectrum corresponding to the current image; the two corresponding infrared characteristic values are then determined from the visible-light component and the two pieces of infrared information and, combined with the luminance parameter corresponding to the current image, scene prediction for the current image is performed based on the preset classification model, the preset classification model being obtained through training and testing based on the infrared feature data and luminance feature data of the images in the pre-stored image library. That is, in the present application, the terminal trains the preset classification model with the infrared and luminance features of images and then, based on the preset classification model, predicts the scene of the current image from its infrared and luminance features, so that image processing can be performed according to the scene prediction result. This reduces the complexity of prediction and thus improves prediction efficiency, while also improving the precision of scene prediction and hence the image processing effect.
The embodiments of the present application provide a computer-readable storage medium on which a program is stored; when the program is executed by a processor, the image processing method described above is implemented.
Specifically, the program instructions corresponding to the image processing method in this embodiment may be stored on a storage medium such as an optical disc, a hard disk or a USB flash drive; when the program instructions corresponding to the image processing method in the storage medium are read or executed by an electronic device, the following steps are included:
detecting, through a color temperature sensor, first infrared information, second infrared information and a visible-light component corresponding to a current image; wherein the first infrared information and the second infrared information are respectively acquired by the color temperature sensor using two different transceiving bands;
generating a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information and the visible-light component;
obtaining, based on a preset classification model, a scene prediction result according to a luminance parameter corresponding to the current image, the first infrared characteristic value and the second infrared characteristic value, so as to perform image processing according to the scene prediction result; wherein the preset classification model is used to classify multiple scenes according to differences in spectral energy.
When the program instructions corresponding to the image processing method in the storage medium are read or executed by an electronic device, the following steps are further included:
partitioning a pre-stored image library to obtain training data and test data; wherein the pre-stored image library stores multiple images of different scenes, and different scenes correspond to different spectral energies;
training a preset loss function with the training data to obtain an initial classification model;
obtaining a preset classification model according to the test data and the initial classification model; wherein the preset classification model is used to classify multiple scenes according to differences in spectral energy.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system or a computer program product. Therefore, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk memory, optical memory and the like) containing computer-usable program code.
The present application is described with reference to schematic flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the schematic flowcharts and/or block diagrams, and combinations of flows and/or blocks in the schematic flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the schematic flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the schematic flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps is executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the schematic flowcharts and/or one or more blocks of the block diagrams.
Industrial Applicability
The embodiments of the present application provide an image processing method, a terminal and a storage medium. A first terminal detects, through a color temperature sensor, first infrared information, second infrared information and a visible-light component corresponding to a current image, wherein the first infrared information and the second infrared information are respectively acquired by the color temperature sensor using two different transceiving bands; generates a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information and the visible-light component; and obtains, based on a preset classification model, a scene prediction result according to a luminance parameter corresponding to the current image, the first infrared characteristic value and the second infrared characteristic value, so as to perform image processing according to the scene prediction result; wherein the preset classification model is used to classify multiple scenes according to differences in spectral energy. A second terminal partitions a pre-stored image library to obtain training data and test data, wherein the pre-stored image library stores multiple images of different scenes, and different scenes correspond to different spectral energies; trains a preset loss function with the training data to obtain an initial classification model; and obtains a preset classification model according to the test data and the initial classification model. It can thus be seen that, with the image processing method proposed in the embodiments of the present application, a color temperature sensor may first be used to collect the visible-light component and two different pieces of infrared information in the spectrum corresponding to the current image; the two corresponding infrared characteristic values are then determined from the visible-light component and the two pieces of infrared information and, combined with the luminance parameter corresponding to the current image, scene prediction for the current image is performed based on the preset classification model, the preset classification model being obtained through training and testing based on the infrared feature data and luminance feature data of the images in the pre-stored image library. That is, in the present application, the terminal trains the preset classification model with the infrared and luminance features of images and then, based on the preset classification model, predicts the scene of the current image from its infrared and luminance features, so that image processing can be performed according to the scene prediction result. This reduces the complexity of prediction and thus improves prediction efficiency, while also improving the precision of scene prediction and hence the image processing effect.

Claims (18)

  1. An image processing method, applied to a first terminal, the method comprising:
    detecting, through a color temperature sensor, first infrared information, second infrared information and a visible-light component corresponding to a current image; wherein the first infrared information and the second infrared information are respectively acquired by the color temperature sensor using two different transceiving bands;
    generating a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information and the visible-light component; and
    obtaining, based on a preset classification model, a scene prediction result according to a luminance parameter corresponding to the current image, the first infrared characteristic value and the second infrared characteristic value, so as to perform image processing according to the scene prediction result; wherein the preset classification model is used to classify multiple scenes according to differences in spectral energy.
  2. The method according to claim 1, wherein obtaining, based on the preset classification model, the scene prediction result according to the luminance parameter corresponding to the current image, the first infrared characteristic value and the second infrared characteristic value, so as to perform image processing according to the scene prediction result, comprises:
    inputting the luminance parameter, the first infrared characteristic value and the second infrared characteristic value into the preset classification model, and outputting a classification parameter;
    when the classification parameter falls within a first preset value range, determining that the scene prediction result is an indoor scene; and
    when the classification parameter falls within a second preset value range, determining that the scene prediction result is an outdoor scene; wherein the first preset value range and the second preset value range do not overlap.
  3. The method according to claim 1, wherein generating the first infrared characteristic value and the second infrared characteristic value according to the first infrared information, the second infrared information and the visible-light component comprises:
    performing time-frequency transform processing on the first infrared information to obtain a first direct-current component, and performing time-frequency transform processing on the second infrared information to obtain a second direct-current component;
    determining the first infrared characteristic value according to the second direct-current component and the visible-light component; and
    determining the second infrared characteristic value according to the first direct-current component and the second direct-current component.
  4. The method according to claim 1, wherein before obtaining, based on the preset classification model, the scene prediction result according to the luminance parameter corresponding to the current image, the first infrared characteristic value and the second infrared characteristic value, so as to perform image processing according to the scene prediction result, the method further comprises:
    reading an attribute parameter corresponding to the current image; and
    normalizing the attribute parameter to obtain the luminance parameter.
  5. The method according to claim 4, wherein the attribute parameter comprises an aperture value parameter, a shutter speed parameter and a sensitivity parameter.
  6. The method according to claim 1, wherein performing image processing according to the scene prediction result comprises:
    performing automatic white balance processing on the current image using the scene prediction result to obtain a white-balanced image.
  7. The method according to claim 1, wherein performing image processing according to the scene prediction result comprises:
    performing luminance adjustment on the current image using the scene prediction result to obtain an adjusted image.
  8. The method according to claim 1, wherein the first terminal is provided with a front camera on a front cover and a rear camera on a rear cover, and
    the color temperature sensor is disposed in a first region of the front cover; wherein the first region denotes a region adjacent to the front camera;
    or,
    the color temperature sensor is disposed in a second region of the rear cover; wherein the second region denotes a region adjacent to the rear camera.
  9. The method according to claim 1, wherein a slit is provided at a top of the first terminal, and
    the color temperature sensor is disposed in the slit.
  10. An image processing method, applied to a second terminal, the method comprising:
    partitioning a pre-stored image library to obtain training data and test data; wherein the pre-stored image library stores multiple images of different scenes, and different scenes correspond to different spectral energies;
    training a preset loss function with the training data to obtain an initial classification model; and
    obtaining a preset classification model according to the test data and the initial classification model; wherein the preset classification model is used to classify multiple scenes according to differences in spectral energy.
  11. The method according to claim 10, wherein partitioning the pre-stored image library to obtain the training data and the test data comprises:
    dividing the multiple images into training images and test images;
    generating the training data according to first infrared feature data and first luminance feature data corresponding to the training images; and
    generating the test data according to second infrared feature data and second luminance feature data corresponding to the test images.
  12. The method according to claim 11, wherein training the preset loss function with the training data to obtain the initial classification model comprises:
    training the preset loss function according to the first infrared feature data and the first luminance feature data to obtain the initial classification model.
  13. The method according to claim 11, wherein obtaining the preset classification model according to the test data and the initial classification model comprises:
    testing the initial classification model with the second infrared feature data and the second luminance feature data to obtain a test result; and
    correcting the initial classification model according to the test result to obtain the preset classification model.
  14. A first terminal, comprising a detection part, a generation part and a first acquisition part, wherein
    the detection part is configured to detect, through a color temperature sensor, first infrared information, second infrared information and a visible-light component corresponding to a current image; wherein the first infrared information and the second infrared information are respectively acquired by the color temperature sensor using two different transceiving bands;
    the generation part is configured to generate a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information and the visible-light component; and
    the first acquisition part is configured to obtain, based on a preset classification model, a scene prediction result according to a luminance parameter corresponding to the current image, the first infrared characteristic value and the second infrared characteristic value, so as to perform image processing according to the scene prediction result; wherein the preset classification model is used to classify multiple scenes according to differences in spectral energy.
  15. A second terminal, comprising a partitioning part and a second acquisition part, wherein
    the partitioning part is configured to partition a pre-stored image library to obtain training data and test data; wherein the pre-stored image library stores multiple images of different scenes, and different scenes correspond to different spectral energies; and
    the second acquisition part is configured to train a preset loss function with the training data to obtain an initial classification model, and to obtain a preset classification model according to the test data and the initial classification model; wherein the preset classification model is used to classify multiple scenes according to differences in spectral energy.
  16. A first terminal, comprising a first processor and a first memory storing instructions executable by the first processor, wherein when the instructions are executed by the first processor, the method according to any one of claims 1-9 is implemented.
  17. A second terminal, comprising a second processor and a second memory storing instructions executable by the second processor, wherein when the instructions are executed by the second processor, the method according to any one of claims 10-13 is implemented.
  18. A computer-readable storage medium on which a program is stored, applied to a first terminal and a second terminal, wherein when the program is executed by a processor, the method according to any one of claims 1-13 is implemented.
PCT/CN2020/135630 2019-12-12 2020-12-11 Image processing method, terminal and storage medium WO2021115419A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911271535.1A CN111027489B (zh) 2019-12-12 2019-12-12 Image processing method, terminal and storage medium
CN201911271535.1 2019-12-12

Publications (1)

Publication Number Publication Date
WO2021115419A1 true WO2021115419A1 (zh) 2021-06-17

Family

ID=70208843

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/135630 WO2021115419A1 (zh) 2019-12-12 2020-12-11 图像处理方法、终端及存储介质

Country Status (2)

Country Link
CN (1) CN111027489B (zh)
WO (1) WO2021115419A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071369A (zh) * 2022-12-13 2023-05-05 哈尔滨理工大学 Infrared image processing method and device

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN111027489B (zh) * 2019-12-12 2023-10-20 Oppo广东移动通信有限公司 Image processing method, terminal and storage medium
CN111918047A (zh) * 2020-07-27 2020-11-10 Oppo广东移动通信有限公司 Photographing control method and apparatus, storage medium and electronic device
CN112750448B (zh) * 2020-08-07 2024-01-16 腾讯科技(深圳)有限公司 Sound scene recognition method, apparatus, device and storage medium
CN114338962B (zh) * 2020-09-29 2023-04-18 华为技术有限公司 Imaging method and apparatus
CN115242949A (zh) * 2022-07-21 2022-10-25 Oppo广东移动通信有限公司 Camera module and electronic device

Citations (5)

Publication number Priority date Publication date Assignee Title
CN105898260A (zh) * 2016-04-07 2016-08-24 广东欧珀移动通信有限公司 Method and device for adjusting camera white balance
CN109685746A (zh) * 2019-01-04 2019-04-26 Oppo广东移动通信有限公司 Image brightness adjustment method and apparatus, storage medium and terminal
CN109784237A (zh) * 2018-12-29 2019-05-21 北京航天云路有限公司 Scene classification method based on transfer-learning residual network training
CN109977731A (zh) * 2017-12-27 2019-07-05 深圳市优必选科技有限公司 Scene recognition method, recognition device and terminal device
CN111027489A (zh) * 2019-12-12 2020-04-17 Oppo广东移动通信有限公司 Image processing method, terminal and storage medium

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
WO2009013725A1 (en) * 2007-07-25 2009-01-29 Nxp B.V. Indoor/outdoor detection
CN103493212B (zh) * 2011-03-29 2016-10-12 欧司朗光电半导体有限公司 Unit for determining the type of a dominant light source by means of two photodiodes
KR101766029B1 (ko) * 2015-08-26 2017-08-08 주식회사 넥서스칩스 Illuminance detection apparatus and method
CN105455781A (zh) * 2015-11-17 2016-04-06 努比亚技术有限公司 Information processing method and electronic device
CN106993175B (zh) * 2016-01-20 2019-08-20 瑞昱半导体股份有限公司 Method of generating a pixel screening range for use in automatic white balance correction
CN107622281B (zh) * 2017-09-20 2021-02-05 Oppo广东移动通信有限公司 Image classification method and apparatus, storage medium and mobile terminal
CN113890989B (zh) * 2017-10-14 2023-07-11 华为技术有限公司 Photographing method and electronic apparatus
CN108304821B (zh) * 2018-02-14 2020-12-18 Oppo广东移动通信有限公司 Image recognition method and apparatus, image acquisition method and device, computer device and non-volatile computer-readable storage medium
CN108881876B (zh) * 2018-08-17 2021-02-02 Oppo广东移动通信有限公司 Method, apparatus and electronic device for performing white balance processing on an image
CN110233971B (zh) * 2019-07-05 2021-07-09 Oppo广东移动通信有限公司 Photographing method, terminal and computer-readable storage medium

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN105898260A (zh) * 2016-04-07 2016-08-24 广东欧珀移动通信有限公司 Method and device for adjusting camera white balance
CN109977731A (zh) * 2017-12-27 2019-07-05 深圳市优必选科技有限公司 Scene recognition method, recognition device and terminal device
CN109784237A (zh) * 2018-12-29 2019-05-21 北京航天云路有限公司 Scene classification method based on transfer-learning residual network training
CN109685746A (zh) * 2019-01-04 2019-04-26 Oppo广东移动通信有限公司 Image brightness adjustment method and apparatus, storage medium and terminal
CN111027489A (zh) * 2019-12-12 2020-04-17 Oppo广东移动通信有限公司 Image processing method, terminal and storage medium

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN116071369A (zh) * 2022-12-13 2023-05-05 哈尔滨理工大学 Infrared image processing method and device
CN116071369B (zh) * 2022-12-13 2023-07-14 哈尔滨理工大学 Infrared image processing method and device

Also Published As

Publication number Publication date
CN111027489A (zh) 2020-04-17
CN111027489B (zh) 2023-10-20

Similar Documents

Publication Publication Date Title
WO2021115419A1 (zh) Image processing method, terminal and storage medium
US11210768B2 (en) Digital image auto exposure adjustment
US10949958B2 (en) Fast fourier color constancy
US10120267B2 (en) System and method for re-configuring a lighting arrangement
US20160071289A1 (en) Image composition device, image composition method, and recording medium
US20170064179A1 (en) Method and Apparatus for Auto Exposure Value Detection for High Dynamic Range Imaging
US9460521B2 (en) Digital image analysis
CN106101541A (zh) Terminal, photographing device and person-emotion-based photographing method thereof
JP7152065B2 (ja) Image processing device
CN113452980B (zh) Image processing method, terminal and storage medium
US20210021750A1 (en) Method and Device for Balancing Foreground-Background Luminosity
US20140125836A1 (en) Robust selection and weighting for gray patch automatic white balancing
TW201444336A (zh) Skin-color optimization method and device for a color gamut conversion system
US11457189B2 (en) Device for and method of correcting white balance of image
TW201503689A (zh) Depth-of-field segmentation method and system for objects
WO2024007948A1 (zh) Strobe image processing method and apparatus, electronic device and readable storage medium
US11687316B2 (en) Audio based image capture settings
CN110909696B (zh) Scene detection method and apparatus, storage medium and terminal device
CN110929663B (zh) Scene prediction method, terminal and storage medium
US8953063B2 (en) Method for white balance adjustment
CN109903248B (zh) Method for generating an automatic white balance model and image processing method
US20160292825A1 (en) System and method to refine image data
Hussain et al. Colour constancy using sub-blocks of the image
CN114630095B (zh) Automatic white balance method and apparatus for a target scene image, and terminal
WO2022174456A1 (zh) Image white balance adjustment method and apparatus, photographing device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20898109

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20898109

Country of ref document: EP

Kind code of ref document: A1