CN115187559A - Illumination detection method and device for image, storage medium and electronic equipment - Google Patents


Info

Publication number
CN115187559A
Authority
CN
China
Prior art keywords
detection
data
image
target image
spectral
Prior art date
Legal status
Pending
Application number
CN202210860445.1A
Other languages
Chinese (zh)
Inventor
王琳
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202210860445.1A
Publication of CN115187559A
Legal status: Pending

Classifications

All classifications fall under G (PHYSICS), G06 (COMPUTING; CALCULATING OR COUNTING):
    • G06T 7/0002 - Image analysis: inspection of images, e.g. flaw detection
    • G06T 5/20 - Image enhancement or restoration by the use of local operators
    • G06T 5/80
    • G06T 7/90 - Image analysis: determination of colour characteristics
    • G06V 10/762 - Image or video recognition or understanding using pattern recognition or machine learning: clustering, e.g. of similar faces in social networks
    • G06V 10/764 - Image or video recognition or understanding using pattern recognition or machine learning: classification, e.g. of video objects
    • G06V 10/774 - Image or video recognition or understanding using pattern recognition or machine learning: generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06T 2207/20081 - Indexing scheme for image analysis or image enhancement: training; learning

Abstract

The disclosure provides an illumination detection method for an image, an illumination detection apparatus for an image, a computer-readable storage medium, and an electronic device, and relates to the technical field of computer vision. The illumination detection method comprises: acquiring a target image acquired by an image sensor and spectral data of a plurality of detection regions acquired by a plurality of spectral sensors, where each spectral sensor corresponds to one detection region and each detection region corresponds to one sub-region in the target image; and analyzing the spectral data of each detection region to obtain illumination information of each sub-region in the target image. Aimed at the low accuracy and long latency of existing illumination information detection, the method detects the illumination information of the image based on spectral sensors and thereby improves the efficiency of illumination detection to a certain extent.

Description

Illumination detection method and device for image, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to an illumination detection method for an image, an illumination detection apparatus for an image, a computer-readable storage medium, and an electronic device.
Background
Techniques for detecting illumination in images have been applied to many scenarios. For example, the color of an image can be adjusted according to its illumination detection result so that the image displayed on a screen matches what the human eye perceives.
In the related art, illumination detection of an image is usually performed through a clustering algorithm; however, this method is easily misled by large areas of confusing color in the image, making the detection result inaccurate.
Disclosure of Invention
The present disclosure provides a method for detecting illumination for an image, an apparatus for detecting illumination for an image, a computer-readable storage medium, and an electronic device, thereby solving, at least to some extent, the problem of inaccurate illumination detection in the related art.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a lighting detection method for an image, comprising: acquiring a target image acquired by an image sensor and spectral data of a plurality of detection areas acquired by a plurality of spectral sensors; each spectral sensor corresponds to a detection area, and each detection area corresponds to a sub-area in the target image; and analyzing the spectrum data of each detection area to obtain the illumination information of each subarea in the target image.
According to a second aspect of the present disclosure, there is provided a lighting detection apparatus for an image, comprising: a data acquisition module configured to acquire a target image acquired by the image sensor and spectral data of a plurality of detection areas acquired by the plurality of spectral sensors; each spectral sensor corresponds to a detection area, and each detection area corresponds to a sub-area in the target image; and the illumination information detection module is configured to analyze the spectral data of each detection area to obtain illumination information of each sub-area in the target image.
According to a third aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, implements the illumination detection method for an image of the first aspect described above and possible implementations thereof.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; a memory for storing executable instructions of the processor. Wherein the processor is configured to perform the illumination detection method for an image of the first aspect described above and possible implementations thereof via execution of the executable instructions.
The technical scheme of the disclosure has the following beneficial effects:
according to the method, the target image requiring illumination detection and the spectral data, collected by the plurality of spectral sensors, of each sub-region in that image are first obtained, and the illumination information of each sub-region is then obtained by analyzing the spectral data. On the one hand, each spectral sensor acquires spectral data within its corresponding sub-region of the target image, which improves the acquisition precision of the spectral data; and because the illumination information of each sub-region is obtained from that sub-region's own spectral data, interference from one or more large areas of uniform color in the target image is strongly suppressed, improving the accuracy of illumination detection. On the other hand, the plurality of spectral sensors act on the target image simultaneously and acquire spectral data in parallel, which shortens the detection process, increases the detection speed, and improves the efficiency of illumination detection.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It should be apparent that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived by those of ordinary skill in the art without inventive effort.
FIG. 1 illustrates the distribution of illumination information of a target image obtained using a clustering algorithm;
FIG. 2 shows a clustering result of illumination detection using a clustering algorithm when multiple colors in a target image have high occupancy rates;
FIG. 3 illustrates the system architecture of the operating environment of the exemplary embodiment;
FIG. 4 shows a flowchart of a method for illumination detection of an image in the present exemplary embodiment;
FIG. 5 shows a schematic view of a detection region in the present exemplary embodiment;
FIG. 6 is a schematic diagram illustrating a process of acquiring illumination information of each sub-region in the present exemplary embodiment;
FIG. 7 is a diagram illustrating a light source classification model in accordance with an exemplary embodiment;
FIG. 8 is a schematic diagram showing a process of extracting feature data of each detection region in the present exemplary embodiment;
FIG. 9 is a schematic diagram illustrating a process of determining a light source of a target image according to illumination information in the present exemplary embodiment;
FIG. 10 shows a histogram of color temperatures and a histogram of color deviation values in the present exemplary embodiment;
FIG. 11 shows a flowchart for target image illumination detection in accordance with the present exemplary embodiment;
FIG. 12 is a schematic configuration diagram showing an illumination detection apparatus for an image in the present exemplary embodiment;
FIG. 13 is a schematic structural view showing a camera module according to the present exemplary embodiment;
FIG. 14A shows a schematic diagram of a conventional Bayer array arrangement;
FIG. 14B shows a schematic diagram of a quad Bayer array arrangement;
FIG. 15 shows a schematic diagram of a set of spectral splitters and a set of spectral sensors in the present exemplary embodiment;
FIG. 16 is a schematic view showing an array arrangement of spectral sensors in the present exemplary embodiment;
FIG. 17 is a schematic diagram showing light source distribution and window division in the present exemplary embodiment;
FIG. 18A shows an image after global color correction processing is performed on a reference image;
FIG. 18B shows an image after local color correction processing is performed on a reference image;
FIG. 19 shows a schematic diagram of a 3 × 3 extraction template in the present exemplary embodiment;
FIG. 20 is a schematic structural view showing another camera module in the present exemplary embodiment;
FIG. 21 is a schematic view showing an electronic apparatus in the present exemplary embodiment;
FIG. 22 shows a schematic diagram of a mobile terminal in the present exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the embodiments of the disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
In the related art, the pixels in an image are usually clustered in the RGB color space, and the distribution of illumination information in the image is determined from the clustering result. A common choice is the k-means clustering algorithm; for example, the result of clustering the illumination information in an image using k-means may be as shown in FIG. 1, where the abscissa and ordinate represent the R/G and B/G color values, respectively.
However, when one or more colors occupy a large area of an image, illumination detection by such a clustering algorithm is easily misled by those large areas of color, causing detection to fail. As shown in FIG. 2, part (a) of FIG. 2 is an image whose illumination information is to be detected: it contains only one kind of illumination, but includes a large area of plants of the same color and a large area of ground of the same color. Part (b) of FIG. 2 shows the clustering result, from which it can be seen that the related art identifies the plants and the ground as two kinds of illumination information. The colors of the large-area plants and ground in part (a) thus mislead the clustering algorithm, causing the illumination detection to fail and reducing its accuracy.
In addition, when the related art performs illumination detection with such a clustering algorithm, at least two rounds of clustering are generally required to obtain a reasonably accurate result, so detection takes longer and efficiency is lower.
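For concreteness, the related-art approach can be sketched as follows. This is a minimal illustration assuming NumPy and scikit-learn; the function name, cluster count, and other parameters are illustrative rather than anything specified here.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_illumination(image_rgb: np.ndarray, n_clusters: int = 2) -> np.ndarray:
    """image_rgb: H x W x 3 float array; returns a per-pixel cluster label map."""
    g = np.clip(image_rgb[..., 1], 1e-6, None)           # guard against division by zero
    chroma = np.stack([image_rgb[..., 0] / g,            # R/G coordinate
                       image_rgb[..., 2] / g], axis=-1)  # B/G coordinate
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(chroma.reshape(-1, 2))
    return labels.reshape(image_rgb.shape[:2])
```

Because every pixel of a large uniform-color region lands near the same chromaticity point, such a region can capture a cluster center of its own, which is exactly the failure mode of FIG. 2.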
In view of one or more of the above problems, exemplary embodiments of the present disclosure first provide a lighting detection method for an image. The system architecture of the operating environment of the exemplary embodiment is described below with reference to fig. 3.
Referring to FIG. 3, the system architecture 300 may include a terminal device 310 and a server 320. The terminal device 310 may be an electronic device having a photographing function, such as a smartphone, a tablet computer, or a camera, and may include an image sensor and a plurality of spectral sensors; for example, the image sensor and the spectral sensors may be integrated in the camera module of the terminal device 310 and configured to collect the target image and the spectral data. The server 320 broadly refers to a background system that provides illumination-detection-related services for images in the present exemplary embodiment, for example a server implementing the illumination detection method. Server 320 may be a single server or a server cluster, which is not limited by this disclosure. The terminal device 310 and the server 320 may be connected through a wired or wireless communication link for data interaction.
In one embodiment, the terminal device 310 may first acquire a target image acquired by an image sensor and spectral data of a plurality of detection regions acquired by a plurality of spectral sensors, where each spectral sensor acquires spectral data of one detection region correspondingly, and each detection region corresponds to one sub-region in the target image; and analyzing the spectral data of each detection area to obtain the illumination information of each subarea in the target image.
In one embodiment, the terminal device 310 acquires a target image acquired by an image sensor and spectrum data of a plurality of detection areas acquired by a plurality of spectrum sensors, and then sends the spectrum data of the plurality of detection areas to the server 320, and after receiving the spectrum data of the plurality of detection areas sent by the terminal device 310, the server 320 analyzes the spectrum data of each detection area to acquire illumination information of each sub-area in the target image.
As can be seen from the above, the illumination detection method for an image in the present exemplary embodiment may be performed by the terminal device 310 or the server 320 described above.
The illumination detection method for an image is explained below with reference to fig. 4. Fig. 4 shows an exemplary flow of a lighting detection method for an image, comprising the following steps S410 to S420:
step S410, acquiring a target image acquired by an image sensor and spectrum data of a plurality of detection areas acquired by a plurality of spectrum sensors; each spectral sensor corresponds to a detection area, and each detection area corresponds to a sub-area in the target image;
step S420, analyzing the spectrum data of each detection region to obtain the illumination information of each sub-region in the target image.
Based on the method, on the one hand, each spectral sensor acquires spectral data within its corresponding sub-region of the target image, which improves the acquisition precision of the spectral data; and because the illumination information of each sub-region is obtained from that sub-region's own spectral data, interference from one or more large areas of uniform color in the target image is strongly suppressed, improving the accuracy of illumination detection. On the other hand, the plurality of spectral sensors act on the target image simultaneously and acquire spectral data in parallel, which shortens the detection process, increases the detection speed, and improves the efficiency of illumination detection.
Each step in fig. 4 is explained in detail below.
Referring to fig. 4, in step S410, a target image acquired by an image sensor and spectrum data of a plurality of detection areas acquired by a plurality of spectrum sensors are acquired; each spectral sensor corresponds to a detection region, and each detection region corresponds to a sub-region in the target image.
The image sensor is used to acquire the target image to be detected. In one embodiment, the image sensor may be an R/G/B three-color-channel imaging sensor of the kind used in mobile phones or other imaging devices; its resolution may be, for example, 4000 × 3000 or 2000 × 1500, and it may cover the visible band from 380 to 780 nm (close to the wavelength band perceptible to the human eye). The resolution of the image sensor and other parameters, such as the wavelength range covered, are not particularly limited by the present disclosure.
The target image may be any image that requires illumination detection and may contain one or more light sources; it may be an image already stored in the terminal device 310 or an image newly captured by the imaging device.
The spectral sensor may be a sensor for acquiring spectral data of the target image, and its system configuration may include an optical portion and a control/display portion. The optical portion may include an imaging optical element and a light-splitting element; since splitting the spectrum with a light splitter requires detection, the spectral sensor usually includes a plurality of detectors for detecting a plurality of spectra in an image, and each spectrum in the image may correspond to one channel. The spectral band that can be detected by the spectral sensor is not particularly limited by the present disclosure. For example, a spectral sensor may include one pixel and seven detectors; the seven detectors may together cover wavelengths from 350 to 1000 nm and may detect seven spectra with dominant wavelengths of 430 nm, 450 nm, 500 nm, 530 nm, 575 nm, 650 nm, and 850 nm and half-wave widths of more than 20 nm. Such a spectral sensor can perform light-splitting processing based on one pixel point in the target image to acquire seven-channel spectral data for that pixel point.
In one embodiment, multiple spectral sensors may be employed in combination to enable illumination detection of the entire target image. The number and arrangement of the plurality of spectral sensors are not particularly limited in the present disclosure, and for example, the plurality of spectral sensors may be arranged in six rows and eight columns to perform illumination detection of the entire target image. In one embodiment, the combination of the spectral sensors may be determined based on the maximum number of detected light sources, and the change in the number of spectral sensors mainly affects the number of light sources that can be detected.
The spectral data may be spectral distribution data acquired by the spectral sensor from a corresponding sub-region of the target image. The present disclosure is not limited to specific content of the spectral data, and in one embodiment, the spectral data may include single spectral data and multi-spectral data, and when generating the multi-spectral data, the spectral sensor may correspond to a plurality of detectors, each of which may detect a spectrum of a different wavelength band in the detection region, and the spectra of the different wavelength bands may constitute the multi-spectral data in the detection region.
In addition, the spectral sensor can also be used in a spectral camera. A spectral camera can cover a wider spectral band and detect more spectral types; for example, it may cover the band from 300 nm to 2000 nm and detect 6 to 40 spectral types, and its resolution may be lower than that of the image sensor but higher than QVGA (320 × 240).
The detection regions are image regions detected by the spectral sensors, and in one embodiment, each spectral sensor may collect spectral data of its corresponding detection region. The shape, size, arrangement and number of the detection regions are not particularly limited in this disclosure.
A sub-region of the target image is a part of the target image; for example, the target image may be divided into 64 × 48 sub-regions for illumination detection. The shape, size, arrangement, and number of the sub-regions are not particularly limited in this disclosure. In one embodiment, each detection region corresponds to one sub-region in the target image. For example, 48 spectral sensors can be arranged in a 6 × 8 pattern, and these 48 spectral sensors can be approximated as a spectral camera with 6 × 8 pixels and a viewing angle of 120 degrees. In the present exemplary embodiment, each spectral sensor corresponds to one detection region; as shown in FIG. 5, the target image may be divided into 48 sub-regions according to the distribution of the detection regions, with each detection region corresponding to one sub-region, so that each spectral sensor detects the spectral distribution of one sub-region.
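A minimal sketch of this detection-region-to-sub-region mapping, assuming a regular grid; the helper name and the default 6 × 8 layout are illustrative.

```python
# Map a rows x cols grid of detection regions onto an img_h x img_w target
# image; each grid cell is the sub-region covered by one spectral sensor.
def subregion_bounds(img_h: int, img_w: int, rows: int = 6, cols: int = 8):
    """Yield (row, col, top, bottom, left, right) for each sub-region."""
    for r in range(rows):
        for c in range(cols):
            yield (r, c,
                   r * img_h // rows, (r + 1) * img_h // rows,
                   c * img_w // cols, (c + 1) * img_w // cols)

# For a 3000 x 4000 image and a 6 x 8 grid, each sub-region is 500 x 500.
```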
After the spectrum data of each detection region is acquired, the spectrum data may be analyzed, and with reference to fig. 4, in step S420, the spectrum data of each detection region is analyzed to obtain the illumination information of each sub-region in the target image.
The illumination information may be light source information of each sub-region obtained by analyzing the spectral data of each sub-region, and the specific content of the illumination information is not particularly limited in the present disclosure.
In one embodiment, the illumination information includes a light source classification result; as shown in fig. 6, the analyzing the spectrum data of each detection region to obtain the illumination information of each sub-region in the target image may include the following steps S610 to S620:
step S610, extracting feature data of each detection region according to the spectrum data of each detection region.
Step S620, respectively processing the feature data of each detection region by using a pre-trained light source classification model, to obtain a light source classification result of each sub-region in the target image.
The feature data of each detection region is data derived from that region's spectral data. The present disclosure does not limit the specific content of the feature data; in one embodiment, the feature data may be obtained from the derivative of the spectral data of adjacent wavelength bands.
The light source classification model may be a model that processes the feature data of each detection region to obtain a light source classification result of each sub-region in the target image, and the specific structure of the light source classification model is not particularly limited in this disclosure. In an embodiment, as shown in fig. 7, the light source classification model may include a plurality of hidden layers, a plurality of Batch Normalization (BN) layers, a plurality of activation function layers, and a classification layer.
In one embodiment, the hidden layers of the light source classification model abstract the input feature data into another dimensional space, so the number of hidden-layer nodes can be adjusted according to the input feature data; the BN layer prevents overfitting during light source classification and can be replaced by another layer with the same function, such as a Drop-out layer; and the activation function layer maps the model's inputs to outputs, so activation functions such as ReLU or ReLU6 can be used in the model to implement this mapping.
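A minimal sketch of one possible realization of this architecture, assuming PyTorch; the layer widths, depth, and class count are assumptions, since only hidden layers, BN layers, activation functions, and a classification layer are specified above.

```python
import torch.nn as nn

def make_light_source_classifier(in_features: int = 12, hidden: int = 32,
                                 num_units: int = 3, num_classes: int = 10) -> nn.Sequential:
    """Stack (hidden layer, BN, activation) processing units, then a classification layer."""
    layers = []
    for i in range(num_units):
        layers += [nn.Linear(in_features if i == 0 else hidden, hidden),  # hidden layer
                   nn.BatchNorm1d(hidden),  # curbs overfitting; a Drop-out layer also works
                   nn.ReLU()]               # ReLU here; ReLU6 etc. are equally usable
    layers.append(nn.Linear(hidden, num_classes))  # classification layer
    return nn.Sequential(*layers)
```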
In one embodiment, the light source classification model can be trained on indoor light sources covering a color temperature range of about 2600 to 8000 K, together with outdoor light sources of about 3800 to 6000 K collected over several time intervals such as sunny days, cloudy days, noon, and sunset.
In an embodiment, a light source classification model may be trained for each detection region according to the light source information of each sub-region of the target image together with the indoor and outdoor light sources; that is, the structure of each detection region's model and the feature data it uses may differ, so that a more accurate light source classification result can be obtained for each sub-region of the target image.
In an embodiment, the light source classification models corresponding to each detection region may be executed in parallel, so that the speed of acquiring the light source classification result of each sub-region in the target image may be increased, thereby increasing the illumination detection efficiency.
In another embodiment, a single light source classification model may be trained on the indoor and outdoor light sources and then applied to every detection region, i.e., the light source classification models used for all detection regions share the same structure, to obtain the light source classification result of each sub-region of the target image.
The light source classification result may be light source classification information of each detection region obtained by the light source classification model of the detection region according to the feature data of the sub-region, for example, the light source classification result may include one or more of the indoor light source or the outdoor light source. In one embodiment, the light source classification result may be a set of color index values such as a color temperature value and a color deviation value corresponding to a certain light source. The present disclosure does not specifically limit the content of the light source classification result.
In step S610, the feature data of each detection region may be extracted according to the spectrum data of each detection region, and in one embodiment, as shown in fig. 8, the extracting the feature data of each detection region according to the spectrum data of each detection region may include the following steps S810 to S820:
step S810, preprocessing the spectral data and fitting the reflection spectrum to obtain fitting data;
in step S820, feature data of each detection region is extracted from the fitting data.
The fitting data may be data obtained by fitting a reflection spectrum to the preprocessed data. The present disclosure does not specifically limit the manner and details of obtaining the fitting data. In one embodiment, the discrete spectral data may be expanded into smooth continuous spectral data after the spectral data is acquired to obtain fitting data.
In step S810, the spectral data may be preprocessed and fitted with the reflection spectrum to obtain fitted data, and in one embodiment, the spectral data may include multi-channel response data; the preprocessing of the spectral data may include performing channel pre-correction on the multi-channel response data.
The multi-channel response data is the spectral information for the different wavelength bands of the detection region, output by the spectral sensor after it receives the image data of that region.
In an exemplary embodiment, the multi-channel response data may be corrected by a spectral calibrator. For example, the spectral parameters of the multi-channel response data may first be analyzed to determine whether a peak shift has occurred between channels; a peak-shift correction coefficient is obtained from that shift; and finally the multi-channel response data is corrected by the parameter calibrator according to the coefficient, ensuring the consistency of the data output across channels.
In an embodiment, after the channel pre-correction is performed on the multi-channel response data, the reflection spectrum fitting may be performed on the corrected data, and the discrete multi-channel response data is expanded into continuous fitting data.
After the fitting data is acquired, the feature data of each detection region may be extracted from the fitting data in step S820.
In one embodiment, the fitting data may be obtained by fitting a reflection spectrum to the above multi-channel response data; therefore, in the present exemplary embodiment, the first-order derivative of the response data of adjacent channels may be extracted as the feature data of each detection region. The present disclosure does not specifically limit the manner of extracting the feature data.
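A sketch of steps S810 to S820 under stated assumptions: per-channel gain factors stand in for the channel pre-correction, a cubic spline stands in for the reflection-spectrum fitting, and adjacent differences of the fitted curve give the first-order-derivative features. None of these concrete choices are fixed here.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def extract_features(responses: np.ndarray, wavelengths: np.ndarray,
                     gains: np.ndarray, samples: int = 64) -> np.ndarray:
    """responses, gains: (L,) multi-channel data; wavelengths: (L,) in nm, ascending."""
    corrected = responses * gains                  # channel pre-correction (assumed form)
    fit = CubicSpline(wavelengths, corrected)      # smooth, continuous fitted spectrum
    dense = fit(np.linspace(wavelengths[0], wavelengths[-1], samples))
    return np.diff(dense)                          # first-order derivative features
```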
After the feature data is obtained through the above steps, with continuing reference to fig. 6, in step S620, the feature data of each detection region is respectively processed by using a pre-trained light source classification model, so as to obtain a light source classification result of each sub-region in the target image.
In an embodiment, the feature data may be input into the light source classification model, which performs data dimension mapping, data fitting, and activation-function-based data conversion on it before the converted data enters the classification layer for light source classification. In this exemplary embodiment, a hidden layer performs the dimension mapping, a BN layer implements the data fitting, and a ReLU activation function performs the data conversion; together, the hidden layer, BN layer, and ReLU activation function may form one processing unit, and the model may contain several such units. When the model receives feature data, the data passes through the processing units in sequence, the output of each unit serving as the input of the next, and the result of the final unit is input into the classification layer to obtain the light source classification result of the current sub-region in the target image.
Based on this method, the spectral data of each sub-region in the target image is analyzed in parallel by the light source classification models to obtain the light source classification result of each sub-region. Because the model corresponding to each detection region is adapted to that region's feature data, the network structure is simple and the model runtime short, and the per-region models can be executed in parallel; this improves the accuracy of illumination information detection, increases the detection speed, and improves the overall detection efficiency.
In one embodiment, the illumination information includes a light source classification result; the illumination detection method for an image may further include the following steps S910 to S920:
step S910, converting the light source classification result of each sub-region into a color index value of each sub-region;
step S920, counting color index values of the sub-regions, and determining a light source in the target image according to the result of the counting of the color index values.
The color index value may be one or more of index values corresponding to a certain light source, for example, the color index value may be a color temperature (CCT) and a color deviation value (DUV) corresponding to a certain light source.
In step S910, the light source classification result of each sub-region may be converted into a color index value of that sub-region. In one embodiment, the conversion may be implemented through a light source information database. In this exemplary embodiment, a light source information database may be pre-established to store each light source classification result and its corresponding color index value; when a classification result needs converting, the database is searched according to that result to obtain the corresponding color index value, thereby completing the conversion. The present disclosure does not particularly limit the method for converting the light source classification result into the color index value. For example, an existing light source classification model may also be improved so that it directly outputs a color index value for each light source.
After the color index value of each sub-region in the target image is obtained, in step S920, the color index values of the sub-regions may be counted, and the light source in the target image may be determined according to the statistical result of the color index values of all the sub-regions.
In one embodiment, the color index value may include a color deviation value and a color temperature; the aforementioned counting of the color index values of the sub-regions and the determination of the light sources in the target image according to the statistical result of the color index values may include counting a histogram of color deviation values and a histogram of color temperatures of the sub-regions and determining n light sources in the target image according to the histogram of color deviation values and the histogram of color temperatures; n is a positive integer not less than 2.
In one embodiment, the histograms may be determined according to the ratios of the color deviation values and color temperature values across all sub-regions of the target image. In the present exemplary embodiment, as shown in FIG. 10, the abscissa of each histogram is the value range of the color temperature or color deviation value, and the ordinate is the ratio of the number of sub-regions having a certain value to the total number of sub-regions; for example, if the target image has 48 sub-regions of which 6 have a color temperature of 2000 K, the ratio for 2000 K is 6/48 = 0.125. The abscissa and ordinate of the histograms are not particularly limited in this disclosure.
After the histogram of the color deviation values and the histogram of the color temperatures are obtained, in an embodiment, n color temperature values and n color deviation values which occupy the highest ratio may be selected from the two histograms, and the n color temperature values and the n color deviation values may be combined to determine n sets of color index values corresponding to the light sources of the target image, and then, corresponding n light sources may be obtained according to the n sets of color index values to determine the light sources of the target image. In the present exemplary embodiment, n light sources may be determined according to the n sets of color index values by the above-described light source information database.
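A sketch of this statistic, assuming one CCT value and one DUV value per sub-region; the bin count and n are illustrative.

```python
import numpy as np

def top_n_light_sources(cct: np.ndarray, duv: np.ndarray, n: int = 2):
    """cct, duv: shape (num_subregions,); returns n (CCT, DUV) candidate pairs."""
    cct_counts, cct_edges = np.histogram(cct, bins=20)
    duv_counts, duv_edges = np.histogram(duv, bins=20)
    top_cct = cct_edges[np.argsort(cct_counts)[-n:]]   # left edges of the n fullest bins
    top_duv = duv_edges[np.argsort(duv_counts)[-n:]]
    return list(zip(top_cct, top_duv))                 # n combined color index pairs
```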
In one embodiment, after the color index value corresponding to the light source classification result of each sub-region of the target image is acquired, an averaging filter may be used to process the per-sub-region color index values to determine the light sources of the target image. In this exemplary embodiment, the color index values are filtered with an averaging filter, and if the color index values of adjacent sub-regions are very close, they are treated as the same color index value and hence the same light source. Processing the per-sub-region classification results with an averaging filter means the maximum number of detectable light sources in the target image does not need to be limited. The size of the averaging filter is not particularly limited in the present disclosure; for example, it may be 3 × 3, 2 × 2, or another size.
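A sketch of this mean-filter variant, assuming the per-sub-region color temperatures form a small 2-D map (e.g., 6 × 8) and using SciPy's uniform filter as the 3 × 3 averaging filter.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_color_index_map(cct_map: np.ndarray) -> np.ndarray:
    """cct_map: 2-D array of per-sub-region color temperatures."""
    return uniform_filter(cct_map, size=3, mode="nearest")  # 3 x 3 mean filter
```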
In an embodiment, the illumination detection method for an image may further include performing color correction processing on the target image according to the illumination information of each sub-region in the target image. In the exemplary embodiment, according to the illumination information of each sub-region, the color deviation of the target image generated by various illumination information can be corrected, so that the display effect of the target image in the terminal is consistent with the visual effect of human eyes, and the user experience is improved.
Based on the method, the spectral data of each subregion in the target image is acquired according to the spectral sensor, the spectral data of each subregion is analyzed by using the light source classification model of each detection region to acquire the light source classification result of each subregion, and then the color index values corresponding to the light source classification results of all subregions in the target image are counted to determine the light source in the target image, so that the interference of large-area color confusion in the target image on the light source detection is avoided, the light source detection precision and the detection speed are improved, and the light source detection efficiency is improved.
In one embodiment, an exemplary flow of the illumination information detection method for an image of the present disclosure is shown in fig. 11, and the illumination information detection for the image may be implemented according to steps S1101 to S1109.
Step S1101, acquiring a target image;
step S1102, enabling the detection areas of the multiple spectrum sensors to correspond to sub-areas of the target image one by one;
step S1103, the spectrum sensor collects the spectrum data of the corresponding detection area;
step S1104, preprocessing and fitting the reflection spectrum to the spectrum data of each detection area;
step S1105, extracting the characteristics of the fitted data to obtain the characteristic data of each detection area;
step S1106, processing the characteristic data by using a light source classification model to obtain a light source classification result of each subarea;
step S1107, converting the light source classification result of each sub-region into the color deviation value and the color temperature of the sub-region;
step S1108, counting the histogram of the color deviation value and the histogram of the color temperature of each subregion;
step S1109, determining the light source in the target image according to the color temperature and the color deviation value which accounts for the highest ratio in the histogram of the color deviation value and the histogram of the color temperature.
Exemplary embodiments of the present disclosure also provide a light detection apparatus for an image. As shown in fig. 12, the illumination detection apparatus 1200 for an image may include:
a data acquisition module 1210 configured to acquire a target image acquired by an image sensor and spectral data of a plurality of detection regions acquired by a plurality of spectral sensors; each spectral sensor corresponds to a detection area, and each detection area corresponds to a sub-area in the target image;
the illumination information detection module 1220 is configured to analyze the spectral data of each detection region to obtain illumination information of each sub-region in the target image.
In one embodiment, the illumination information includes a light source classification result; the analyzing the spectrum data of each detection region to obtain the illumination information of each sub-region in the target image may include:
extracting characteristic data of each detection area according to the spectral data of each detection area;
and respectively processing the characteristic data of each detection area by using a pre-trained light source classification model to obtain a light source classification result of each subarea in the target image.
In an embodiment, the extracting the feature data of each detection region according to the spectrum data of each detection region may include:
preprocessing the spectral data and fitting the reflection spectrum to obtain fitting data;
feature data of each detection region is extracted from the fitting data.
In one embodiment, the spectral data includes multi-channel response data; the preprocessing of the spectral data may include:
and performing channel pre-correction on the multi-channel response data.
In one embodiment, the illumination information includes a light source classification result, and the above apparatus may be further configured to:
convert the light source classification result of each sub-region into a color index value of that sub-region;
count the color index values of the sub-regions, and determine the light sources in the target image according to the statistical result of the color index values.
In one embodiment, the color index value includes a color deviation value and a color temperature; the counting the color index values of the sub-regions and determining the light source in the target image according to the statistical result of the color index values may include:
counting the color deviation value histogram and the color temperature histogram of each sub-region, and determining n light sources in the target image according to the color deviation value histogram and the color temperature histogram; n is a positive integer not less than 2.
In one embodiment, the above apparatus is further configured to:
perform color correction processing on the target image according to the illumination information of each sub-region in the target image.
The specific details of each part in the above device have been described in detail in the method part embodiments, and thus are not described again.
An exemplary embodiment of the present disclosure provides a camera module. Referring to fig. 13, the camera module 1300 may include: an image filter 1310, an image sensor 1320, a K-bank spectral splitter 1330, and a K-bank spectral sensor 1340. Each component is explained below.
The image filter 1310 is a filter formed by arranging monochromatic filters in an array, and may be located on an incident light path of the image sensor 1320, so that the image sensor 1320 can receive monochromatic light passing through the image filter 1310. For example, the image filter 1310 may be composed of RGB monochromatic filters, which can filter out light of three different spectral ranges of R, G, and B (i.e., red, green, and blue monochromatic light). The present disclosure does not limit the arrangement of the image filter 1310.
In one embodiment, the image filter 1310 may be a bayer filter, such as the bayer filter illustrated in fig. 14A, which uses a conventional bayer array arrangement, or the bayer filter illustrated in fig. 14B, which uses a quad-bayer array arrangement.
The image sensor 1320 is a sensor that converts an optical signal into an electrical signal and forms an image by quantitatively characterizing the optical signal. The present disclosure is not limited to a specific type of image sensor 1320, which may be a CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge Coupled Device) sensor. In the present exemplary embodiment, the image sensor 1320 is located on the outgoing light path of the image filter 1310 and is configured to sense the optical signal passing through the image filter 1310 and generate a target image of the photographic subject. The subject is the scene, object, person, or the like located in front of the camera module 1300, and the target image is the image captured by the camera module 1300; it may be an original image, such as a RAW image, or an RGB or YUV image processed by an ISP (Image Signal Processor).
Generally, the image sensor 1320 is formed by arranging a number of photosensitive elements in an array, where each photosensitive element corresponds to a pixel of the target image. The number of photosensitive elements may represent the resolution of the image sensor 1320, for example, the photosensitive elements are arranged in an H × W array, H represents the number of rows and W represents the number of columns, and the resolution of the image sensor 1320 may be H × W, and the size of the generated target image is H × W, where H represents the image height and W represents the image width. Illustratively, H is 3000 and W is 4000.
The spectrum splitter 1330 for separating the light of a specific wavelength band from the incident light may be located on the incident light path of the spectrum sensor 1340, so that the spectrum sensor 1340 can receive the light of the specific wavelength band after passing through the spectrum splitter 1330, thereby sensing the spectrum data. In contrast, image filter 1310 typically provides only red, green, and blue monochromatic light in the visible range, while spectral splitter 1330 provides a greater variety of different wavelength bands over a larger spectral range (e.g., 350-1000 nm, covering the ultraviolet to infrared wavelength bands). The number of spectral categories of the spectral splitter 1330 is referred to as the number of channels of the spectral splitter 1330 or the spectral sensor 1340.
Spectral splitter 1330 can be a filter, a prism, or a similar type of optical device. Taking a filter as an example, in one embodiment each set of spectral splitters 1330 may include L filters with different peak wavelengths (or center wavelengths), so that incident light is separated into L different wavelength bands after passing through the spectral splitter 1330, and the number of channels of the spectral splitter 1330 is L. If L is 1, i.e., the number of channels is 1, the splitter is a single-spectrum splitter; if L is a positive integer not less than 2, i.e., the number of channels is greater than or equal to 2, it is a multispectral splitter. Illustratively, L may be 13.
In one embodiment, the L filters in each set of spectral splitters may be arranged in a p × q array, p representing the number of rows, q representing the number of columns, and L = p × q. Fig. 15 shows a schematic diagram of a set of spectral splitters 1330 with 3 × 4 channels, which includes 12 filters arranged in a 3 × 4 array, respectively denoted as C1 to C12, for filtering light of C1 to C12 channels, and the Peak Wavelength (Peak Wavelength) and Full Width at Half Maximum (FWHM) of light of each channel can be referred to table 1, which covers 12 important bands in the range of 350 to 1000 nm.
TABLE 1: peak wavelength and full width at half maximum (FWHM) of the light of channels C1 to C12
The spectrum sensor 1340 may be located on the exit light path of the spectrum splitter 1330 for sensing the light signal passing through the spectrum splitter 1330 to obtain the spectrum data of a specific detection area in the photographic subject. The present disclosure is not limited to a specific type of the spectrum sensor 1340, and may be a CMOS or CCD sensor, which may be the same type as the image sensor 1320, or may be different.
In one embodiment, each set of spectrum sensor 1340 may include L light sensing elements for sensing the optical signals filtered by the corresponding L filters of the spectrum splitter 1330, respectively, to obtain response data of L channels, i.e., spectrum data of different L wavelength bands. If L is 1, namely the number of channels of the spectrum sensor is 1, the spectrum sensor is a single spectrum sensor; if L is a positive integer not less than 2, that is, the number of channels of the spectral sensor is greater than or equal to 2, the spectral sensor is a multispectral sensor.
In one embodiment, the L photosensitive elements in each set of spectrum sensors may be arranged in an array of p × q, p representing the number of rows, q representing the number of columns, and L = p × q. For example, referring to fig. 15, the spectrum sensor 1340 may include 3 × 4 photo-sensors, respectively denoted as Z1 to Z12, which correspond to the filters C1 to C12 of the spectrum splitter 1330 in a one-to-one manner, and respectively receive the optical signals of 12 channels, so as to obtain response data of 12 channels.
It should be noted that the spectral splitter 1330 and the spectral sensor 1340, taken as a whole, may themselves be referred to as a spectral sensor; for consistency of presentation they are still shown here as two components.
The camera module 1300 may further include a bracket 1350 for fixing one or more of the image filter 1310, the image sensor 1320, the spectral splitter 1330, and the spectral sensor 1340.
In the present exemplary embodiment, K sets of spectrum splitters 1330 and K sets of spectrum sensors 1340 can be disposed in the camera module 1300, where K is a positive integer not less than 2, that is, the camera module 1300 includes at least two sets of spectrum splitters 1330 and at least two sets of spectrum sensors 1340. Illustratively, K may be 12 or 48.
Each set of spectral splitters 1330 corresponds to a set of spectral sensors 1340 and to a detection region. The detection area is a local area in the object to be photographed, and in the photographing process, light reflected by each detection area is incident into the camera module 1300, is split by the corresponding set of spectrum splitter 1330, and is finally incident into the corresponding set of spectrum sensor 1340, so that the spectrum sensor 1340 senses the light signal reflected by the detection area and split by the spectrum splitter 1330, and spectral data of the detection area is obtained. The spectrum splitter 1330 and the spectrum sensor 1340 may be disposed correspondingly, for example, the positions of the spectrum splitter 1330 and the spectrum sensor 1340 may be disposed in a one-to-one correspondence along the optical axis direction, so that each group of the spectrum sensor 1340 receives the light transmitted by the corresponding group of the spectrum splitter 1330 (in some cases, each group of the spectrum sensor 1340 may also receive a small amount of light transmitted by the spectrum splitter 1330 at the adjacent position, and the influence thereof may be eliminated by an algorithm).
Each detection region may correspond to a sub-region in the target image. For example, the K groups of spectrum sensors 1340 respectively detect spectrum data of K detection areas, which are equivalent to dividing the subject into K blocks, and the image sensor 1320 captures the same subject to obtain a target image, which may also be divided into K sub-areas, where each detection area corresponds to one sub-area.
Therefore, the spectral data of K sub-regions in the target image are actually obtained by detecting the spectral data of K detection regions. On the one hand, the camera module in the exemplary embodiment can provide spectral data on the basis of conventionally shooting a target image, and is beneficial to improving the image quality or widening the application range of the image. For example, the illumination condition of an image shooting scene can be fully characterized based on the spectrum data, and further, the image can be subjected to optimization processing such as accurate color correction; for example, the RGB data and the spectral data based on the target image can realize more accurate target detection, industrial monitoring, and the like, and can be applied to the fields of surveying and mapping, industrial detection, and the like. On the other hand, the camera module is provided with a plurality of groups of spectrum light splitters and a plurality of groups of spectrum sensors to refine the localized spectrum data of the detection sub-regions, so that the target image can be optimized more finely, for example, different illumination conditions of different sub-regions in the target image can be respectively represented, different processing is adopted for different sub-regions, or more refined information is provided, for example, based on the spectrum data of different sub-regions in the target image, temperature information or defect information of different parts in a shooting scene can be monitored, and the like.
K may be regarded as the resolution of the spectral sensors 1340: each set of spectral sensors 1340 outputs the spectral data of one detection region, which can be regarded as one pixel of spectral data, so the K sets of spectral sensors 1340 output spectral data of K pixels. That is, one pixel of the spectral data corresponds to one sub-region of the target image. In the present exemplary embodiment, K < H × W, i.e., the resolution of the spectral sensors 1340 is lower than that of the image sensor 1320. The image sensor 1320 is used for imaging, and a high-resolution image sensor 1320 can produce a high-definition target image; the spectral sensors 1340 are used for detecting spectral data, which need not be refined down to the level of individual image pixels. Detecting the spectral data of K sub-regions of the target image through the K sets of spectral sensors 1340, and thereby characterizing the differences between different parts of the image, is sufficient for practical use. The camera module 1300 in this exemplary embodiment therefore combines the high-definition imaging of the image sensor 1320 with the spectral detection of the spectral sensors 1340, obtaining both a high-definition target image and relatively rich, fine-grained spectral information. Moreover, since increasing the resolution of the spectral sensors 1340 increases the manufacturing cost of the camera module 1300, it is advantageous to keep their resolution lower (generally much lower) than that of the image sensor 1320.
In one embodiment, the K sets of spectral sensors 1340 may be arranged in an m × n array, where m is the number of rows and n the number of columns, so that K = m × n. It should be understood that the K sets of spectral splitters 1330 may likewise be arranged in an m × n array. Fig. 16 shows a schematic diagram of an array of spectral sensors 1340 with m = 6 and n = 8; that is, the camera module 1300 includes 48 sets of spectral splitters 1330 and 48 sets of spectral sensors 1340. Each set of spectral sensors 1340 may in turn include 3 × 4 photosensitive elements to output 12 channels of response data.
In one embodiment, H/W = m/n, i.e., the image sensor 1320 has the same aspect ratio as the m × n array of spectral sensors 1340. For example, with an image sensor 1320 resolution of 3000 × 4000 and 48 sets of spectral sensors 1340 arranged in a 6 × 8 array, the two have the same height-to-width ratio. This makes it easier to map the m × n detection regions to sub-regions of the target image: each set of spectral sensors 1340 corresponds to a detection region that maps to a 500 × 500 sub-region of the target image, and the 6 × 8 detection regions correspond respectively to 6 × 8 sub-regions of the target image.
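For illustration, when the grid divides the image evenly, the detection-region-to-sub-region mapping can be computed as in the following minimal sketch; the function name and the even-division assumption are illustrative, not part of this disclosure.

```python
# Minimal sketch of the detection-region-to-sub-region mapping, assuming
# the m x n spectral-sensor grid is aligned with the target image and
# divides it evenly (H/W = m/n as described above).

def subregion_bounds(H, W, m, n):
    """Return the pixel rectangle (top, left, bottom, right) of the
    target-image sub-region for each of the m*n detection regions."""
    assert H % m == 0 and W % n == 0, "grid must divide the image evenly"
    h, w = H // m, W // n
    return {
        (i, j): (i * h, j * w, (i + 1) * h, (j + 1) * w)
        for i in range(m)
        for j in range(n)
    }

# With a 3000 x 4000 image and a 6 x 8 sensor array, each detection region
# maps to a 500 x 500 sub-region, matching the example in the text.
bounds = subregion_bounds(3000, 4000, 6, 8)
assert bounds[(0, 0)] == (0, 0, 500, 500)
```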
In one embodiment, if the spectral data is used to detect the illumination of the subject, there are generally at most 5 light sources in the scene. Referring to fig. 17, when the target image is captured, the most dispersed arrangement of 5 light sources in the frame places them at the four corners and the center, in which case 3 × 3 = 9 windows suffice to fully separate the regions with different illumination conditions in the frame. In other cases, two or more of the 5 light sources are clustered more closely, and fewer than 9 windows suffice. Therefore, when arranging the spectral sensors 1340 with illumination detection in mind, both m and n can be set to positive integers not less than 3, so that the target image can be divided into at least 9 sub-regions; detecting the spectral data of each sub-region, and from it the illumination conditions of each sub-region, then fully captures the different local illumination conditions of the subject and meets practical requirements.
In general, the higher the resolution of the spectral sensors 1340, i.e., the larger m and n, the finer the detection of local spectral data. However, considering the effect on image processing, higher resolution is not always better. For example, consider a reference image, captured under a single light source, that contains large areas of uniform color: in an indoor scene, large solid-colored surfaces such as walls and ceilings can interfere with light-source detection, since the observed color is a mixture of the surface color and the light source color. Suppose the reference image is divided into image blocks, e.g., 6 × 8 blocks. The FOV (Field of View) of each image block is small and easily falls entirely within such a large uniform-color area. Fig. 18A shows the image obtained by detecting global spectral data for the reference image and applying global color correction; fig. 18B shows the image obtained by detecting local spectral data (i.e., spectral data detected individually for each image block) and applying local color correction. By comparison, a color anomaly appears in fig. 18B: because some image blocks consist largely of a yellow wall that is mistaken for the light source, the local color correction renders those areas noticeably colder. To reduce the risk of illumination-analysis errors causing image-processing anomalies, the resolution of the spectral sensors 1340 can be kept no higher than 6 × 8.
In addition, as shown in fig. 19, when performing operations such as feature extraction on the target image, a 3 × 3 extraction template (i.e., a window of 3 × 3 sub-regions) may be used, making it convenient to filter, average, or extract edges from the RGB or spectral data within a 3 × 3 neighborhood, which benefits refinement of the illumination analysis. To facilitate the use of a 3 × 3 extraction template, the resolution of the spectral sensors 1340 can be set to 6 × 8, i.e., the camera module 1300 can include 48 sets of spectral sensors 1340 arranged in a 6 × 8 array.
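For illustration, a 3 × 3 mean filter over the low-resolution spectral grid might look like the following sketch; the (6, 8, 12) data shape and the random values are assumptions used only to make the example runnable.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Hedged sketch of the 3 x 3 extraction template: spectral data from the
# 6 x 8 sensor array is treated as a low-resolution "image" with 12
# channels, and a 3 x 3 mean filter smooths each channel spatially.
spectral = np.random.rand(6, 8, 12)  # (rows, cols, channels), illustrative

# size=1 on the channel axis leaves channels untouched; other 3 x 3
# operations (median filters, edge kernels) fit the same pattern.
smoothed = uniform_filter(spectral, size=(3, 3, 1), mode="nearest")
print(smoothed.shape)  # (6, 8, 12)
```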
In one embodiment, each set of spectral splitters 1330 includes L optical filters with different peak wavelengths, and each set of spectral sensors 1340 includes L photosensitive elements that respectively sense the light signals filtered by the corresponding L optical filters; the spectral data of each detection region then comprises response data for L channels. Here L is a positive integer not less than 2, i.e., the spectral splitters 1330 and spectral sensors 1340 have at least 2 channels. Referring to fig. 15, the L optical filters may be arranged in a p × q array, as may the L photosensitive elements, where p is the number of rows and q the number of columns.
In one embodiment, considering that the illumination intensity of a typical light source attenuates gradually in a circular diffusion pattern, p = q may be chosen; that is, each set of spectral sensors 1340 arranges its photosensitive elements in a square array that conforms to this circular diffusion, so the spectral data of each detection region (or sub-region) can be detected more accurately.
In one embodiment, the number of channels of the spectral sensors 1340 may be determined according to the usage scenario of the camera module 1300. For example, in the remote sensing field, the number of channels may be set to 40, covering a spectral range of 350 to 2000 nm.
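For illustration only, if the 40 channels were spaced evenly across 350 to 2000 nm, their peak wavelengths could be laid out as in the following sketch; the actual filter placement is not specified in this disclosure and even spacing is an assumption.

```python
import numpy as np

# Illustrative assumption: 40 channel peak wavelengths spread evenly over
# the 350-2000 nm remote-sensing range mentioned above.
peaks = np.linspace(350, 2000, num=40)
print(peaks[:3])  # [350.         392.30769231 434.61538462]
```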
In one embodiment, referring to fig. 20, the camera module 1300 may further include a first lens 1360 and a second lens 1370. The first lens 1360 is disposed on the incident light path of the Bayer filter 1310: light passing through the first lens 1360 strikes the Bayer filter 1310 and then the image sensor 1320. The second lens 1370 is disposed on the incident light path of the spectral splitters 1330: light passing through the second lens 1370 strikes the spectral splitters 1330 and then the spectral sensors 1340. Providing two lenses thus increases the total amount of incoming light, and because the Bayer filter 1310 and the spectral splitters 1330 correspond to different lenses, mutual interference between image formation and spectral-data detection is reduced, improving both image quality and the accuracy of the spectral data.
In one embodiment, the camera module 1300 may include a first camera and a second camera. The first camera is for imaging and includes the first lens 1360, the Bayer filter 1310, and the image sensor 1320; the second camera is for detecting spectral data and includes the second lens 1370, the K sets of spectral splitters 1330, and the K sets of spectral sensors 1340. In actual use, either camera can be activated on its own according to specific requirements, further broadening the usage scenarios of the camera module 1300.
In one embodiment, the first camera and the second camera may be calibrated to obtain calibration parameters between them, and these parameters may be used to determine the correspondence between each set of spectral sensors 1340 and a sub-region of the image. When the camera module performs actual shooting, the spectral data from each set of spectral sensors 1340 can then be associated with its sub-region of the target image, enabling faster and higher-quality image optimization or detection.
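For illustration, a simple scale-and-offset calibration model relating the two cameras' fields of view might be applied as in the following sketch; the affine model, function name, and all parameter values are assumptions (a real calibration could use a full homography instead).

```python
# Hedged sketch of using inter-camera calibration parameters to associate
# each spectral-sensor grid cell with a target-image sub-region, assuming
# a simple per-axis scale and pixel offset between the two cameras.

def sensor_to_subregion(i, j, scale, offset, h, w):
    """Map spectral-sensor grid cell (i, j) to a pixel rectangle in the
    target image. scale/offset come from calibration; h, w are the nominal
    sub-region height and width (all names here are assumed)."""
    top = int(i * h * scale[0] + offset[0])
    left = int(j * w * scale[1] + offset[1])
    return (top, left, top + int(h * scale[0]), left + int(w * scale[1]))

# e.g. near-identical fields of view with a small pixel offset:
print(sensor_to_subregion(0, 0, scale=(1.0, 1.0), offset=(12, 8), h=500, w=500))
# (12, 8, 512, 508)
```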
Exemplary embodiments of the present disclosure also provide an electronic device. Referring to fig. 21, the electronic device 2100 may include a housing 2110, a main circuit board 2120, and a camera module 2130. The main circuit board 2120 is located within the housing 2110. The camera module 2130 may be any camera module of the present exemplary embodiment, such as the camera module 1300 described above. The camera module 2130 is electrically connected to the main circuit board 2120 to enable signal or data transmission between them. For example, the camera module 2130 may be mounted on the main circuit board 2120, or connected to it through a medium such as a flexible circuit board to form the electrical connection.
For example, the electronic device 2100 may be a mobile phone, a camera, a tablet computer, a wearable device, an unmanned aerial vehicle, or another device requiring a photographing function. In the camera module 2130, the area of each set of spectral sensors can be kept within 1 cm × 1 cm, so the overall volume of the camera module 2130 is not excessive and integrating it into the electronic device 2100 does not occupy too much space. With the camera module 2130 integrated, the image processing performance of the electronic device 2100 can be improved, or the electronic device 2100 can implement more functions based on image and spectral detection, such as surveying, mapping, and industrial monitoring.
In one embodiment, the electronic device 2100 may further include a main processor and an Image Signal Processor (ISP) (not shown), both electrically connected to the main circuit board 2120. Illustratively, the main processor may be provided on the main circuit board 2120 and may be a CPU (Central Processing Unit), an AP (Application Processor), or the like. The ISP may be provided on the main circuit board 2120 as a device separate from the camera module 2130, or integrated into the camera module 2130. The target image generated by the image sensor of the camera module 2130 may be a RAW image input to the ISP for processing, while the spectral data detected by the spectral sensors of the camera module may be input to the main processor. For example, the spectral sensors may be connected to the main processor through I2C (Inter-Integrated Circuit), SPI (Serial Peripheral Interface), or similar buses; the volume of spectral data is usually much smaller than that of image data, making it well suited to transmission over I2C- and SPI-related protocols. The processing flow of the spectral data can thus bypass the ISP: the ISP need not be modified to handle spectral data, and the spectral data need not occupy an interface of the ISP (such as a MIPI, Mobile Industry Processor Interface), which reduces the manufacturing cost of the electronic device 2100.
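As a rough illustration of this bypass path, the sketch below reads a small block of spectral data over I2C using the smbus2 Python library on a Linux host; the bus number, device address, register layout, and 12-channel 16-bit format are all assumptions, not details from this disclosure or any real sensor's datasheet.

```python
from smbus2 import SMBus

# Hedged sketch: pull 12 channels of spectral data over I2C, bypassing the
# ISP as described above. Address and register values are hypothetical.
I2C_BUS = 1
SENSOR_ADDR = 0x39      # hypothetical device address
DATA_REG = 0x10         # hypothetical start register of channel data

with SMBus(I2C_BUS) as bus:
    raw = bus.read_i2c_block_data(SENSOR_ADDR, DATA_REG, 24)  # 12 x 2 bytes
    # Assemble big-endian 16-bit channel responses (assumed layout).
    channels = [(raw[2 * k] << 8) | raw[2 * k + 1] for k in range(12)]
print(channels)
```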
Referring to fig. 22, an electronic device in the form of a mobile terminal is exemplified. It should be understood that the electronic device 2200 shown in fig. 22 is merely an example and should not limit the functionality or scope of the disclosed embodiments.
Referring to fig. 22, the electronic device 2200 may include: a processor 2201, a memory 2202, a mobile communication module 2204, a wireless communication module 2205, a display 2206, a camera module 2207, an audio module 2208, a power module 2209 and a sensor module 2210.
The processor 2201 may include one or more processing units. For example, the processor 2201 may include a CPU (Central Processing Unit), an AP (Application Processor), a modem processor, a DPU (Display Processing Unit), a GPU (Graphics Processing Unit), an ISP (Image Signal Processor), a controller, an encoder, a decoder, a DSP (Digital Signal Processor), a baseband processor, and/or an NPU (Neural-Network Processing Unit). In one embodiment, the sensor module 2210 acquires a target image and acquires spectral data of a plurality of detection regions through a plurality of spectral sensors, where each spectral sensor corresponds to one detection region and each detection region corresponds to one sub-region of the target image. The CPU receives and analyzes the spectral data of each detection region to obtain the illumination information of each sub-region of the target image; having obtained this illumination information, the CPU may determine the light source of the target image from a statistical summary of the per-sub-region illumination information.
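As an illustration of that statistical step, the sketch below histograms per-sub-region color temperatures and reads the dominant light sources off the most populated bins; the bin width, the two-source synthetic data, and all numeric values are assumptions, not figures from this disclosure.

```python
import numpy as np

# Hedged sketch: estimate light sources from per-sub-region color
# temperatures via a histogram (a color deviation value could be handled
# the same way, per the claims below).

def dominant_color_temps(cct_per_subregion, bins, top_n=2):
    """Histogram the per-sub-region color temperatures and return the
    centers of the top_n most populated bins as estimated light sources."""
    counts, edges = np.histogram(cct_per_subregion, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    return centers[np.argsort(counts)[::-1][:top_n]]

# 6 x 8 = 48 sub-regions: mostly ~5200 K daylight plus a ~2750 K lamp area.
rng = np.random.default_rng(0)
cct = np.concatenate([rng.normal(5200, 100, 40), rng.normal(2750, 80, 8)])
print(dominant_color_temps(cct, bins=np.arange(2000, 8001, 500)))
# e.g. [5250. 2750.]
```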
The memory 2202 may be used to store computer-executable program code, which may include instructions. The processor 2201 implements various functional applications and data processing of the electronic device 2200 by executing the instructions stored in the memory 2202. The memory 2202 may also store application data and various intermediate data; for example, it may store the images, video, and spectral data described above.
The communication function of the electronic device 2200 may be implemented by the mobile communication module 2204, the antenna 1, the wireless communication module 2205, the antenna 2, a modem processor, a baseband processor, etc. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. The mobile communication module 2204 may provide 3G, 4G, 5G, etc. mobile communication solutions applied to the electronic device 2200. The wireless communication module 2205 may provide wireless communication solutions for wireless local area network, bluetooth, near field communication, etc. applied to the electronic device 2200.
The display screen 2206 implements display functions, such as presenting a user interface, images, and video; in one embodiment, it may display the result of color-correcting the target image based on the illumination detection result of the embodiments of the present disclosure. The camera module 2207 implements shooting functions, such as capturing images and video; for example, it may include an image sensor and the spectral sensors described above, where the image sensor collects the target image and the combination of multiple spectral sensors yields more accurate spectral data for it, improving both the accuracy and the efficiency of illumination detection. The audio module 2208 implements audio functions, such as playing audio and capturing voice. The power module 2209 implements power-management functions, such as charging the battery, powering the device, and monitoring battery status. The sensor module 2210 may include one or more sensors for assessing various aspects of the state of the electronic device 2200.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to exemplary embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in a single module or unit; conversely, the features and functions of one module or unit described above may be further divided so as to be embodied by a plurality of modules or units.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method, or program product. Accordingly, various aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system." Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. The specification and examples are to be considered exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (10)

1. A method for illumination detection of an image, comprising:
acquiring a target image acquired by an image sensor and spectral data of a plurality of detection areas acquired by a plurality of spectral sensors; each spectral sensor corresponds to a detection area, and each detection area corresponds to a sub-area in the target image;
and analyzing the spectral data of each detection area to obtain illumination information of each sub-area in the target image.
2. The method of claim 1, wherein the illumination information comprises a light source classification result; and the analyzing the spectral data of each detection area to obtain the illumination information of each sub-area in the target image comprises:
extracting feature data of each detection area according to the spectral data of each detection area;
and respectively processing the characteristic data of each detection area by using a pre-trained light source classification model to obtain a light source classification result of each sub-area in the target image.
3. The method of claim 2, wherein said extracting feature data of said each detection region from said spectral data of said each detection region comprises:
preprocessing the spectral data and fitting the reflection spectrum to obtain fitting data;
extracting feature data of each detection region from the fitting data.
4. The method of claim 3, wherein the spectral data comprises multichannel response data; the preprocessing the spectral data comprises:
and performing channel pre-correction on the multi-channel response data.
5. The method of claim 1, wherein the illumination information comprises a light source classification result; and the method further comprises:
converting the light source classification result of each sub-area into a color index value of each sub-area;
and counting the color index values of the sub-regions, and determining the light source in the target image according to the counting result of the color index values.
6. The method of claim 5, wherein the color index value comprises a color deviation value and a color temperature; and the counting the color index values of the sub-regions and determining the light source in the target image according to the counting result of the color index values comprises:
counting the histogram of the color deviation value and the histogram of the color temperature of each subarea, and determining n light sources in the target image according to the histogram of the color deviation value and the histogram of the color temperature; n is a positive integer not less than 2.
7. The method of claim 1, further comprising:
and carrying out color correction processing on the target image according to the illumination information of each subarea in the target image.
8. An illumination detection apparatus for an image, comprising:
a data acquisition module configured to acquire a target image acquired by the image sensor and spectral data of a plurality of detection areas acquired by the plurality of spectral sensors; each spectral sensor corresponds to a detection area, and each detection area corresponds to a sub-area in the target image;
and the illumination information detection module is configured to analyze the spectral data of each detection area to obtain illumination information of each sub-area in the target image.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 7.
10. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1 to 7 via execution of the executable instructions.