CN113240759A - Visibility determination method, device, equipment and storage medium based on color temperature


Info

Publication number
CN113240759A
Authority
CN
China
Prior art keywords
visibility
image
training sample
target
loss function
Prior art date
Legal status
Pending
Application number
CN202110615340.5A
Other languages
Chinese (zh)
Inventor
周凯艳
何娜
闫正
Current Assignee
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Eye Control Technology Co Ltd filed Critical Shanghai Eye Control Technology Co Ltd
Priority to CN202110615340.5A
Publication of CN113240759A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning


Abstract

The embodiment of the invention discloses a visibility determination method, device, equipment and storage medium based on color temperature. The method includes: acquiring an image to be recognized and the optical parameters corresponding to that image, where the optical parameters include at least the color temperature; inputting the image to be recognized and the optical parameters as input data into a predetermined target visibility prediction model, where the loss function of the target visibility prediction model is related to the standard visibility in the training samples; and predicting visibility from the output of the target visibility prediction model to obtain the target visibility of the image to be recognized. This solves the problem of inaccurate visibility values calculated from atmospheric transmittance. Because the target visibility prediction model is trained in advance and, at prediction time, takes the image to be recognized and the optical parameters together as model input, the accuracy of visibility recognition under different color temperatures is improved and the negative effects of low visibility accuracy are reduced.

Description

Visibility determination method, device, equipment and storage medium based on color temperature
Technical Field
The embodiment of the invention relates to data processing technology, and in particular to a visibility determination method, device, equipment and storage medium based on color temperature.
Background
Atmospheric visibility matters greatly for human life; low visibility in particular has many adverse effects on navigation, aviation, road traffic and daily life, so measuring atmospheric visibility accurately is of great significance. Visibility measurement methods include visual estimation, instrumental measurement and video-based detection. Visual estimation is the most basic observation method: professional observers judge visibility by how well ground markers can be seen, but the approach is influenced by subjective factors, so observation standards cannot be unified and measurement errors are large. Instrumental measurement uses automatic instruments such as atmospheric transmissometers and laser visibility meters; it is objective, but the instruments are expensive and complex to calibrate.
At present, widely used video visibility detection methods include the double-luminance difference method, dark-channel-based visibility detection and visibility regression. However, because of the influence of the solar altitude angle, the atmospheric transmittance differs at different color temperatures, so the visibility values calculated from atmospheric transmittance also differ even when the actual visibility level has not changed (for example, under fog-free conditions). Existing visibility detection methods do not consider the influence of color temperature on the detected visibility, so the detected visibility has low accuracy, which brings great inconvenience to people's daily lives.
Disclosure of Invention
The invention provides a visibility determination method, a visibility determination device, visibility determination equipment and a visibility determination storage medium based on color temperature, which are used for accurately predicting the visibility of images at different color temperatures.
In a first aspect, an embodiment of the present invention provides a visibility determination method based on color temperature, where the visibility determination method based on color temperature includes:
acquiring an image to be identified and optical parameters corresponding to the image to be identified, wherein the optical parameters at least comprise color temperature;
inputting the image to be recognized and the optical parameters into a predetermined target visibility prediction model as input data, wherein a loss function of the target visibility prediction model is related to standard visibility in a training sample;
and predicting the visibility according to the output result of the target visibility prediction model to obtain the target visibility of the image to be recognized.
In a second aspect, an embodiment of the present invention further provides a visibility determination apparatus based on color temperature, where the visibility determination apparatus based on color temperature includes:
the device comprises an acquisition module, a recognition module and a processing module, wherein the acquisition module is used for acquiring an image to be recognized and optical parameters corresponding to the image to be recognized, and the optical parameters at least comprise color temperature;
the input module is used for inputting the image to be recognized and the optical parameters into a predetermined target visibility prediction model as input data, wherein a loss function of the target visibility prediction model is related to standard visibility in a training sample;
and the prediction module is used for predicting the visibility according to the output result of the target visibility prediction model to obtain the target visibility of the image to be identified.
In a third aspect, an embodiment of the present invention further provides a computer device, where the computer device includes:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement a visibility determination method based on color temperature as described in any one of the embodiments of the present invention.
In a fourth aspect, the embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a visibility determination method based on color temperature as described in any one of the embodiments of the present invention.
The embodiments of the invention provide a visibility determination method, device, equipment and storage medium based on color temperature. The method includes: acquiring an image to be recognized and the optical parameters corresponding to that image, where the optical parameters include at least the color temperature; inputting the image to be recognized and the optical parameters as input data into a predetermined target visibility prediction model, where the loss function of the target visibility prediction model is related to the standard visibility in the training samples; and predicting visibility from the output of the target visibility prediction model to obtain the target visibility of the image to be recognized. This solves the problem of inaccurate visibility values calculated from atmospheric transmittance. Because the target visibility prediction model is trained in advance and, at prediction time, takes the image to be recognized and the optical parameters together as model input, the accuracy of visibility recognition for images at different color temperatures is improved and the negative effects of low visibility accuracy are reduced.
Drawings
FIG. 1 is a flow chart of a visibility determination method based on color temperature according to a first embodiment of the present invention;
FIG. 2 is a flow chart of a visibility determination method based on color temperature according to a second embodiment of the present invention;
FIG. 3 is a flowchart of an implementation of determining a training sample set in a visibility determination method based on color temperature according to a second embodiment of the present invention;
FIG. 4 is a diagram illustrating an exemplary structure of a network model to be trained according to a second embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a visibility determination apparatus based on color temperature according to a third embodiment of the present invention;
fig. 6 is a schematic structural diagram of a computer device in the fourth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings. It should be understood that the embodiments described are only a few embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
In the description of the present application, it is to be understood that the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not necessarily used to describe a particular order or sequence, nor are they to be construed as indicating or implying relative importance. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate. Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Example one
Fig. 1 is a schematic flowchart of a visibility determination method based on color temperature according to an embodiment of the present application; the method is suitable for predicting visibility. It can be performed by a computer device, which may consist of one physical entity or of two or more physical entities. Typically, the computer device may be a laptop, a desktop computer, a smart tablet, or the like.
As shown in fig. 1, a visibility determination method based on color temperature provided in this embodiment specifically includes the following steps:
s110, acquiring an image to be recognized and optical parameters corresponding to the image to be recognized, wherein the optical parameters at least comprise color temperature.
In this embodiment, the image to be recognized is understood as an image whose visibility needs to be predicted, i.e., an image whose environmental visibility is to be recognized; it may be collected by an image acquisition device such as a camera or video camera. The optical parameters are understood as the optical data corresponding to the weather environment at the moment the image to be recognized is acquired. Further, the optical parameters include at least the color temperature and the reflection and scattering coefficients, and may also include the transmission coefficient. The optical parameters can be collected by instruments such as a color temperature meter or a spectral measuring instrument.
Specifically, the image acquisition device that collects the image to be recognized may be placed at any position or mounted on any device. For example, in aviation applications it may be mounted anywhere on an aircraft (e.g., the wings or the nose); in road traffic applications it may be mounted anywhere on a vehicle (e.g., the front or the roof). The image to be recognized can then be collected at any time while the aircraft or vehicle is moving and sent to the executing device that performs the visibility determination method, so that visibility can be determined in real time. Alternatively, the image acquisition device may be installed at a fixed position and collect images only there, so that visibility at that position is recognized. There may be one or more image acquisition devices, and one or more images to be recognized may be collected at the same moment; each image to be recognized is processed by the color-temperature-based visibility determination method provided in this embodiment to predict its visibility.
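For illustration only, the following minimal Python sketch shows one way the input data described above (an image to be recognized paired with its optical parameters) might be represented in code. The class and field names are assumptions of this sketch, not part of the disclosed method:

```python
from dataclasses import dataclass
from typing import Optional

import numpy as np


@dataclass
class OpticalParams:
    """Optical data measured when the image was acquired (names are illustrative)."""
    color_temperature_k: float        # color temperature, e.g. read from a color temperature meter
    reflection_coeff: float           # reflection coefficient
    scattering_coeff: float           # scattering coefficient
    transmission_coeff: Optional[float] = None  # optional, per the description


@dataclass
class VisibilitySample:
    """One image to be recognized together with its corresponding optical parameters."""
    image: np.ndarray                 # H x W x 3 RGB array, e.g. 1080 x 1920 x 3
    optics: OpticalParams
```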
And S120, inputting the image to be recognized and the optical parameters into a predetermined target visibility prediction model as input data, wherein a loss function of the target visibility prediction model is related to the standard visibility in the training sample.
In this embodiment, the target visibility prediction model is a neural network model that can predict visibility from images, obtained by training in advance on a large number of images and optical parameters. The target visibility prediction model in this application takes the MobileNetV2 model as an example and, on top of the existing MobileNetV2 structure, adds at least an unconventional rectangular convolutional layer so that the model can process original images of higher resolution. The standard visibility is the visibility used as the reference for judging whether the predicted visibility is accurate. A training sample typically includes the data to be learned (e.g., the image and the optical parameters) and the corresponding standard data (i.e., the standard visibility).
Specifically, the target visibility prediction model is trained in advance; during training, the model's parameters are continuously adjusted according to the loss function until a model meeting the requirements is obtained. The trained target visibility prediction model can then take input data directly and produce a prediction from its learned experience. The existing MobileNetV2 model can only handle low-resolution images whose width and height are equal, such as 224 × 224 or 448 × 448 pixels. During training of the target visibility prediction model in this application, the differences between images under different visibility conditions are taken into account, and the model's loss function is determined from the standard visibility in the training samples, making the model more accurate. In addition, the model improves on the existing MobileNetV2 structure by adding at least an unconventional rectangular convolutional layer, which can process originally acquired images, for example images of 1920 × 1080 pixels. The collected images to be recognized therefore need no preprocessing, the optical information is retained to the greatest extent, information loss is avoided, and the accuracy of visibility prediction is improved.
S130, predicting the visibility according to an output result of the target visibility prediction model to obtain the target visibility of the image to be recognized.
In this embodiment, the target visibility may be specifically understood as the visibility of the environment in the image to be recognized at the time of image acquisition. And the target visibility prediction model processes the input image to be recognized and the optical parameters according to the learned experience, predicts the visibility corresponding to the image to be recognized, outputs the prediction result as the output result of the model to realize the prediction of the visibility, and the output result of the model is the target visibility of the image to be recognized.
The embodiment of the invention provides a visibility determination method based on color temperature: an image to be recognized and its corresponding optical parameters, including at least the color temperature, are acquired; the image and the optical parameters are input as input data into a predetermined target visibility prediction model whose loss function is related to the standard visibility in the training samples; and visibility is predicted from the model's output to obtain the target visibility of the image. This solves the problem of inaccurate visibility values calculated from atmospheric transmittance. Because the target visibility prediction model is trained in advance and takes the image and the optical parameters together as input at prediction time, the accuracy of visibility recognition is improved and the negative effects of low visibility accuracy are reduced. Moreover, because an unconventional rectangular convolutional layer is added to the model, the originally acquired image can be processed directly: the collected image to be recognized needs no preprocessing, optical information is retained to the greatest extent, information loss is avoided, and prediction accuracy is improved.
Example two
Fig. 2 is a flowchart of a visibility determination method based on color temperature according to a second embodiment of the present invention. This embodiment further refines the technical scheme of the first embodiment and specifically comprises the following steps:
s210, a training sample set containing at least one training sample is obtained, and the training sample set is determined through a preset sample processing method.
In this embodiment, the acquired image and the optical parameter are used as data to be learned, and the image, the optical parameter and the corresponding standard data are used as a training sample. The standard data is usually manually labeled or labeled by other means, and needs to be determined in advance before training, and the standard data in the embodiment of the application is visibility values. A training sample set is in particular understood to be a data set comprising one or more training samples.
Specifically, a large number of images are collected in advance, each image is processed and labeled respectively to form training samples, and then a training sample set is formed according to the training samples.
Further, fig. 3 provides a flowchart of an implementation of determining a training sample set in a visibility determination method based on color temperature, where the step of determining the training sample set includes:
s211, obtaining at least one training sample image and corresponding training sample optical parameters, wherein the training sample optical parameters at least comprise color temperature.
In this embodiment, the training sample image may be specifically understood as an image used in training, and the training sample image in this embodiment may be an image acquired by an image acquisition device under different weather conditions. The training sample optical parameters may specifically be understood as optical data corresponding to a weather environment when the training sample image is acquired, and the training sample optical parameters may include a color temperature, a reflection coefficient, a scattering coefficient, and the like, and may further include a transmission coefficient.
Specifically, one or more training sample images and their training sample optical parameters are acquired in advance. When there are multiple training sample images, they may be acquired by the same image acquisition device or by different ones; to ensure that the color correction matrices suit all the image acquisition devices, devices of the same type can be used when multiple devices acquire training sample images. After the training sample images and their corresponding optical parameters are collected, they can be stored locally or in storage such as a server, and retrieved from that storage when the model is trained.
S212, aiming at each training sample image, screening a color correction matrix according to the color temperature to obtain a target color correction matrix.
In the present embodiment, the color correction matrix may be specifically understood as a matrix that corrects the color of the image. The target color correction matrix may be specifically understood as a color correction matrix matching the color temperature, being one of the color correction matrices.
It is to be understood that the color correction matrix needs to be predetermined. The embodiment of the application provides a method for determining correction matrixes of various colors, which comprises the following steps:
1) Erect a camera in an open place and keep it level, adjusted to a suitable position so that the sky accounts for about one third of the whole picture.
2) Use the camera to take pictures under different weather conditions, and at the same time use a color temperature meter to obtain the color temperature at the moment of shooting in the current environment. From the pictures taken, select N pictures x (N being a positive integer greater than or equal to 1) as the images for color correction.
3) Prepare a standard 24-patch color card and place it at the shooting site. It need not fill the whole picture; the camera only needs to capture all the color patches, which must not be in shadow or overexposed. Take pictures at the different color temperatures, obtain the RGB values of the pictures y at each color temperature, and use the following formula to obtain the camera's color matrix M at each color temperature.
C1 = M · C (with C augmented by a constant 1 so that the fourth column of the 3 × 4 matrix M acts as the color shift; M is solved from the correspondence between the measured values C and the standard values C1)
Here M is the color matrix: its first row determines red, its second row determines green, its third row determines blue, and its fourth column is the color shift. The C matrix is the RGB information contained in picture x, and the C1 matrix holds the standard RGB color components of the 24-patch color-card picture y.
In other words, at each color temperature a picture x and a picture y of the 24-patch color card are shot (at the same color temperature, picture x and picture y are shot separately), the color matrix M at that color temperature is calculated, and M is kept as the color correction matrix associated with that color temperature.
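The formula itself is only published as an image, but the relationship described, a 3 × 4 matrix M mapping measured RGB values C to standard RGB values C1 with the fourth column acting as a color shift, can be solved per color temperature as a least-squares problem. The following sketch rests on that assumption; the function name and the least-squares choice are hypothetical:

```python
import numpy as np


def fit_color_matrix(measured_rgb: np.ndarray, standard_rgb: np.ndarray) -> np.ndarray:
    """Fit a 3x4 color matrix M such that standard_rgb ~ M @ [measured_rgb; 1].

    measured_rgb: (N, 3) RGB values of the N color patches at one color
                  temperature (the C matrix in the text).
    standard_rgb: (N, 3) standard RGB components of the 24-patch color card
                  (the C1 matrix in the text).
    Returns M whose rows determine red, green and blue and whose fourth
    column is the color shift.
    """
    n = measured_rgb.shape[0]
    c = np.hstack([measured_rgb, np.ones((n, 1))])          # (N, 4), augmented with ones
    m_t, *_ = np.linalg.lstsq(c, standard_rgb, rcond=None)  # (4, 3) least-squares solution
    return m_t.T                                            # (3, 4) color matrix M


# One color correction matrix per measured color temperature, e.g.:
# correction_matrices = {3200.0: fit_color_matrix(c_3200, c1_standard), 5600.0: ...}
```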
As described above, each color correction matrix is a color correction matrix at different color temperatures, i.e., each color correction matrix corresponds to and is associated with one color temperature. And searching a target color correction matrix corresponding to the color temperature of the training sample image according to the corresponding relation between the color temperature and the color correction matrix.
And S213, correcting the training sample image according to the target color correction matrix to obtain a corrected image.
In this embodiment, a corrected image is understood as an image that has undergone color correction. The corrected image is obtained by multiplying the matrix of the training sample image by the target color correction matrix.
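Continuing the sketch above, screening the matrix by color temperature (S212) and applying it to a training sample image (S213) might look as follows; the nearest-temperature lookup is an illustrative assumption, since the text only states that each matrix is associated with one color temperature:

```python
import numpy as np


def select_color_matrix(correction_matrices: dict,
                        color_temperature_k: float) -> np.ndarray:
    """Screen the color correction matrices by color temperature: return the
    target color correction matrix whose associated color temperature is
    closest to that of the training sample image."""
    nearest = min(correction_matrices, key=lambda t: abs(t - color_temperature_k))
    return correction_matrices[nearest]


def correct_image(image: np.ndarray, m: np.ndarray) -> np.ndarray:
    """Multiply the training sample image by the 3x4 target color correction
    matrix M to obtain the corrected image."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3).astype(np.float64)
    pixels = np.hstack([pixels, np.ones((pixels.shape[0], 1))])  # augment with ones
    corrected = pixels @ m.T                                     # (H*W, 3)
    return np.clip(corrected, 0, 255).reshape(h, w, 3).astype(np.uint8)
```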
And S214, taking the corrected image and the corresponding optical parameters of the training sample as a training sample.
And S215, forming a training sample set according to the training samples.
Each training sample consists of a correction image and corresponding optical parameters of the training sample, and each training sample forms a training sample set.
It should be understood that the training samples also include standard visibility as a measure of whether the predicted visibility is accurate. Each training sample image has a corresponding standard visibility, and correspondingly, each correction image also has a corresponding standard visibility.
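A training sample thus bundles the corrected image, the corresponding optical parameters and the labeled standard visibility. As a sketch only, a PyTorch-style container for the training sample set could look like this (all names are assumptions):

```python
import torch
from torch.utils.data import Dataset


class VisibilityTrainingSet(Dataset):
    """Training sample set: (corrected image, optical parameters, standard visibility)."""

    def __init__(self, corrected_images, optical_params, standard_visibilities):
        self.images = corrected_images       # list of H x W x 3 uint8 arrays
        self.optics = optical_params         # list of [color temp, reflection, scattering]
        self.labels = standard_visibilities  # list of labeled visibility values

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        image = torch.from_numpy(self.images[idx]).permute(2, 0, 1).float() / 255.0
        optics = torch.tensor(self.optics[idx], dtype=torch.float32)
        label = torch.tensor(self.labels[idx], dtype=torch.float32)
        return image, optics, label
```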
And S220, inputting the corresponding training sample under the current iteration into the current network model to be trained to obtain the predicted visibility.
In this embodiment, the network model to be trained may be specifically understood as an untrained deep learning-based neural network model. The predicted visibility can be specifically understood as the visibility predicted by the network model to be trained according to the input data in the training process.
Specifically, the training sample corresponding to the current iteration is input into the current network model to be trained; the model makes a prediction with its current network parameters, processing the data through the various convolutional layers and the fully-connected layer to obtain the prediction result, namely the predicted visibility.
Further, the network model to be trained comprises at least an unconventional rectangular convolutional layer with a set convolution kernel, a first convolutional layer, a second convolutional layer, a third convolutional layer and a fully-connected layer, and the convolution kernels of the first convolutional layer and the second convolutional layer are different.
in this embodiment, the first convolutional layer and the second convolutional layer function to perform dimension reduction processing on the feature data output by the irregular rectangular convolutional layer. The existing MobileNet V2 network model can only process images with low resolution and consistent resolution length and width, and can not directly process original images. The neural network model can realize the processing of the original collected images by adding the unconventional rectangular convolution layer, the first convolution layer and the second convolution layer, does not need to preprocess the collected images, and avoids the loss of original image data.
The network model to be trained in the embodiment of the present application may further include other convolutional layers and data processing layers.
Illustratively, as shown in fig. 4, the embodiment of the present application provides an example structure of the network model to be trained, comprising at least a third convolutional layer 21, an unconventional rectangular convolutional layer 22, a first convolutional layer 23, a second convolutional layer 24 and a fully-connected layer 25. Data enters at the third convolutional layer and exits at the fully-connected layer; the input data is processed by each layer in turn and the predicted visibility is output. Note that further convolutional layers may sit between the third convolutional layer 21 and the unconventional rectangular convolutional layer 22, and their convolution kernels may or may not match those of the other layers. The convolution kernels of the first convolutional layer 23 and the second convolutional layer 24 are related to that of the unconventional rectangular convolutional layer 22: the kernel of layer 22 can be set as required, and once it is fixed, the kernel sizes of layers 23 and 24 are set accordingly so that the input image can be reduced to a given dimension. As an example, the kernel sizes are: a 3 × 3 third convolutional layer 21, a 4 × 7 unconventional rectangular convolutional layer 22, a 7 × 7 first convolutional layer 23 and a 5 × 5 second convolutional layer 24.
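The layer sequence of fig. 4 can be sketched as follows. This is a simplified stand-in rather than the patented network: channel widths, strides, activations and the way the optical parameters are fused with the image features are all assumptions of this sketch; only the kernel sizes (3 × 3, 4 × 7, 7 × 7, 5 × 5) and the layer order follow the text:

```python
import torch
import torch.nn as nn


class VisibilityNet(nn.Module):
    """Sketch of the fig. 4 structure; widths and strides are assumed."""

    def __init__(self, num_optical_params: int = 3):
        super().__init__()
        self.conv3 = nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1)  # third conv layer 21 (3x3)
        self.rect = nn.Conv2d(32, 64, kernel_size=(4, 7), stride=(2, 4))   # unconventional rectangular layer 22 (4x7)
        self.conv1 = nn.Conv2d(64, 96, kernel_size=7, stride=4)            # first conv layer 23 (7x7)
        self.conv2 = nn.Conv2d(96, 128, kernel_size=5, stride=4)           # second conv layer 24 (5x5)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(128 + num_optical_params, 1)                   # fully-connected layer 25

    def forward(self, image: torch.Tensor, optics: torch.Tensor) -> torch.Tensor:
        x = torch.relu(self.conv3(image))   # first feature data
        x = torch.relu(self.rect(x))
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))       # second feature data
        x = self.pool(x).flatten(1)         # (B, 128)
        x = torch.cat([x, optics], dim=1)   # fuse image features with optical parameters
        return self.fc(x).squeeze(1)        # predicted visibility, one value per image
```

Because the rectangular layer and the two dimension-reducing layers accept any input size, a 1920 × 1080 image passes through this sketch without resizing, matching the stated motivation for the unconventional layer.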
As an optional embodiment of this embodiment, in this optional embodiment, the training samples corresponding to the current iteration are further input into the current network model to be trained, and the obtained predicted visibility is optimized as follows:
and A1, inputting the training sample into a third convolution layer in the network model to be trained to obtain first characteristic data.
In this embodiment, the first feature data may be specifically understood as feature data obtained after convolution processing, and may be a feature vector. The third convolutional layer obtains first characteristic data by performing convolution processing on the training samples.
And A2, taking the first characteristic data as input, and sequentially processing the irregular rectangular convolution layer, the first convolution layer and the second convolution layer to obtain second characteristic data.
In this embodiment, the second feature data may be specifically understood as feature data obtained after convolution processing, and may also be a feature vector. The first characteristic data and the second characteristic data are of the same type, only different characteristic data obtained by different convolutional layer processing. The first characteristic data is firstly input into the irregular rectangular convolution layer for processing, the obtained data is input into the first convolution layer for processing, the data output by the first convolution layer is input into the second convolution layer again, and the second characteristic data is obtained through convolution processing of the second convolution layer.
And A3, inputting the second characteristic data into the full connection layer to obtain the predicted visibility.
And processing the second characteristic data through the full connection layer, and outputting a one-dimensional array, namely the predicted visibility.
And S230, determining a current loss function expression according to the standard visibility in the corresponding training sample under the current iteration.
The standard visibility can be determined by manual labeling: staff determine visibility from experience or obtain it by actual measurement, and this value is associated with the corrected image in the training sample as the standard visibility. The current loss function expression is understood as the way the loss function is currently computed. The loss function is computed from its parameters, and in this application those parameters are related to the standard visibility.
Specifically, a calculation value is obtained by calculating the standard visibility by a set calculation method, the calculation value is used as a parameter of the loss function expression, and the current loss function expression is determined by combining a given expression. Therefore, when different training samples have different standard visibility, the obtained current loss function expression is also different. The calculation formula of the loss function is associated with the standard visibility, and the difference of the images under different visibility conditions is considered, so that the calculation of the loss function is more accurate, and a more accurate prediction model is obtained.
As an optional embodiment of this embodiment, the optional embodiment further optimizes the determining of the current loss function expression according to the standard visibility in the corresponding training sample under the current iteration as follows:
and B1, determining the current loss function parameters according to the standard visibility.
In the present embodiment, the current parameters of the loss function may be specifically understood as parameters of the loss function at the time of current calculation.
Specifically, the current loss function parameters are determined by calculating the standard visibility. For example, the embodiment of the present application provides a calculation method of a current loss function parameter:
(Formula computing the current loss function parameter α from the standard visibility y_true; per the text below, α decreases as y_true increases.)
where α represents the current loss function parameter and y_true denotes the standard visibility. The loss function provided in the embodiment of this application is an unbalanced loss function: the lower the visibility, the larger the current loss function parameter; the higher the visibility, the smaller it is.
And B2, determining a current loss function expression based on the current loss function parameters.
Specifically, the current loss function parameter is substituted into a given expression to obtain a current loss function expression.
Further, an embodiment of the present application provides a current loss function expression:
loss = α² · |y_true - y_pre|;
where loss represents the current loss function expression; α represents the current loss function parameter; y_true represents the standard visibility; and y_pre represents the predicted visibility.
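A sketch of this unbalanced loss in code follows. Because the formula for α appears only as an image in this copy, the stand-in α(y_true) = k / (y_true + ε) is purely hypothetical; it merely reproduces the stated property that α grows as the standard visibility falls (k and ε are made-up constants):

```python
import torch


def alpha_param(y_true: torch.Tensor, k: float = 1000.0, eps: float = 1.0) -> torch.Tensor:
    """Hypothetical stand-in for the current loss function parameter alpha.

    Only the stated behavior is preserved: alpha is larger when the standard
    visibility y_true is lower, and smaller when it is higher.
    """
    return k / (y_true + eps)


def visibility_loss(y_pre: torch.Tensor, y_true: torch.Tensor) -> torch.Tensor:
    """Current loss function: loss = alpha^2 * |y_true - y_pre|, batch-averaged."""
    alpha = alpha_param(y_true)
    return (alpha ** 2 * torch.abs(y_true - y_pre)).mean()
```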
And S240, according to the current loss function expression, combining the predicted visibility and the corresponding standard visibility to obtain a corresponding current loss function.
In this embodiment, the current loss function may be specifically understood as a loss function calculated according to a training sample under a current iteration. And substituting the predicted visibility and the standard visibility obtained according to the training sample under the current iteration into the current loss function expression for calculation to obtain the current loss function.
And S250, performing back propagation on the network model to be trained based on the current loss function to obtain the network model to be trained for the next iteration until an iteration convergence condition is met, and obtaining a target visibility prediction model.
During training of the neural network model, the model parameters are continuously updated by back propagation until the model's output agrees with the target; the parameters at that point are taken as the parameters of the target visibility prediction model. After the current loss function is determined, the network model to be trained is back-propagated through it until a target visibility prediction model meeting the convergence condition is obtained. The embodiment of the invention does not limit the specific back propagation procedure, which can be set according to the situation; for example, in this application the iteration convergence condition is considered satisfied when the current loss function stabilizes at a value of about 100 or less.
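Assuming the dataset, model and loss sketches above, the training iteration of S220-S250 could be sketched as below; the optimizer, learning rate, batch size and the use of the "about 100 or less" value as a stopping check are illustrative choices, not prescribed by the text:

```python
import torch
from torch.utils.data import DataLoader

# Assumes the sketches above: VisibilityTrainingSet, VisibilityNet, visibility_loss,
# and pre-built lists corrected_images / optical_params / standard_visibilities.
train_set = VisibilityTrainingSet(corrected_images, optical_params, standard_visibilities)
loader = DataLoader(train_set, batch_size=8, shuffle=True)
model = VisibilityNet(num_optical_params=3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # optimizer choice is an assumption

for epoch in range(100):
    epoch_loss = 0.0
    for image, optics, y_true in loader:
        y_pre = model(image, optics)           # predicted visibility (S220)
        loss = visibility_loss(y_pre, y_true)  # current loss function (S230-S240)
        optimizer.zero_grad()
        loss.backward()                        # back propagation (S250)
        optimizer.step()
        epoch_loss += loss.item() * image.size(0)
    epoch_loss /= len(train_set)
    if epoch_loss <= 100:   # iteration convergence condition mentioned in the text
        break               # model now serves as the target visibility prediction model
```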
After the model is trained by the method of S210-S250 to obtain the target visibility prediction model, the model can predict visibility in practical applications. In practice, after the image to be recognized is collected, its target visibility is predicted by performing the following steps S260-S280. The predicted target visibility can be used to plan a driving route, remind the driver of the conditions on the road ahead, and so on, for example via a voice prompt: "visibility ahead is 300 m, please drive carefully".
S260, acquiring the image to be recognized and optical parameters corresponding to the image to be recognized, wherein the optical parameters at least comprise color temperature.
S270, inputting the image to be recognized and the optical parameters into a predetermined target visibility prediction model as input data.
And S280, predicting the visibility according to the output result of the target visibility prediction model to obtain the target visibility of the image to be recognized.
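Prediction (S260-S280) then reduces to a single forward pass through the trained model; this sketch mirrors the assumed interfaces above and packs the originally acquired image without any preprocessing, as the description requires:

```python
import numpy as np
import torch


def predict_visibility(model: "VisibilityNet",
                       image: np.ndarray,
                       optics) -> float:
    """Predict the target visibility of one image to be recognized."""
    model.eval()
    with torch.no_grad():
        x = torch.from_numpy(image).permute(2, 0, 1).float().unsqueeze(0) / 255.0
        p = torch.tensor([optics], dtype=torch.float32)  # [color temp, reflection, scattering]
        return model(x, p).item()                        # target visibility of the image


# e.g. announce "visibility ahead is 300 m, please drive carefully" when the result is ~300
```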
The embodiment of the invention provides a visibility determination method based on color temperature: an image to be recognized and its corresponding optical parameters are acquired; the image and the optical parameters are input as input data into a predetermined target visibility prediction model whose loss function is related to the standard visibility in the training samples; and visibility is predicted from the model's output to obtain the target visibility of the image. This solves the problem of inaccurate visibility values calculated from atmospheric transmittance. Because the target visibility prediction model is trained in advance and, at prediction time, takes the image and the optical parameters together as input, and because the reflection coefficient and the scattering coefficient are also considered, the accuracy of visibility recognition is improved and the negative effects of low visibility accuracy are reduced. Meanwhile, by adding the unconventional rectangular convolutional layer, the first convolutional layer and the second convolutional layer, the target visibility prediction model can process the originally acquired image without preprocessing, avoiding loss of original image data. The calculation formula of the loss function is associated with the standard visibility and takes into account the differences between images under different visibility conditions, so the loss computation is more accurate and a more accurate prediction model is obtained.
EXAMPLE III
Fig. 5 is a schematic structural diagram of a visibility determination apparatus based on color temperature according to a third embodiment of the present invention, where the apparatus includes: an acquisition module 31, an input module 32 and a prediction module 33.
The acquiring module 31 is configured to acquire an image to be identified and optical parameters corresponding to the image to be identified, where the optical parameters at least include a color temperature;
an input module 32, configured to input the image to be recognized and the optical parameter as input data into a predetermined target visibility prediction model, where a loss function of the target visibility prediction model is related to standard visibility in a training sample;
and the predicting module 33 is configured to predict visibility according to an output result of the target visibility predicting model, and obtain target visibility of the image to be recognized.
The embodiment of the invention provides a visibility determination device based on color temperature: the device acquires an image to be recognized and the optical parameters corresponding to that image; inputs the image and the optical parameters as input data into a predetermined target visibility prediction model whose loss function is related to the standard visibility in the training samples; and predicts visibility from the model's output to obtain the target visibility of the image. This solves the problem of inaccurate visibility values calculated from atmospheric transmittance. Because the target visibility prediction model is trained in advance and takes the image and the optical parameters together as input at prediction time, the accuracy of visibility recognition under different color temperatures is improved and the negative effects of low visibility accuracy are reduced.
Further, the apparatus further comprises:
the device comprises a sample set acquisition module, a sample set processing module and a sample set processing module, wherein the sample set acquisition module is used for acquiring a training sample set containing at least one training sample, and the training sample set is determined by a preset sample processing method;
the sample input module is used for inputting the corresponding training sample under the current iteration into the current network model to be trained to obtain the predicted visibility;
the function expression determining module is used for determining a current loss function expression according to the standard visibility in the corresponding training sample under the current iteration;
a loss function determining module, configured to obtain, according to the current loss function expression, a corresponding current loss function in combination with the predicted visibility and the corresponding standard visibility;
and the model determining module is used for performing back propagation on the network model to be trained based on the current loss function to obtain the network model to be trained for the next iteration until an iteration convergence condition is met to obtain a target visibility prediction model.
Further, the apparatus further comprises: and the sample set determining module is used for determining a training sample set.
A sample set determination module comprising:
the system comprises a sample acquisition unit, a data processing unit and a data processing unit, wherein the sample acquisition unit is used for acquiring at least one training sample image and corresponding training sample optical parameters, and the training sample optical parameters at least comprise color temperature;
the matrix determining unit is used for screening a color correction matrix according to the color temperature aiming at each training sample image to obtain a target color correction matrix;
the correcting unit is used for correcting the training sample image according to the target color correction matrix to obtain a corrected image;
the sample determining unit is used for taking the corrected image and the corresponding optical parameters of the training sample as a training sample;
and the sample set forming unit is used for forming a training sample set according to each training sample.
Further, the network model to be trained comprises at least an unconventional rectangular convolutional layer with a set convolution kernel, a first convolutional layer, a second convolutional layer, a third convolutional layer and a fully-connected layer, and the convolution kernels of the first convolutional layer and the second convolutional layer are different;
accordingly, a sample input module, comprising:
the first data determining unit is used for inputting the training sample to a third convolution layer in the network model to be trained to obtain first characteristic data;
the second data determining unit is used for taking the first characteristic data as input and sequentially processing the unconventional rectangular convolutional layer, the first convolutional layer and the second convolutional layer to obtain second characteristic data;
and the visibility determining unit is used for inputting the second characteristic data into the full connection layer to obtain the predicted visibility.
Further, the function expression determination module includes:
the parameter determining unit is used for determining the current loss function parameter according to the standard visibility;
an expression determining unit, configured to determine a current loss function expression based on the current loss function parameter.
Further, the current loss function expression is:
loss = α² · |y_true - y_pre|;
wherein loss represents the current loss function expression; α represents the current loss function parameter; y_true represents the standard visibility; and y_pre represents the predicted visibility.
Further, the optical parameters at least further comprise a reflection coefficient and a scattering coefficient.
The visibility determination device based on color temperature provided by the embodiment of the invention can execute the visibility determination method based on color temperature provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 6 is a schematic structural diagram of a computer apparatus according to a fourth embodiment of the present invention, as shown in fig. 6, the apparatus includes a processor 40, a memory 41, an input device 42, and an output device 43; the number of processors 40 in the device may be one or more, and one processor 40 is taken as an example in fig. 6; the processor 40, the memory 41, the input device 42 and the output device 43 in the apparatus may be connected by a bus or other means, as exemplified by the bus connection in fig. 6.
The memory 41 serves as a computer-readable storage medium, and can be used for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the visibility determination method based on color temperature in the embodiment of the present invention (for example, the obtaining module 31, the input module 32, and the prediction module 33 in the visibility determination device based on color temperature). The processor 40 executes various functional applications of the device and data processing, i.e. implements the above-described visibility determination method based on color temperature, by running software programs, instructions and modules stored in the memory 41.
The memory 41 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 41 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, memory 41 may further include memory located remotely from processor 40, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 42 is operable to receive input numeric or character information and to generate key signal inputs relating to user settings and function controls of the apparatus. The output device 43 may include a display device such as a display screen.
EXAMPLE five
An embodiment of the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform a visibility determination method based on color temperature, the method including:
acquiring an image to be identified and optical parameters corresponding to the image to be identified, wherein the optical parameters at least comprise color temperature;
inputting the image to be recognized and the optical parameters into a predetermined target visibility prediction model as input data, wherein a loss function of the target visibility prediction model is related to standard visibility in a training sample;
and predicting the visibility according to the output result of the target visibility prediction model to obtain the target visibility of the image to be recognized.
Of course, the storage medium provided by the embodiments of the present invention contains computer-executable instructions, and the computer-executable instructions are not limited to the operations of the method described above, and may also perform the relevant operations in the visibility determination method based on color temperature provided by any embodiments of the present invention.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It should be noted that, in the embodiment of the visibility determination apparatus based on color temperature, the included units and modules are only divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be realized; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A visibility determination method based on color temperature, characterized by comprising:
acquiring an image to be identified and optical parameters corresponding to the image to be identified, wherein the optical parameters at least comprise color temperature;
inputting the image to be recognized and the optical parameters into a predetermined target visibility prediction model as input data, wherein a loss function of the target visibility prediction model is related to standard visibility in a training sample;
and predicting the visibility according to the output result of the target visibility prediction model to obtain the target visibility of the image to be recognized.
2. The method of claim 1, wherein the step of training the target visibility prediction model comprises:
acquiring a training sample set containing at least one training sample, wherein the training sample set is determined by a preset sample processing method;
inputting a corresponding training sample under current iteration into a current network model to be trained to obtain predicted visibility;
determining a current loss function expression according to the standard visibility in the corresponding training sample under the current iteration;
according to the current loss function expression, combining the predicted visibility and the corresponding standard visibility to obtain a corresponding current loss function;
and performing back propagation on the network model to be trained based on the current loss function to obtain the network model to be trained for the next iteration until an iteration convergence condition is met, and obtaining a target visibility prediction model.
3. The method of claim 2, wherein the step of determining the training sample set comprises:
acquiring at least one training sample image and corresponding training sample optical parameters, wherein the training sample optical parameters at least comprise color temperature;
for each training sample image, screening color correction matrices according to the color temperature to obtain a target color correction matrix;
correcting the training sample image according to the target color correction matrix to obtain a corrected image;
taking the corrected image and the corresponding training sample optical parameters as one training sample;
and forming the training sample set from the training samples.
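To make the sample-processing step concrete, here is a sketch of screening a color correction matrix by color temperature and applying it. The breakpoint temperatures and matrix values are hypothetical stand-ins for camera-calibration data.

```python
import numpy as np

# Hypothetical color correction matrices keyed by color temperature in
# kelvin; real values would come from calibration of the capture device.
CCM_TABLE = {
    4000.0: np.array([[ 1.50, -0.35, -0.15],
                      [-0.20,  1.45, -0.25],
                      [-0.10, -0.40,  1.50]]),
    6500.0: np.array([[ 1.30, -0.20, -0.10],
                      [-0.15,  1.35, -0.20],
                      [-0.05, -0.30,  1.35]]),
}

def select_ccm(color_temperature: float) -> np.ndarray:
    """Screen the table for the matrix nearest the sample's color temperature."""
    key = min(CCM_TABLE, key=lambda k: abs(k - color_temperature))
    return CCM_TABLE[key]

def correct_image(image: np.ndarray, ccm: np.ndarray) -> np.ndarray:
    """Apply a 3x3 color correction matrix to an (H, W, 3) image in [0, 1]."""
    return np.clip(image @ ccm.T, 0.0, 1.0)

# The corrected image, paired with its optical parameters,
# then forms one training sample.
```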
4. The method according to claim 2, wherein the network model to be trained at least comprises an irregular rectangular convolution layer with a set convolution kernel, a first convolution layer, a second convolution layer, a third convolution layer and a fully-connected layer, and the convolution kernels of the first convolution layer and the second convolution layer are different;
correspondingly, the inputting the training sample corresponding to the current iteration into the current network model to be trained to obtain the predicted visibility comprises:
inputting the training sample into the third convolution layer in the network model to be trained to obtain first feature data;
taking the first feature data as input and processing it sequentially through the irregular rectangular convolution layer, the first convolution layer and the second convolution layer to obtain second feature data;
and inputting the second feature data into the fully-connected layer to obtain the predicted visibility.
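One possible realization of the claimed layer ordering is sketched below in PyTorch. The channel widths, the choice of a 1x7 kernel for the irregular rectangular convolution, and the pooling before the fully-connected layer are all assumptions; the claim fixes only the ordering and that the first and second convolution kernels differ.

```python
import torch
import torch.nn as nn

class VisibilityNet(nn.Module):
    """Illustrative layout: third convolution layer first, then an irregular
    (non-square) rectangular convolution, two further convolution layers with
    different kernel sizes, and a fully-connected head."""

    def __init__(self, num_optical_params: int = 1):
        super().__init__()
        self.third_conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        # Non-square kernel, e.g. 1x7, as one reading of the "irregular
        # rectangular convolution layer with a set convolution kernel".
        self.irregular_conv = nn.Conv2d(16, 32, kernel_size=(1, 7), padding=(0, 3))
        self.first_conv = nn.Conv2d(32, 64, kernel_size=5, padding=2)
        self.second_conv = nn.Conv2d(64, 64, kernel_size=3, padding=1)  # different kernel
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(64 + num_optical_params, 1)

    def forward(self, image: torch.Tensor, optical_params: torch.Tensor) -> torch.Tensor:
        x = torch.relu(self.third_conv(image))       # first feature data
        x = torch.relu(self.irregular_conv(x))
        x = torch.relu(self.first_conv(x))
        x = torch.relu(self.second_conv(x))          # second feature data
        x = self.pool(x).flatten(1)
        # Optical parameters (at least the color temperature) join the image
        # features before the fully-connected layer.
        x = torch.cat([x, optical_params], dim=1)
        return self.fc(x)                            # predicted visibility
```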
5. The method of claim 2, wherein determining the current loss function expression based on standard visibility in the corresponding training sample at the current iteration comprises:
determining a current loss function parameter according to the standard visibility;
and determining the current loss function expression based on the current loss function parameter.
6. The method of claim 5, wherein the current loss function expression is:
loss = α²·|y_true − y_pre|;
wherein loss represents the current loss function expression; α represents the current loss function parameter; y_true represents the standard visibility; and y_pre represents the predicted visibility.
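A direct transcription of this expression in PyTorch is shown below, with α supplied externally since its derivation from the standard visibility is left open by claim 5.

```python
import torch

def visibility_loss(y_true: torch.Tensor,
                    y_pre: torch.Tensor,
                    alpha: torch.Tensor) -> torch.Tensor:
    """Current loss per claim 6: alpha^2 * |y_true - y_pre|."""
    return (alpha ** 2) * (y_true - y_pre).abs()
```

Because α is derived from y_true, the squared factor effectively rescales the L1 error per sample, which can keep gradients comparable across visibility ranges.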
7. The method according to any of claims 1-6, wherein the optical parameters further comprise at least a reflection coefficient and a scattering coefficient.
8. A visibility determination apparatus based on color temperature, characterized by comprising:
an acquisition module, configured to acquire an image to be recognized and optical parameters corresponding to the image to be recognized, wherein the optical parameters at least comprise color temperature;
an input module, configured to input the image to be recognized and the optical parameters into a predetermined target visibility prediction model as input data, wherein a loss function of the target visibility prediction model is related to standard visibility in a training sample;
and a prediction module, configured to predict the visibility according to the output result of the target visibility prediction model to obtain the target visibility of the image to be recognized.
9. A computer device, the device comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the visibility determination method based on color temperature according to any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the visibility determination method based on color temperature according to any one of claims 1 to 7.
CN202110615340.5A 2021-06-02 2021-06-02 Visibility determination method, device, equipment and storage medium based on color temperature Pending CN113240759A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110615340.5A CN113240759A (en) 2021-06-02 2021-06-02 Visibility determination method, device, equipment and storage medium based on color temperature

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110615340.5A CN113240759A (en) 2021-06-02 2021-06-02 Visibility determination method, device, equipment and storage medium based on color temperature

Publications (1)

Publication Number Publication Date
CN113240759A true CN113240759A (en) 2021-08-10

Family

ID=77136394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110615340.5A Pending CN113240759A (en) 2021-06-02 2021-06-02 Visibility determination method, device, equipment and storage medium based on color temperature

Country Status (1)

Country Link
CN (1) CN113240759A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113850801A * 2021-10-18 2021-12-28 Shenzhen Jingtai Technology Co., Ltd. Crystal form prediction method and device and electronic equipment

Similar Documents

Publication Publication Date Title
US10984293B2 (en) Image processing method and apparatus
CN109086668B (en) Unmanned aerial vehicle remote sensing image road information extraction method based on multi-scale generation countermeasure network
CN109799073B (en) Optical distortion measuring device and method, image processing system, electronic equipment and display equipment
CN110032928B (en) Satellite remote sensing image water body identification method suitable for color sensitivity
EP3798975A1 (en) Method and apparatus for detecting subject, electronic device, and computer readable storage medium
CN112384946A (en) Image dead pixel detection method and device
CN110930296A (en) Image processing method, device, equipment and storage medium
CN111028779B (en) Display panel compensation method and device and display panel
CN111935479A (en) Target image determination method and device, computer equipment and storage medium
CN115797735A (en) Target detection method, device, equipment and storage medium
CN113240759A (en) Visibility determination method, device, equipment and storage medium based on color temperature
CN113989394A (en) Image processing method and system for color temperature of automatic driving simulation environment
WO2024061194A1 (en) Sample label acquisition method and lens failure detection model training method
CN111611835A (en) Ship detection method and device
CN116597246A (en) Model training method, target detection method, electronic device and storage medium
CN111611836A (en) Ship detection model training and ship tracking method based on background elimination method
CN111444833A (en) Fruit measurement production method and device, computer equipment and storage medium
CN111104965A (en) Vehicle target identification method and device
CN116363658A (en) Digital display signal identification method, system, equipment and medium based on deep learning
CN114693528A (en) Unmanned aerial vehicle low-altitude remote sensing image splicing quality evaluation and redundancy reduction method and system
CN114882385A (en) Method for counting wheat ears in field based on unmanned aerial vehicle platform
CN111400534B (en) Cover determination method and device for image data and computer storage medium
CN114255193A (en) Board card image enhancement method, device, equipment and readable storage medium
JP7253322B2 (en) Integument discoloration diagnostic method
CN114175093A (en) Detection device and detection method for display panel, electronic device and readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination