CN116559119B - Deep learning-based wood dyeing color difference detection method, system and medium - Google Patents

Deep learning-based wood dyeing color difference detection method, system and medium Download PDF

Info

Publication number
CN116559119B
CN116559119B (granted publication of application CN202310528055.9A)
Authority
CN
China
Prior art keywords
reflectivity
color
value
neural network
standard
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310528055.9A
Other languages
Chinese (zh)
Other versions
CN116559119A
Inventor
管雪梅
吴言
杨渠三
黄靖一
张威
何中生
崔宏博
周家名
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeast Forestry University
Original Assignee
Northeast Forestry University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeast Forestry University filed Critical Northeast Forestry University
Priority to CN202310528055.9A priority Critical patent/CN116559119B/en
Publication of CN116559119A publication Critical patent/CN116559119A/en
Application granted granted Critical
Publication of CN116559119B publication Critical patent/CN116559119B/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/55Specular reflectivity
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/46Measurement of colour; Colour measuring devices, e.g. colorimeters
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Abstract

A deep-learning-based wood dyeing color difference detection method, system and medium relate to the technical field of wood dyeing and address the prior-art problem that directly converting the color space cannot distinguish metameric products, which degrades detection accuracy. Instead of performing the color space conversion directly, the method and device convert to LAB indirectly through reflectance, thereby avoiding the loss of detection accuracy caused by indistinguishable metameric products.

Description

Deep learning-based wood dyeing color difference detection method, system and medium
Technical Field
The invention relates to the technical field of wood dyeing, and in particular to a deep-learning-based method, system and medium for detecting the color difference of dyed wood.
Background
In the traditional wood dyeing industry, a spectrophotometer is mainly used for color measurement, but the existing spectrophotometer can only perform off-line single-point measurement, so that the wood production efficiency is severely limited.
Using a hyperspectral imaging system to detect the color quality of dyed wood turns inspection into an imaging task: multiple groups of products can be measured at once, production efficiency is improved, and the high measurement resolution of the equipment allows even subtle color differences of dyed wood to be detected. However, hyperspectral imaging has not yet become widespread for dyed-wood quality inspection, mainly because acquiring hyperspectral images usually requires large-scale experimental equipment and specialized expertise; the time-consuming and laborious operation has been the main obstacle to the technology. Researchers have therefore tried to recover spectral information from images captured by ordinary commercial CCD or CMOS cameras, and reflectance reconstruction techniques have been proposed.
Modern industrial production relies mainly on machine-vision inspection: a digital CCD camera acquires an image of the inspection target, the color information to be measured is converted into a digital signal, and target features are extracted by image processing and converted into the physical quantities to be measured, so that a detection result is output according to a preset judgment criterion. The response of a CCD camera is in the RGB color mode. RGB is a device-dependent, non-uniform color space whose values differ from device to device, so it cannot be used to evaluate color quality and must be converted into a true color space such as LAB. When this color space conversion is performed directly, metameric products cannot be distinguished, because the algorithmic model cannot resolve the uniqueness of each color, and detection accuracy is therefore affected.
Disclosure of Invention
The purpose of the invention is as follows: to address the prior-art problem that directly converting the color space cannot distinguish metameric products and therefore degrades detection accuracy, a deep-learning-based wood dyeing color difference detection method, system and medium are provided.
The technical scheme adopted by the invention for solving the technical problems is as follows:
a wood dyeing color difference detection method based on deep learning comprises the following steps:
step one: acquiring a standard color card, selecting a color block from the standard color card, and then shooting to obtain gray level images and hyperspectral images of the color block under each wave band;
step two: carrying out space correction on gray images in each wave band by using a standard white board, then extracting response values of the gray images subjected to space correction, extracting reflectivity of a hyperspectral image, and carrying out normalization operation on the extracted reflectivity;
step three: training the neural network by using the response value as input and the normalized reflectivity as output to obtain a trained neural network 1;
step four: measuring the LAB value of the color block by using a spectrophotometer, and training the neural network by taking the normalized reflectivity as input and the LAB value as output to obtain a trained neural network 2;
Step five: obtaining a wood dyeing veneer to be identified and a wood dyeing reference standard LAB value, then obtaining a response value of the wood dyeing veneer to be identified, inputting the obtained response value into a trained neural network 1 to obtain an output reflectivity, and then inputting the reflectivity into a neural network 2 to obtain an output LAB value;
step six: comparing the output LAB value with a wood dyeing reference standard LAB value, and obtaining a color difference through a color difference formula;
in the second step, the specific steps of using the standard white board to spatially correct the gray level image under each wave band are as follows:
shooting and storing a uniform standard white board with a reflectivity of 1 using the imaging system; when a color block U is shot, each pixel point on U is normalized, completing the spatial correction, where the normalization formula is:

û_ijk = (u_ijk − d) / s_ijk

wherein û_ijk represents the normalized response value of the pixel at coordinates (i, j) on the gray-scale image obtained by shooting U under the filter of the k-th band, u_ijk represents the response value before normalization, d represents the dark-current response of the camera, i.e. the response value the camera still produces in the absence of any light, and s_ijk represents the response value at (i, j) of the gray-scale image obtained by shooting the standard white board under the filter of the k-th band;
The normalization operation on the extracted reflectivity is specifically:

y_i = (x_i − min(x_i)) / (max(x_i) − min(x_i))

wherein x_i represents the reflectivity value of the i-th band, min(x_i) represents the minimum of the reflectivity values of the i-th band over all data samples, max(x_i) represents the maximum of the reflectivity values of the i-th band over all data samples, and y_i is the normalized value.
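For illustration only, the spatial correction and the reflectance normalization described above can be sketched in Python/NumPy as follows; the sketch is not part of the original disclosure, and the array names and shapes are assumptions.

```python
import numpy as np

def spatial_correction(u, s, d):
    """Per-pixel correction of band-filtered gray images against the whiteboard.

    u : raw responses of the color block, shape (K, H, W) for K band filters
    s : responses of the standard white board (reflectivity 1), same shape
    d : dark-current response of the camera (scalar or broadcastable array)
    """
    # subtract the dark current, then divide by the whiteboard response,
    # giving a corrected response between 0 and 1 for each pixel and band
    return (u - d) / s

def minmax_normalize(x):
    """Min-max normalize reflectance per band over all data samples.

    x : reflectance matrix, shape (n_samples, n_bands)
    """
    x_min = x.min(axis=0)
    x_max = x.max(axis=0)
    return (x - x_min) / (x_max - x_min)
```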
Further, the trained neural network 1 includes 2 convolution layers and 2 pooling layers. Each convolution layer consists of 30 convolution kernels of size 2 with a stride of 1; the pooling kernels have a size of 2 and a stride of 1, and the pooling mode is average pooling. The activation function is ReLU, f(x) = max(0, x). The training error is evaluated with the MSE mean square error, MSE = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)², where y_i is the target value and ŷ_i the network output. The optimization function is the Adam optimization algorithm, and the initial learning rate is set to 0.001.
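The exact layer ordering and the input/output dimensions are not spelled out above; the PyTorch sketch below is one plausible realization under the stated hyperparameters (two Conv1d layers of 30 kernels of size 2 and stride 1, average pooling of size 2 and stride 1, ReLU, MSE loss, Adam with learning rate 0.001), assuming an 8-band response vector as input and a 31-band reflectance vector as output.

```python
import torch
import torch.nn as nn

class ResponseToReflectanceCNN(nn.Module):
    """Maps an 8-band camera response vector to a 31-band reflectance vector."""

    def __init__(self, n_bands_in=8, n_bands_out=31):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 30, kernel_size=2, stride=1),   # 30 kernels, size 2, stride 1
            nn.ReLU(),
            nn.AvgPool1d(kernel_size=2, stride=1),        # average pooling, size 2, stride 1
            nn.Conv1d(30, 30, kernel_size=2, stride=1),
            nn.ReLU(),
            nn.AvgPool1d(kernel_size=2, stride=1),
        )
        feat_len = n_bands_in - 4                         # each conv/pool stage shortens the sequence by 1
        self.regressor = nn.Linear(30 * feat_len, n_bands_out)

    def forward(self, x):                                 # x: (batch, n_bands_in)
        h = self.features(x.unsqueeze(1))                 # (batch, 30, feat_len)
        return self.regressor(h.flatten(1))               # (batch, n_bands_out)

model = ResponseToReflectanceCNN()
criterion = nn.MSELoss()                                  # MSE training error
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```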
Further, the trained neural network 2 is an extreme learning machine, and the LAB value output by the extreme learning machine is expressed as:

y_j = Σ_{i=1}^{L} β_i g(ω_i · x_j + b_i)

wherein y_j represents the output LAB value, ω_i is the connection weight between the input layer and the i-th hidden-layer node, β_i is the connection weight between the i-th hidden-layer node and the output layer, g() is the activation function, here chosen as the Sigmoid activation function g(x) = 1/(1 + e^(−x)), b_i is the bias of the i-th hidden-layer node, x_j is the input of the j-th sample, and L is the number of hidden-layer nodes.
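The extreme learning machine above keeps the input weights ω_i and biases b_i random and fixed, and solves only for the output weights β_i; the following NumPy sketch illustrates this scheme with the Sigmoid activation. The hidden-layer size and the pseudo-inverse solver are assumptions not stated above.

```python
import numpy as np

class ELM:
    """Extreme learning machine: random hidden layer, closed-form output weights."""

    def __init__(self, n_inputs, n_hidden, n_outputs, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.uniform(-1.0, 1.0, (n_inputs, n_hidden))   # input-to-hidden weights (omega)
        self.b = rng.uniform(-1.0, 1.0, n_hidden)                # hidden-node biases
        self.beta = np.zeros((n_hidden, n_outputs))              # hidden-to-output weights (beta)

    def _hidden(self, x):
        return 1.0 / (1.0 + np.exp(-(x @ self.w + self.b)))      # Sigmoid activation g()

    def fit(self, x, y):
        h = self._hidden(x)                                      # (n_samples, n_hidden)
        self.beta = np.linalg.pinv(h) @ y                        # least-squares solution for beta
        return self

    def predict(self, x):
        return self._hidden(x) @ self.beta                       # y_j = sum_i beta_i * g(w_i . x_j + b_i)

# Example usage with assumed sizes: 31 reflectance bands in, 3 LAB components out, 100 hidden nodes.
# elm = ELM(n_inputs=31, n_hidden=100, n_outputs=3).fit(reflectance_train, lab_train)
```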
Further, the specific steps of the first step are as follows:
Gray-scale images of the color blocks are acquired sequentially with the imaging system in the bands from 400 nm to 750 nm at 50 nm intervals, and the color blocks are photographed with a dedicated hyperspectral imaging workstation to obtain their hyperspectral images.
Further, the gray level image in the first step is obtained through an imaging system, wherein the imaging system comprises an imager lens 1, a CMOS sensor 2, a steering engine 3, a filter wheel 4, a filter 5 and a standard light source 6;
the steering engine 3 is used for driving the filter wheel 4, and the filter wheel 4 is provided with a filter 5;
the light emitted by the standard light source 6 is reflected by the color block, enters the imager lens 1 through the optical filter 5, and is imaged by the CMOS sensor 2.
Further, the standard light source 6 consists of two D65 lamp tubes with a power of 20 W.
Further, the filter wheel 4 is provided with 8 round holes with the same diameter, and the filter 5 is arranged on the round holes.
Further, the steering engine 3 is controlled by an STM32 microcontroller.
A deep-learning-based wood dyeing color difference detection system, comprising: a calibration measurement module, an actual measurement module, a data processing module and a detection module;
the calibration measurement module is used for acquiring a standard color card, selecting a color block from the standard color card, and shooting to obtain gray level images and hyperspectral images of the color block under each wave band;
The actual measurement module is used for carrying out space correction on gray images under each wave band by using a standard white board, then carrying out response value extraction on the gray images subjected to space correction, carrying out reflectivity extraction on hyperspectral images, and carrying out normalization operation on the extracted reflectivity;
the data processing module is used for training the neural network by taking the response value as input and the normalized reflectivity as output to obtain a trained neural network 1, measuring the LAB value of the color block by using a spectrophotometer, and taking the normalized reflectivity as input and the LAB value as output to train the neural network to obtain a trained neural network 2;
the detection module is used for acquiring a wood dyeing veneer to be identified and a wood dyeing reference standard LAB value, then acquiring a response value of the wood dyeing veneer to be identified, inputting the acquired response value into the trained neural network 1 to acquire output reflectivity, inputting the reflectivity into the neural network 2 to acquire the output LAB value, and finally comparing the output LAB value with the wood dyeing reference standard LAB value to acquire chromatic aberration through a chromatic aberration formula;
the specific steps of using the standard white board to spatially correct the gray level image under each wave band are as follows:
Shooting and storing a uniform standard white board with a reflectivity of 1 using the imaging system; when a color block U is shot, each pixel point on U is normalized, completing the spatial correction, where the normalization formula is:

û_ijk = (u_ijk − d) / s_ijk

wherein û_ijk represents the normalized response value of the pixel at coordinates (i, j) on the gray-scale image obtained by shooting U under the filter of the k-th band, u_ijk represents the response value before normalization, d represents the dark-current response of the camera, i.e. the response value the camera still produces in the absence of any light, and s_ijk represents the response value at (i, j) of the gray-scale image obtained by shooting the standard white board under the filter of the k-th band;
the normalization operation on the extracted reflectivity is specifically:

y_i = (x_i − min(x_i)) / (max(x_i) − min(x_i))

wherein x_i represents the reflectivity value of the i-th band, min(x_i) represents the minimum of the reflectivity values of the i-th band over all data samples, max(x_i) represents the maximum of the reflectivity values of the i-th band over all data samples, and y_i is the normalized value;
the trained neural network 1 comprises 2 convolution layers and 2 pooling layers; each convolution layer consists of 30 convolution kernels of size 2 with a stride of 1, the pooling kernels have a size of 2 and a stride of 1, the pooling mode is average pooling, ReLU is selected as the activation function, the training error is evaluated with the MSE mean square error, the Adam optimization algorithm is selected as the optimization function, and the initial learning rate is set to 0.001;
The trained neural network 2 is an extreme learning machine, and the LAB value output by the extreme learning machine is expressed as:

y_j = Σ_{i=1}^{L} β_i g(ω_i · x_j + b_i)

wherein y_j represents the output LAB value, ω_i is the connection weight between the input layer and the i-th hidden-layer node, β_i is the connection weight between the i-th hidden-layer node and the output layer, g() is the activation function, and b_i is the bias of the i-th hidden-layer node;
the specific steps of obtaining gray level images and hyperspectral images under each wave band of the color blocks are as follows:
gray-scale images of the color blocks are acquired sequentially with the imaging system in the bands from 400 nm to 750 nm at 50 nm intervals, and the color blocks are photographed with a dedicated hyperspectral imaging workstation to obtain their hyperspectral images;
the gray-scale images in the calibration measurement module are obtained through an imaging system, and the imaging system comprises an imager lens 1, a CMOS sensor 2, a steering engine 3, a filter wheel 4, a filter 5 and a standard light source 6;
the steering engine 3 is used for driving the filter wheel 4, and the filter wheel 4 is provided with a filter 5;
light emitted by the standard light source 6 is reflected by the color block, enters the imager lens 1 through the optical filter 5, and is imaged by the CMOS sensor 2;
the standard light source 6 consists of two D65 lamp tubes with a power of 20 W;
the filter wheel 4 is provided with 8 round holes with the same diameter, and the filter 5 is arranged on the round holes;
the steering engine 3 is controlled by an STM32 microcontroller.
A deep-learning-based wood dyeing color difference detection medium storing a computer-readable program for performing the steps of any one of claims 1 to 8.
The beneficial effects of the invention are as follows:
The system has a compact structure, is easy to assemble, disassemble and carry, and eliminates interference from ambient light, so the surface reflectance of the object under test is obtained more conveniently; it enables pixel-level, image-and-spectrum-combined detection of wood dyeing quality and has broad application prospects in the wood dyeing industry. The method and device do not perform the color space conversion directly but convert to LAB indirectly through reflectance, thereby avoiding the loss of detection accuracy caused by indistinguishable metameric products.
To address the shortcomings of traditional wood dyeing quality detection, such as its small detection range and time-consuming operation, this application designs a wood dyeing quality detection device that is simple to operate, low in cost and capable of high-precision global detection, together with a matching detection method.
Drawings
FIG. 1 is a system block diagram;
FIG. 2 is a schematic diagram of a filter wheel structure in the system;
FIG. 3 is a schematic diagram of the system operation;
FIG. 4 is a flowchart of an example implementation.
Detailed Description
It should be noted in particular that, without conflict, the various embodiments disclosed herein may be combined with each other.
The first embodiment is as follows: referring to fig. 4, a method for detecting a color difference of wood dyeing based on deep learning according to the present embodiment includes:
the method for detecting the color difference of the wood dyeing based on deep learning is characterized by comprising the following steps of:
step one: acquiring a standard color card, selecting a color block from the standard color card, and then shooting to obtain gray level images and hyperspectral images of the color block under each wave band;
step two: carrying out space correction on gray images in each wave band by using a standard white board, then extracting response values of the gray images subjected to space correction, extracting reflectivity of a hyperspectral image, and carrying out normalization operation on the extracted reflectivity;
step three: training the neural network by using the response value as input and the normalized reflectivity as output to obtain a trained neural network 1;
Step four: measuring the LAB value of the color block by using a spectrophotometer, and training the neural network by taking the normalized reflectivity as input and the LAB value as output to obtain a trained neural network 2;
step five: obtaining a wood dyeing veneer to be identified and a wood dyeing reference standard LAB value corresponding to the dyeing veneer, then obtaining a response value of the wood dyeing veneer to be identified, inputting the obtained response value into a trained neural network 1 to obtain an output reflectivity, and then inputting the reflectivity into a neural network 2 to obtain an output LAB value;
step six: comparing the output LAB value with a wood dyeing reference standard LAB value, and obtaining a color difference through a color difference formula;
in the second step, the specific steps of using the standard white board to spatially correct the gray level image under each wave band are as follows:
shooting and storing a uniform standard white board with a reflectivity of 1 using the imaging system; when a color block U is shot, each pixel point on U is normalized, completing the spatial correction, where the normalization formula is:

û_ijk = (u_ijk − d) / s_ijk

wherein û_ijk represents the normalized response value of the pixel at coordinates (i, j) on the gray-scale image obtained by shooting U under the filter of the k-th band, u_ijk represents the response value before normalization, d represents the dark-current response of the camera, i.e. the response value the camera still produces in the absence of any light, and s_ijk represents the response value at (i, j) of the gray-scale image obtained by shooting the standard white board under the filter of the k-th band;
the normalization operation on the extracted reflectivity is specifically:

y_i = (x_i − min(x_i)) / (max(x_i) − min(x_i))

wherein x_i represents the reflectivity value of the i-th band, min(x_i) represents the minimum of the reflectivity values of the i-th band over all data samples, max(x_i) represents the maximum of the reflectivity values of the i-th band over all data samples, and y_i is the normalized value.
In this application, the system includes a camera imaging system, a filter wheel filter system, and a data processing system.
1. Camera imaging system
The imaging system is composed of an imaging lens and a black-and-white CMOS sensor; it converts the spectral signal projected onto the sensor in each band into a digital electrical signal so as to generate a multichannel spectral image. The imaging lens is a megapixel fixed-focal-length lens with ultra-low distortion, high resolution, excellent close-up (macro) performance and various optical correction modes.
2. Filter wheel filtering system
The filter wheel filtering system comprises optical filters, a filter wheel and a steering-engine drive. The filter wheel is fixed in front of the lens and held tightly against it, and a fixing device keeps the wheel perpendicular to the imaging light path. The reflection spectrum of the measured object is narrowed to a single band after passing through a filter and is then imaged by the black-and-white CMOS sensor to form a gray-scale image; the steering engine rotates the filter wheel to switch between filters of different bands, finally yielding gray-scale images in each band.
The filter wheel has 8 round holes of the same diameter, and filters of different bands are fixed in these round-hole channels. A round hole at the center of the filter wheel connects it to the steering engine, which is controlled by an STM32, so the filter wheel is driven to rotate and the filters can be switched.
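Purely as an illustration of the band-sequential acquisition implied by this filter-wheel design, the following Python sketch loops over the eight filter positions; the serial command format, the rotate_to and capture_gray_image helpers and the port name are hypothetical placeholders, not an interface disclosed in this application.

```python
import time
import numpy as np
import serial                                  # pyserial, assumed link to the STM32 controller
from camera import capture_gray_image         # hypothetical helper wrapping the CMOS camera SDK

N_FILTERS = 8                                  # eight same-diameter holes on the filter wheel

def rotate_to(port, position):
    """Hypothetical command asking the STM32 to turn the steering engine to a filter slot."""
    port.write(f"MOVE {position}\n".encode())
    time.sleep(0.5)                            # allow the wheel to settle before imaging

def acquire_band_stack(port_name="/dev/ttyUSB0"):
    """Capture one gray image per filter position and stack them as (K, H, W)."""
    with serial.Serial(port_name, 115200, timeout=1) as port:
        frames = []
        for k in range(N_FILTERS):
            rotate_to(port, k)
            frames.append(capture_gray_image())
        return np.stack(frames)
```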
3. Data processing system
The data processing system of the present application includes a spectral image processing program and a standard reflective whiteboard. The spectral image processing covers functions such as multispectral image correction and reflectance reconstruction. Image correction mainly uses the standard reflective whiteboard to eliminate spatial inconsistencies in the measurement results caused by uneven illumination or uneven light transmittance of the lens.
In order to solve the technical problems, the technical scheme of the application is realized as follows:
in order to ensure the effectiveness of the wood dyeing quality detection method based on the deep learning technology, training samples is important, namely the quality of the color. Therefore, in the technical scheme of the application, the American Pantone color card is selected as a training sample to establish a data set.
The following implementation steps are as follows:
step 1, preparing a standard color card, ensuring that color points of color blocks in the color card are uniformly distributed in a color gamut range, and preparing a dyed wood veneer for testing. The color gamut coverage of the wood veneer color is wide, so that the spectrum reconstruction accuracy can be better verified in all the color gamut ranges.
Step 2, placing the color card to be detected in an objective table by using the optical filter wheel type multispectral imaging system designed in the application, and obtaining gray level pictures of the color card to be detected under different wave bands; imaging the color card to be detected by using a hyperspectral imaging system to obtain the reflectivity of the color card to be detected; the colorimetric parameters LAB for each color patch on the color chart are measured using a spectrophotometer.
And step 3, firstly, carrying out space correction on the obtained gray level image, then, respectively selecting the interested areas of the gray level image and the hyperspectral image under different wave bands, namely, the area corresponding to each color block in the color card, obtaining the response value and the reflectivity of each color block on the color card, and carrying out normalization operation on the reflectivity.
And 4, corresponding the response value of each color block in the color card to the normalized reflectivity one by one, and constructing a data set.
And step 5, obtaining the mapping relation between the response value and the reflectivity by using the obtained data set, thereby obtaining the reconstructed reflectivity.
And 6, the reflectivity obtained after reconstruction corresponds to the chromaticity parameter LAB one by one, and a data set is constructed.
And 7, obtaining a regression relation between the reconstructed reflectivity and the chromaticity parameter LAB by using the new data set, and obtaining a final required real color value by using the regression relation.
And 8, using the dyed wood veneer prepared in advance to verify the accuracy of reflectivity reconstruction and chromaticity parameter detection. The specific flow is shown in fig. 4.
The filter-wheel multispectral imaging system in step 2 is shown in fig. 1 and comprises a CMOS sensor 2, an imaging lens 1, a filter wheel 4, filters 5, a steering engine 3, a control module, a D65 standard light source 6 and a spectral image processing module. The filter wheel is fixed in front of the imaging lens, 8 round-hole channels of the same diameter are embedded in the filter wheel, and the filters are mounted in these channels, as shown in fig. 2. The reflection spectrum of the measured color card is filtered by a filter and, after imaging by the black-and-white CMOS image sensor, forms a spectral image of that band. The steering engine module is connected to the control module, which outputs signals controlling the steering engine to rotate the filter wheel and switch between filters of different bands, thereby acquiring gray-scale images of the sample under test in different bands. The program execution of the system is shown in fig. 3.
In the training phase, reference numeral 7 is a color block, and in the recognition phase, reference numeral 7 is a wood veneer to be tested.
The specific operation of correcting the gray image using the standard whiteboard in step 3 is as follows: a uniform standard whiteboard S with a reflectivity of 1 is photographed and stored; when the color card U is actually photographed, U is normalized according to S, i.e. each pixel on U is normalized with the pixel at the same position on S. The normalization is expressed by the formula:

û_ijk = (u_ijk − d) / s_ijk

wherein u_ijk represents the un-normalized response value of the (i, j)-th pixel of the gray image of the color card under test captured under the k-th band filter, s_ijk represents the response value of the (i, j)-th pixel of the gray image of the standard whiteboard captured under the k-th band filter, d represents the dark-current response of the camera, i.e. the response value the camera still produces in the absence of any illumination, and û_ijk is the normalized response value of u_ijk, a fraction between 0 and 1.
The specific operation of extracting the response values and the reflectance from the gray images and the hyperspectral image in step 3 is as follows: after the gray images of the color card under test in each band are obtained, the color block area is taken as the region of interest (ROI), the response values of all pixels in the region are extracted, and their mean is taken as the response value of the color block in that band; repeating this from the shortest band to the longest yields the response values X = [x_1, x_2, x_3, ..., x_8] of the same color block position under the 8 bands. The color card is photographed with a dedicated hyperspectral imaging workstation to obtain its hyperspectral image; a color block area is then selected as the region of interest, the spectral reflectance values of all pixels in the area are obtained and averaged, and the average reflectance is calculated as:

R̄(λ) = (1/N) Σ_{(x,y)} R(λ)_{x,y}

wherein R̄(λ) is the average reflectance, R(λ)_{x,y} is the reflectance at pixel (x, y), and N is the number of pixels in the region; the resulting average is taken as the reflectance of the color block.
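A minimal NumPy sketch of the ROI averaging just described is given below; the array shapes and the boolean-mask representation of the region of interest are assumptions.

```python
import numpy as np

def patch_response_vector(gray_stack, roi_mask):
    """Average the corrected responses inside the color-block ROI for each band.

    gray_stack : corrected gray images, shape (8, H, W) for the 8 band filters
    roi_mask   : boolean mask of the color block, shape (H, W)
    Returns X = [x_1, ..., x_8].
    """
    return np.array([band[roi_mask].mean() for band in gray_stack])

def patch_mean_reflectance(hyperspectral_cube, roi_mask):
    """Average the hyperspectral reflectance over the same ROI.

    hyperspectral_cube : reflectance cube, shape (H, W, 31) for 31 spectral bands
    Returns the average reflectance spectrum of the color block.
    """
    return hyperspectral_cube[roi_mask].mean(axis=0)
```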
The normalization formula for the reflectance is as follows:

y_i = (x_i − min(x_i)) / (max(x_i) − min(x_i))

wherein x_i is the reflectivity value of the i-th band, min(x_i) is the minimum of the reflectivity values of the i-th band over all data samples, max(x_i) is the maximum of the reflectivity values of the i-th band over all data samples, and y_i is the normalized reflectivity of the i-th band, a fraction between 0 and 1.
In the step 5, the mapping relation is specifically that a convolutional neural network is used for carrying out fitting operation on data, the normalized response value is used as input, the reflectivity is used as output, a convolutional neural network model is built, training is stopped after the training round number is reached, and final network model parameters are saved.
The mapping relationship between the reflectance values and the chromaticity parameter LAB in step 7 is constructed with an extreme learning machine that fits the data, and can be expressed by the formula:

y_j = Σ_{i=1}^{L} β_i g(ω_i · x_j + b_i)

wherein y_j is the LAB value of the j-th sample, ω_i is the connection weight between the input layer and the i-th hidden-layer node, β_i is the connection weight between the i-th hidden-layer node and the output layer, g() is the activation function, and b_i is the bias of the i-th hidden-layer node.
As an embodiment of the present application, the present application may be implemented as a system, including a calibration measurement module 1, an actual measurement module 2, and a data processing module 3;
the calibration measurement module 1 is used for acquiring training response data of a target to be measured, acquiring calibration response data of a standard whiteboard and acquiring dark current calibration response data of a camera;
the actual measurement module 2 is used for acquiring actual reflectivity data of the target to be measured and also used for acquiring actual chromaticity parameter data of the target to be measured;
The data processing module 3 is implemented with a CNN convolutional neural network and an extreme learning machine (ELM). It constructs matrix data from the training response data of the target under test, the calibration response of the standard whiteboard and the camera dark-current calibration response data; it normalizes the matrix data to obtain the original data set, divides the original data into a training set and a test set, uses the test set to prevent overfitting during training, and finally uses the response data of the dyed veneer under test to obtain its true color parameters;
in this embodiment, the target to be measured is a PANTONE standard color card, and the standard whiteboard refers to a standard diffuse reflection reference object for spatial correction of the imaging system; the dark current calibration response of the camera refers to a response value that the camera still exists in the absence of any illumination;
The calibration measurement module 1 comprises a first zero point measurement unit, a first actual measurement unit, a calibration measurement unit and a denoising unit;
the first zero point measuring unit is used for measuring a response value generated under the action of dark current by using the cmos sensor to obtain dark current response data;
the first actual measurement unit is used for measuring response values of the target to be measured under the filtering action of the plurality of optical filters by using the cmos sensor to obtain original target response data;
the calibration measurement unit is used for measuring response values of the standard whiteboard under the filtering action of the plurality of optical filters by using the cmos sensor to obtain standard whiteboard calibration response data;
The denoising unit subtracts the dark-current response data from the original target response data and divides the result by the standard whiteboard calibration response data to obtain the training response data of the target under test;
the actual measurement module 2 comprises a second actual measurement unit.
The second actual measurement unit measures the surface reflectance of the target under test with the dedicated hyperspectral imaging workstation to obtain measured target reflectance data, and also measures the surface chromaticity parameter CIE L*a*b* with an SC-10 color difference meter to obtain measured target chromaticity parameter LAB data;
In this embodiment, the second actual measurement unit images the target under test with the dedicated hyperspectral imaging workstation, selects a region of interest on the target, and uses the mean reflectance of all pixels in the region as the denoised base reflectance data; it also measures the surface chromaticity parameter CIE L*a*b* with the SC-10 color difference meter to obtain the measured chromaticity parameter LAB data. Averaging over several measurements is used to obtain denoised base data, the number of measurements generally being 3.
The data processing module 3 comprises a data matrix construction unit, a CNN neural network processing unit and an ELM neural network processing unit;
the data matrix construction unit is used for constructing matrix data by utilizing training response data of the target to be tested;
The CNN neural network processing unit takes the matrix data as the original data set, divides it into a training set and a test set, trains on the response data of the target under test with the training set while using the test set to prevent overfitting, and finally uses the measured response data of the dyed veneer under test to obtain its true reflectance parameters;
The ELM neural network processing unit normalizes the reflectance parameters obtained by the CNN neural network processing unit to obtain an original data set, divides it into a training set and a test set, trains on the reflectance data of the target under test with the training set while using the test set to prevent overfitting, and finally uses the reflectance parameters of the dyed veneer under test obtained by the CNN neural network processing unit to obtain the true chromaticity parameters of the dyed veneer under test measured in practice.
In this embodiment, the data matrix construction unit builds the matrix from the response data of the target under test using the Python programming language, the elements being the measured training response data of each channel. Python is used to write and run the code, the PyTorch environment serves as the neural network runtime, and a CNN convolutional neural network is the core network structure. The matrix is normalized to form the original data, which is split so that the training set accounts for 80% of the samples and the test set for 20%; the training response data of the target under test are trained with the training set while accuracy is checked on the test set to prevent overfitting and obtain the optimal network structure. The reflectance parameters obtained by the CNN neural network processing unit are then normalized to obtain a second original data set, again split 80% training and 20% test, with the ELM as the core network structure; the training reflectance data of the target under test are trained with the training set while accuracy is checked on the test set to prevent overfitting and obtain the optimal network structure. Finally, the response value data of the dyed veneer are fed into the models to obtain the chromaticity parameter LAB of the dyed veneer.
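The 80%/20% split and the accuracy check used to guard against overfitting can be outlined as follows; the shuffling seed, epoch count and reporting interval are assumptions, and model, criterion and optimizer refer to the CNN sketch given earlier.

```python
import numpy as np
import torch

def split_80_20(x, y, seed=0):
    """Shuffle the samples and split them 80% / 20% into training and test sets."""
    idx = np.random.default_rng(seed).permutation(len(x))
    cut = int(0.8 * len(x))
    return x[idx[:cut]], y[idx[:cut]], x[idx[cut:]], y[idx[cut:]]

def train(model, criterion, optimizer, x_tr, y_tr, x_te, y_te, epochs=5000):
    """Full-batch training while tracking the test error to watch for overfitting."""
    x_tr = torch.as_tensor(x_tr, dtype=torch.float32)
    y_tr = torch.as_tensor(y_tr, dtype=torch.float32)
    x_te = torch.as_tensor(x_te, dtype=torch.float32)
    y_te = torch.as_tensor(y_te, dtype=torch.float32)
    for epoch in range(epochs):
        optimizer.zero_grad()
        loss = criterion(model(x_tr), y_tr)
        loss.backward()
        optimizer.step()
        if epoch % 500 == 0:
            with torch.no_grad():
                test_loss = criterion(model(x_te), y_te)
            print(f"epoch {epoch}: train MSE {loss.item():.4f}, test MSE {test_loss.item():.4f}")
```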
In this embodiment, the CNN neural network processing unit: the CNN extracts features from objects and then classifies, predicts or makes decisions based on those features; here it uses gradient descent to minimize the training error, MSE = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)². The ELM neural network processing unit: the ELM requires no iterative learning and has advantages such as good generalization performance and fast learning, and it is suitable for the measurement conditions of this embodiment. The device takes the response value data of each channel as input and computes the target chromaticity parameter LAB data.
The wood dyeing color difference detection device also comprises a result display module 4;
The display module 4 displays the true chromaticity parameters LAB of the dyed veneer under test and also displays the training effect on the training set.
As an embodiment of the present application, the present application may be implemented as a method comprising a calibration measurement step, an actual measurement step, and a data processing step;
a calibration measurement step, which is used for acquiring training response data of a target to be measured, acquiring calibration response data of a standard whiteboard and acquiring dark current calibration response data of a camera;
An actual measurement step, which is used for acquiring actual reflectivity data of the target to be measured and also used for acquiring actual chromaticity parameter data of the target to be measured;
The data processing step is implemented with a CNN neural network and an ELM neural network. Matrix data are constructed from the training response data of the target under test and used as the original data set, which is divided into a training set and a test set; the training response data of the target under test are trained with the training set while the test set prevents overfitting, and the measured response values of the target under test are then used to obtain its reflectance. In the ELM neural network processing step, the reflectance data obtained in the CNN neural network processing step are normalized to obtain an original data set and divided into a training set and a test set; the reflectance data of the target under test are trained with the training set while the test set prevents overfitting, and the reflectance data of the target under test obtained in the CNN step are used to obtain its measured chromaticity parameters.
The calibration measurement step comprises a first zero point measurement step, a first actual measurement step, a calibration measurement step and a denoising step;
A first zero point measurement step, which is used for measuring a response value generated under the action of dark current by using a cmos sensor to obtain dark current response data;
a first actual measurement step, which is to measure the response value of the target to be measured under the filtering action of a plurality of optical filters by using a cmos sensor to obtain original target response data;
a calibration measurement step, which is used for measuring response values of the standard whiteboard under the filtering action of a plurality of optical filters by using a cmos sensor to obtain calibration response data of the standard whiteboard;
a denoising step, in which the dark-current response data are subtracted from the original target response data and the result is divided by the standard whiteboard calibration response data to obtain the training response data of the target under test;
the actual measurement step comprises an actual measurement step;
The actual measurement step measures the surface reflectance of the target under test with the dedicated hyperspectral imaging workstation to obtain measured target reflectance data, and also measures the surface chromaticity parameter CIE L*a*b* with an SC-10 color difference meter to obtain measured target chromaticity parameter LAB data;
the data processing step comprises a matrix construction step, a CNN neural network data processing step and an ELM neural network data processing step;
A data matrix construction step, which is used for constructing matrix data by utilizing training response data of a target to be tested;
a CNN neural network processing step, in which the matrix data are taken as the original data set and divided into a training set and a test set; the response data of the target under test are trained with the training set while the test set prevents overfitting, and the measured response data of the dyed veneer under test are finally used to obtain its true reflectance parameters;
an ELM neural network processing step, in which the reflectance parameters obtained by the CNN neural network processing unit are normalized to obtain an original data set and divided into a training set and a test set; the reflectance data of the target under test are trained with the training set while the test set prevents overfitting, and the reflectance parameters of the dyed veneer under test obtained by the CNN neural network processing unit are finally used to obtain its true chromaticity parameters measured in practice.
In this embodiment, the multispectral method is combined with the CNN and ELM neural networks: the CNN reconstructs the reflectance of the dyed wood veneer under test, and the ELM maps the reflectance to the chromaticity parameter LAB, achieving high-precision color difference detection for wood dyeing.
The method for detecting the color difference of dyed wood also comprises a result display step;
the result display step is used for displaying real chromaticity parameters LAB of the target to be tested; and is also used for displaying the training effect of the CNN neural network and the ELM neural network.
Example 1:
obtaining gray level pictures and hyperspectral images of a color card to be measured under different wave bands, obtaining the reflectivity of the color card to be measured according to the hyperspectral images, and measuring chromaticity parameters LAB of each color block on the color card by using a spectrophotometer;
step 1, preparing a pantongGP 1601U standard color card. And ready for testing after dyeing the wood veneer.
And 2, shooting the standard color card by using the wood dyeing quality detection device and the hyperspectral imaging system to obtain gray level images and hyperspectral images in all wave bands.
And step 3, firstly, carrying out space correction on the gray level image under each wave band by using a standard white board, respectively carrying out extraction operation of response values and reflectivities on the obtained gray level image (response values) and hyperspectral image (reflectivities) under each wave band after correction, and carrying out normalization operation on the reflectivities.
And 4, corresponding the response value of each color block to the normalized reflectivity one by one, and constructing a data set.
And step 5, sending the training set data sample into a designed convolutional neural network model for training to obtain a mapping model of the response value and the reflectivity.
And 6, verifying the model by using the prepared dyed wood veneer.
Step 7: measure the LAB values of the color blocks in the data set with a spectrophotometer; pair these LAB values one-to-one with the reflectance values reconstructed by the model, and construct a data set with reflectance as input and LAB as output.
Step 8: construct an extreme learning machine model and train it with the training samples to obtain the mapping between reflectance and the chromaticity parameter LAB; the error metric used is the MAE, and the trained model is verified with the dyed wood veneer as in step 6.
Step 9: obtaining an LAB value of a reference color standard and a dyed veneer to be detected, obtaining a response value of the dyed veneer, inputting the response value into a model 1 to obtain reflectivity, and then inputting the reflectivity into a model 2 to obtain the LAB value;
The obtained LAB value is compared with the LAB value of the reference color standard, and the color difference is calculated with a color difference formula; if the color difference exceeds the specified threshold, the sample fails.
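The application does not state which color difference formula is used; assuming the common CIE76 ΔE*ab, i.e. the Euclidean distance in LAB space, the comparison against the reference standard and the threshold check can be sketched as follows (the threshold value is a placeholder).

```python
import numpy as np

def delta_e_cie76(lab_pred, lab_ref):
    """CIE76 color difference: Euclidean distance between two LAB triplets."""
    return float(np.linalg.norm(np.asarray(lab_pred, float) - np.asarray(lab_ref, float)))

def passes(lab_pred, lab_ref, threshold=2.0):
    """Pass/fail decision against the specified color-difference threshold (placeholder value)."""
    return delta_e_cie76(lab_pred, lab_ref) <= threshold
```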
The specific operation in step 2 is as follows: place the color card under test on the stage of the device, adjust the camera height for the best imaging effect, enclose the setup with blackout cloth, use two 20 W D65 lamp tubes as the only lighting, adjust the camera focus, and switch the filters with the external control module to acquire gray-scale images of the color card sequentially in the bands from 400 nm to 750 nm at 50 nm intervals; the standard color card is also photographed with a dedicated hyperspectral imaging workstation to obtain its hyperspectral image.
The specific operation of spatially correcting the gray images with the standard whiteboard in step 3 is as follows: the imaging system designed in this application photographs and stores a uniform standard whiteboard S with a reflectivity of 1; when the color card U is photographed, each pixel on U is normalized using S, with the normalization formula:

û_ijk = (u_ijk − d) / s_ijk

wherein û_ijk is the normalized response value of the pixel at coordinates (i, j) on the gray image obtained by photographing U under the k-th band filter, u_ijk is its response value before normalization, d is the dark-current response of the camera, i.e. the response value the camera still produces in the absence of any illumination, and s_ijk is the response value at (i, j) of the gray image obtained by photographing the standard whiteboard under the k-th band filter.
The operation of extracting the responses and the reflectance in step 3 is as follows: for the image of each color block under each filter, the whole color block is selected as the ROI, the response values of all pixels in the region are averaged, and the average is taken as the response value of the color block. Since the device contains 8 filters of different bands, each color block has eight images and therefore eight response values. The eight-channel gray images of the standard reflective whiteboard are captured in the same way, and their response values at the different color-block positions are extracted. In the hyperspectral image, the ROI likewise follows the outline of the whole color block, and the mean reflectance of all pixels inside the outline is taken as the reflectance of the color block; in this embodiment, the reflectance covers 31 spectral bands from 400 nm to 700 nm at 10 nm intervals.
The normalization formula for the reflectance is as follows:

y_i = (x_i − min(x_i)) / (max(x_i) − min(x_i))

wherein x_i is the reflectivity value of the i-th band, min(x_i) is the minimum of the reflectivity values of the i-th band over all data samples, max(x_i) is the maximum of the reflectivity values of the i-th band over all data samples, and y_i is the normalized reflectivity of the i-th band, a fraction between 0 and 1.
The model in step 5 is specifically as follows: the convolutional neural network is implemented with the PyTorch framework; the optimization function is the Adam optimization algorithm with an initial learning rate of 0.001; the model contains 2 convolution layers and 2 pooling layers, each convolution layer consisting of 30 convolution kernels of size 2 with a stride of 1; the pooling kernels have a size of 2 and a stride of 1 and the pooling mode is average pooling; the activation function is ReLU; the training error is evaluated with the MSE mean square error; the number of training epochs is 5000, and the model is saved after training is complete.
Example 2:
obtaining gray level pictures and hyperspectral images of a color card to be measured under different wave bands, obtaining the reflectivity of the color card to be measured according to the hyperspectral images, and measuring chromaticity parameters LAB of each color block on the color card by using an SC-10 color difference meter;
Step 1: prepare a Pantone GP1601U standard color card, and prepare a dyed wood veneer for testing.
And 2, respectively acquiring gray level images and hyperspectral images in each wave band by using the wood dyeing color difference detection device.
And step 3, firstly, carrying out space correction on the gray level image under each wave band by using a standard white board, respectively carrying out extraction operation of response values and reflectivities on the obtained gray level image (response values) and hyperspectral image (reflectivities) under each wave band after correction, and carrying out normalization operation on the reflectivities.
Step 4: pair the response value of each color block one-to-one with its normalized reflectance, construct a data set, and divide it into a training set and a test set at a ratio of 8:2.
And step 5, sending the training set data sample into a designed CNN convolutional neural network model for training to obtain a mapping model of the response value and the reflectivity.
And 6, verifying the model by using the test set.
Step 7: measure the LAB values of the color blocks in the data set with an SC-10 color difference meter, construct a data set with reflectance as input and LAB as output, and divide it into a training set and a test set at a ratio of 8:2.
Step 8: construct an extreme learning machine (ELM) model and train it with the training samples to obtain the mapping between reflectance and the chromaticity parameter LAB; the error metric used is the MAE, and the trained model is verified with the test set as in step 6.
Step 9: obtain the LAB value of the reference color standard and the dyed veneer under test, obtain the response values of the dyed veneer, input them into model 1 to obtain the reflectance, and then input the reflectance into the ELM model to obtain the LAB value; the response value data of several groups of classical samples are also given, and the chromaticity parameters and errors of the dyed veneers under test are calculated, as shown in Table 1;
table classical sample calculation results
The mean absolute error (MAE) between the LAB values obtained by this method and the actually measured LAB values is 0.57, indicating that the method has high detection accuracy, is convenient to operate and easy to implement, and achieves the goal of detecting the color difference of dyed wood veneers well.
The obtained LAB value is compared with the LAB value of the reference color standard, and the color difference is calculated with a color difference formula; if the color difference exceeds the specified threshold, the sample fails.
The specific operation in step 2 is as follows: place the color card under test on the stage of the device, adjust the camera height for the best imaging effect, enclose the setup with blackout cloth, use two 20 W D65 lamp tubes as the only lighting, adjust the camera focus, and switch the filters with the external control module to acquire gray-scale images of the color card sequentially in the bands from 400 nm to 750 nm at 50 nm intervals; the standard color card is also photographed with a dedicated hyperspectral imaging workstation to obtain its hyperspectral image.
The specific operation of the spatial correction of the gray-scale images with the standard whiteboard in step 3 is as follows: the imaging system designed in this application first photographs and stores a uniform standard whiteboard S with a reflectivity of 1; when the color card U is photographed, each pixel on U is normalized with S, and the normalization formula is as follows:
u′_ijk = (u_ijk − d) / (s_ijk − d)
where u′_ijk is the normalized response value of the pixel at coordinates (i, j) in the gray-scale image obtained by photographing U under the k-th band filter, u_ijk is its response value before normalization, d is the dark-current response of the camera, i.e. the response value the camera still produces without any illumination, and s_ijk is the response value at (i, j) of the gray-scale image obtained by photographing the standard whiteboard under the k-th band filter.
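A minimal NumPy sketch of this per-band correction; representing the dark current d as a single scalar and the images as 2-D arrays are assumptions made for illustration.

```python
import numpy as np

def spatial_correction(card_band, whiteboard_band, dark_current=0.0):
    """Normalize one band of the color-card image against the whiteboard image
    of the same band after subtracting the camera's dark-current response."""
    u = card_band.astype(float)        # u_ijk: raw response of the color card
    s = whiteboard_band.astype(float)  # s_ijk: response of the standard whiteboard
    return (u - dark_current) / (s - dark_current)

# corrected_k = spatial_correction(card_images[k], whiteboard_images[k], dark_current=d)
```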
The extraction of response values and reflectivities in step 3 is performed as follows. For the response values, each color block is imaged under the different optical filters; the whole color block is selected as the region of interest (ROI), the response values of all pixels in the region are averaged, and the average is taken as the response value of the color block. Since the device contains 8 optical filters of different wave bands, each color block has eight images and therefore eight response values. The eight-band gray-scale images of the standard reflection whiteboard are captured in the same way, and their response values at the positions of the different color blocks are extracted respectively. In the hyperspectral image, the ROI is likewise the outline of the whole color block, and the average reflectivity of all pixels within the outline is taken as the reflectivity of the color block; in this embodiment, the reflectivity covers 31 spectral bands from 400 nm to 700 nm at 10 nm intervals.
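As an illustration of the ROI averaging just described, the sketch below selects a whole color block with a boolean mask and averages the pixel responses over the eight filter bands; the mask, image sizes and placeholder data are assumptions.

```python
import numpy as np

def block_response(gray_image, roi_mask):
    """Mean response of all pixels inside the ROI (the whole color block)."""
    return float(gray_image[roi_mask].mean())

# Placeholder data: eight corrected gray-scale images, one per filter band.
bands = [np.random.rand(480, 640) for _ in range(8)]
roi = np.zeros((480, 640), dtype=bool)
roi[100:200, 150:250] = True          # hypothetical outline of one color block
responses = np.array([block_response(img, roi) for img in bands])  # eight response values
```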
The normalization formula for the reflectivity is shown below:
y_i = (x_i − min(x_i)) / (max(x_i) − min(x_i))
where x_i is the reflectivity value of the i-th band, min(x_i) is the minimum of the reflectivity values of the i-th band over all data samples, max(x_i) is the maximum of the reflectivity values of the i-th band over all data samples, and y_i is the normalized reflectivity of the i-th band, a value between 0 and 1.
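A short sketch of the band-wise min-max normalization defined above; the 31-band layout follows the embodiment, while the reflectivity matrix itself is placeholder data.

```python
import numpy as np

def minmax_normalize(x):
    """Normalize each band (column) to [0, 1] using its min/max over all samples."""
    band_min = x.min(axis=0)   # min(x_i) over all data samples
    band_max = x.max(axis=0)   # max(x_i) over all data samples
    return (x - band_min) / (band_max - band_min)

reflectance = np.random.rand(120, 31)  # placeholder: 120 color blocks x 31 bands (400-700 nm, 10 nm step)
normalized = minmax_normalize(reflectance)  # each entry y_i lies between 0 and 1
```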
The model in step 5 is specifically as follows: the convolutional neural network is implemented with the PyTorch framework. The optimization function is the Adam algorithm with an initial learning rate of 0.001. The model contains 2 convolution layers and 2 pooling layers; each convolution layer consists of 30 convolution kernels of size 2 with stride 1, the pooling kernels have size 2 and stride 1, and the pooling mode is average pooling. The activation function is ReLU, the training error is evaluated with the mean square error (MSE), and the number of training epochs is 5000. The model is saved after training is completed.
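A sketch of such a network in PyTorch using the stated hyperparameters (2 convolution layers with 30 kernels of size 2 and stride 1, average pooling with kernel size 2 and stride 1, ReLU, MSE loss, Adam with an initial learning rate of 0.001). The 1-D input of 8 response values, the 31-value reflectivity output and the final fully connected layer are assumptions that the text does not spell out.

```python
import torch
import torch.nn as nn

class ResponseToReflectance(nn.Module):
    """Two Conv1d layers (30 kernels, size 2, stride 1), each followed by ReLU and
    average pooling (kernel 2, stride 1), then an assumed fully connected head that
    maps the flattened features to a 31-band reflectivity vector."""

    def __init__(self, n_in=8, n_out=31):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 30, kernel_size=2, stride=1),
            nn.ReLU(),
            nn.AvgPool1d(kernel_size=2, stride=1),
            nn.Conv1d(30, 30, kernel_size=2, stride=1),
            nn.ReLU(),
            nn.AvgPool1d(kernel_size=2, stride=1),
        )
        feat_len = n_in - 4                      # each conv/pool step shortens the sequence by 1
        self.head = nn.Linear(30 * feat_len, n_out)

    def forward(self, x):                        # x: (batch, 1, n_in)
        return self.head(self.features(x).flatten(1))

model = ResponseToReflectance()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
criterion = nn.MSELoss()

# One illustrative training step on placeholder data (the embodiment trains for 5000 epochs).
x = torch.rand(16, 1, 8)                         # 16 color blocks, 8 response values each
y = torch.rand(16, 31)                           # matching normalized reflectivities
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```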
The extreme learning machine (ELM) model in step 8 can be expressed as the formula y_j = Σ_{i=1..L} β_i·g(ω_i·x_j + b_i), where y_j is the LAB value of the j-th sample, x_j is the reflectivity input of the j-th sample, L is the number of hidden-layer nodes, ω_i is the connection weight between the input layer and the i-th hidden-layer node, β_i is the connection weight between the i-th hidden-layer node and the output layer, g(·) is the activation function, and b_i is the bias.
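A minimal NumPy sketch of an extreme learning machine of this form: the input-to-hidden weights ω_i and biases b_i are drawn at random, and the hidden-to-output weights β_i are solved in closed form with a pseudo-inverse. The number of hidden nodes and the sigmoid activation are assumptions, as the text does not specify them.

```python
import numpy as np

class ELM:
    """Extreme learning machine: random hidden layer, least-squares output weights."""

    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # g(omega_i . x_j + b_i) with a sigmoid activation (assumed)
        return 1.0 / (1.0 + np.exp(-(X @ self.w + self.b)))

    def fit(self, X, Y):
        self.w = self.rng.normal(size=(X.shape[1], self.n_hidden))  # omega: input -> hidden weights
        self.b = self.rng.normal(size=self.n_hidden)                # b: hidden biases
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ Y                           # beta: hidden -> output weights
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Placeholder data: 31-band normalized reflectivity in, LAB out.
X = np.random.rand(80, 31)
Y = np.random.rand(80, 3)
elm = ELM(n_hidden=50).fit(X, Y)
mae = np.mean(np.abs(elm.predict(X) - Y))   # MAE, the error metric used in step 8
```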
It should be noted that the detailed description is merely for explaining and describing the technical solution of the present invention, and the scope of protection of the claims should not be limited thereto. All changes which come within the meaning and range of equivalency of the claims and the specification are to be embraced within their scope.

Claims (8)

1. The method for detecting the color difference of the wood dyeing based on deep learning is characterized by comprising the following steps of:
step one: acquiring a standard color card, selecting a color block from the standard color card, and then shooting to obtain gray level images and hyperspectral images of the color block under each wave band;
step two: carrying out space correction on gray images in each wave band by using a standard white board, then extracting response values of the gray images subjected to space correction, extracting reflectivity of a hyperspectral image, and carrying out normalization operation on the extracted reflectivity;
step three: training the neural network by using the response value as input and the normalized reflectivity as output to obtain a trained neural network 1;
step four: measuring the LAB value of the color block by using a spectrophotometer, and training the neural network by taking the normalized reflectivity as input and the LAB value as output to obtain a trained neural network 2;
Step five: obtaining a wood dyeing veneer to be identified and a wood dyeing reference standard LAB value, then obtaining a response value of the wood dyeing veneer to be identified, inputting the obtained response value into a trained neural network 1 to obtain an output reflectivity, and then inputting the reflectivity into a neural network 2 to obtain an output LAB value;
step six: comparing the output LAB value with a wood dyeing reference standard LAB value, and obtaining a color difference through a color difference formula;
in the second step, the specific steps of using the standard white board to spatially correct the gray level image under each wave band are as follows:
shooting and storing a uniform standard white board with the reflectivity of 1 by using an imaging system, and carrying out normalization operation on each pixel point on a color block when shooting the color block, so as to complete space correction, wherein the normalization formula is as follows:
u′_ijk = (u_ijk − d) / (s_ijk − d), where u′_ijk represents the normalized response value of the pixel at coordinates (i, j) in the gray-scale image obtained by photographing the color block under the filter of the k-th band, u_ijk represents its response value before normalization, d represents the dark-current response of the camera, i.e. the response value the camera still has in the absence of any light, and s_ijk represents the response value at (i, j) of the gray-scale image obtained by photographing the standard whiteboard under the filter of the k-th band;
The normalization operation of the extracted reflectivity is specifically as follows:
y_i = (x_i − min(x_i)) / (max(x_i) − min(x_i)), where x_i represents the reflectivity value of the i-th band, min(x_i) represents the minimum of the reflectivity values of the i-th band over all data samples, max(x_i) represents the maximum of the reflectivity values of the i-th band over all data samples, and y_i represents the normalized reflectivity of the i-th band;
the trained neural network 1 comprises 2 convolution layers and 2 pooling layers; each convolution layer consists of 30 convolution kernels of size 2 with stride 1, the pooling kernels have size 2 and stride 1, and the pooling mode is average pooling; ReLU is selected as the activation function, the training error is evaluated with the mean square error (MSE), the Adam optimization algorithm is selected as the optimization function, and the initial learning rate is set to 0.001;
the trained neural network 2 is an extreme learning machine, and the LAB value output by the extreme learning machine is expressed as:
y_j = Σ_{i=1..L} β_i·g(ω_i·x_j + b_i), where y_j represents the output LAB value, x_j the input reflectivity of the j-th sample, L the number of hidden-layer nodes, ω_i the connection weight between the input layer and the i-th hidden-layer node, β_i the connection weight between the i-th hidden-layer node and the output layer, g(·) the activation function, and b_i the bias.
2. The method for detecting the color difference of the wood dyeing based on the deep learning according to claim 1, wherein the specific steps of the first step are as follows:
Gray-scale images of the color block are obtained in sequence with an imaging system in the wave bands from 400 nm to 750 nm at intervals of 50 nm, and the color block is photographed with a dedicated hyperspectral imaging workstation to obtain a hyperspectral image of the color block.
3. The method for detecting the color difference of the wood dyeing based on the deep learning according to claim 1, wherein in the step one, the gray level image is obtained through an imaging system, and the imaging system comprises an imager lens (1), a CMOS sensor (2), a steering engine (3), a filter wheel (4), a filter (5) and a standard light source (6);
the steering engine (3) is used for driving the filter wheel (4), and the filter wheel (4) is provided with a filter (5);
light emitted by the standard light source (6) is reflected by the color block, enters the lens (1) of the imager through the optical filter (5), and is imaged by the CMOS sensor (2).
4. The method for detecting the color difference of the wood dyeing based on deep learning according to claim 3, wherein the standard light source (6) consists of 2 D65 lamp tubes with a power of 20 W.
5. The method for detecting the wood dyeing color difference based on deep learning according to claim 4, characterized in that 8 round holes of the same diameter are formed in the filter wheel (4), and the optical filters (5) are arranged on the round holes.
6. The wood dyeing color difference detection method based on deep learning according to claim 5, wherein the steering engine (3) is controlled by an STM32.
7. Deep learning-based wood dyeing color difference detection system is characterized by comprising: the device comprises a calibration measurement module, an actual measurement module, a data processing module and a detection module;
the calibration measurement module is used for acquiring a standard color card, selecting a color block from the standard color card, and shooting to obtain gray level images and hyperspectral images of the color block under each wave band;
the actual measurement module is used for carrying out space correction on gray images under each wave band by using a standard white board, then carrying out response value extraction on the gray images subjected to space correction, carrying out reflectivity extraction on hyperspectral images, and carrying out normalization operation on the extracted reflectivity;
the data processing module is used for training the neural network by taking the response value as input and the normalized reflectivity as output to obtain a trained neural network 1, measuring the LAB value of the color block by using a spectrophotometer, and taking the normalized reflectivity as input and the LAB value as output to train the neural network to obtain a trained neural network 2;
The detection module is used for acquiring a wood dyeing veneer to be identified and a wood dyeing reference standard LAB value, then acquiring a response value of the wood dyeing veneer to be identified, inputting the acquired response value into the trained neural network 1 to acquire output reflectivity, inputting the reflectivity into the neural network 2 to acquire the output LAB value, and finally comparing the output LAB value with the wood dyeing reference standard LAB value to acquire chromatic aberration through a chromatic aberration formula;
the specific steps of using the standard white board to spatially correct the gray level image under each wave band are as follows:
shooting and storing a uniform standard white board with the reflectivity of 1 by using an imaging system, and carrying out normalization operation on each pixel point on a color block when shooting the color block, so as to complete space correction, wherein the normalization formula is as follows:
u′_ijk = (u_ijk − d) / (s_ijk − d), where u′_ijk represents the normalized response value of the pixel at coordinates (i, j) in the gray-scale image obtained by photographing the color block under the filter of the k-th band, u_ijk represents its response value before normalization, d represents the dark-current response of the camera, i.e. the response value the camera still has in the absence of any light, and s_ijk represents the response value at (i, j) of the gray-scale image obtained by photographing the standard whiteboard under the filter of the k-th band;
The normalization operation of the extracted reflectivity is specifically as follows:
y_i = (x_i − min(x_i)) / (max(x_i) − min(x_i)), where x_i represents the reflectivity value of the i-th band, min(x_i) represents the minimum of the reflectivity values of the i-th band over all data samples, max(x_i) represents the maximum of the reflectivity values of the i-th band over all data samples, and y_i represents the normalized reflectivity of the i-th band;
the trained neural network 1 comprises 2 convolution layers and 2 pooling layers; each convolution layer consists of 30 convolution kernels of size 2 with stride 1, the pooling kernels have size 2 and stride 1, and the pooling mode is average pooling; ReLU is selected as the activation function, the training error is evaluated with the mean square error (MSE), the Adam optimization algorithm is selected as the optimization function, and the initial learning rate is set to 0.001;
the trained neural network 2 is an extreme learning machine, and the LAB value output by the extreme learning machine is expressed as:
y_j = Σ_{i=1..L} β_i·g(ω_i·x_j + b_i), where y_j represents the output LAB value, x_j the input reflectivity of the j-th sample, L the number of hidden-layer nodes, ω_i the connection weight between the input layer and the i-th hidden-layer node, β_i the connection weight between the i-th hidden-layer node and the output layer, g(·) the activation function, and b_i the bias;
the specific steps of obtaining gray level images and hyperspectral images under each wave band of the color blocks are as follows:
Gray-scale images of the color block are obtained in sequence with an imaging system in the wave bands from 400 nm to 750 nm at intervals of 50 nm, and the color block is photographed with a dedicated hyperspectral imaging workstation to obtain a hyperspectral image of the color block;
the gray level image in the calibration measurement module is obtained through an imaging system, and the imaging system comprises an imager lens (1), a CMOS sensor (2), a steering engine (3), a filter wheel (4), a filter (5) and a standard light source (6);
the steering engine (3) is used for driving the filter wheel (4), and the filter wheel (4) is provided with a filter (5);
light emitted by the standard light source (6) is reflected by the color block, enters the lens (1) of the imager through the optical filter (5), and is imaged by the CMOS sensor (2);
the standard light source (6) consists of 2 D65 lamp tubes with a power of 20 W;
the filter wheel (4) is provided with 8 round holes with the same diameter, and the filter (5) is arranged on the round holes;
the steering engine (3) is controlled by an STM32.
8. A deep learning-based wood stain color difference detection medium characterized in that the medium stores a computer readable program for performing the steps of any one of claims 1 to 6.
CN202310528055.9A 2023-05-11 2023-05-11 Deep learning-based wood dyeing color difference detection method, system and medium Active CN116559119B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310528055.9A CN116559119B (en) 2023-05-11 2023-05-11 Deep learning-based wood dyeing color difference detection method, system and medium

Publications (2)

Publication Number Publication Date
CN116559119A CN116559119A (en) 2023-08-08
CN116559119B true CN116559119B (en) 2024-01-26

Family

ID=87503077

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310528055.9A Active CN116559119B (en) 2023-05-11 2023-05-11 Deep learning-based wood dyeing color difference detection method, system and medium

Country Status (1)

Country Link
CN (1) CN116559119B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107392896A (en) * 2017-07-14 2017-11-24 佛山市南海区广工大数控装备协同创新研究院 A kind of Wood Defects Testing method and system based on deep learning
CN108346153A (en) * 2018-03-22 2018-07-31 北京木业邦科技有限公司 The machine learning of defects in timber and restorative procedure, device, system, electronic equipment
CN111579506A (en) * 2020-04-20 2020-08-25 湖南大学 Multi-camera hyperspectral imaging method, system and medium based on deep learning
CN112950632A (en) * 2021-04-18 2021-06-11 吉林大学 Coal quality detection method based on hyperspectral imaging technology and convolutional neural network
CN114965346A (en) * 2022-06-07 2022-08-30 河北工业大学 Kiwi fruit quality detection method based on deep learning and hyperspectral imaging technology
CN115861119A (en) * 2022-12-20 2023-03-28 上海工业自动化仪表研究院有限公司 Rock slag image color cast correction method based on deep convolutional neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Wenfeng. Research on an Intelligent Color Matching Model for Dyed Chinese Fir Veneer Based on an Extreme Learning Machine. 2023, (Issue 02, 2023), full text. *

Also Published As

Publication number Publication date
CN116559119A (en) 2023-08-08

Similar Documents

Publication Publication Date Title
CN105209869B (en) High precision imaging colorimeter with spectrometer assisted specially designed pattern closed loop calibration
EP2637004B1 (en) Multispectral imaging color measurement system and method for processing imaging signals thereof
CN105865630B (en) For showing the colorimetric system of test
CN101874401B (en) One chip image sensor for measuring vitality of subject
US7936377B2 (en) Method and system for optimizing an image for improved analysis of material and illumination image features
US10514335B2 (en) Systems and methods for optical spectrometer calibration
CA2364547A1 (en) Device for the pixel-by-pixel photoelectric measurement of a planar measured object
CN109191560B (en) Monocular polarization three-dimensional reconstruction method based on scattering information correction
CA2364684A1 (en) Device for the pixel-by-pixel photoelectric measurement of a planar measured object
CN109459136A (en) A kind of method and apparatus of colour measurement
CN113848044B (en) Method for detecting brightness and chrominance consistency of display screen
JP2016164559A (en) Image color distribution inspection device and image color distribution inspection method
CN109253862A (en) A kind of colour measurement method neural network based
JP2014187558A (en) Image color distribution inspection device and image color distribution inspection method
CN116559119B (en) Deep learning-based wood dyeing color difference detection method, system and medium
TWI719610B (en) Method of spectral analysing with a color camera
CN112098415A (en) Nondestructive testing method for quality of waxberries
Pelagotti et al. Multispectral UV fluorescence analysis of painted surfaces
CN110726536B (en) Color correction method for color digital reflection microscope
Vunckx et al. Accurate video-rate multi-spectral imaging using imec snapshot sensors
CN110174351B (en) Color measuring device and method
CN110595364B (en) Space displacement and strain measuring device and method based on CCD camera
JP5895094B1 (en) Image color distribution inspection apparatus and image color distribution inspection method
Kim et al. Developing a multispectral HDR imaging module for a BRDF measurement system
CN117405230A (en) Imaging colorimeter and light measurement method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant