CN112672474A - Attention factor-based lighting control system - Google Patents

Publication number: CN112672474A
Application number: CN202011561377.6A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: unit, attention, neural network, parameters, illumination
Inventors: 邹细勇, 张维特, 黄昌清, 陈亮, 杨凯
Applicant and current assignee: China Jiliang University Shangyu Advanced Research Institute Co Ltd
Legal status: Withdrawn

Classifications

    • H05B45/30 Circuit arrangements for operating light-emitting diodes [LED]: driver circuits
    • H05B45/325 Driver circuits: pulse-control circuits using pulse-width modulation [PWM]
    • H05B45/10 Controlling the intensity of the light
    • H05B45/12 Controlling the intensity of the light using optical feedback
    • G06N3/045 Neural networks: architectures combining networks
    • Y02B20/40 Energy-efficient lighting: control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Circuit Arrangement For Electric Light Sources In General (AREA)

Abstract

The invention discloses an attention factor-based illumination control system, which comprises a lamp group with adjustable light properties, a light color sensing unit, an image acquisition unit, a heart rate acquisition unit, a dimming mapping unit, a control unit and the like. Parameters such as working-surface illuminance, color temperature and continuous learning time are used as input quantities, and physical sign parameters such as the learner's eye opening, gaze concentration and heart rate, together with the attention factor values corresponding to these sign parameters, are used as output quantities, so that a neural network mapping model from the illumination conditions to attention is established. The trained neural network is used to predict the attention parameters in the on-site light environment, the light color parameters are then optimized with a multi-objective optimization algorithm, and the optimization result is mapped to the driving current values of the lamp group by means such as lookup-table interpolation; after dimming, the lamp group realizes illumination that helps the learner improve or maintain attention in different on-site environments. The invention also makes the optimized illumination better match individual lighting preferences through online adjustment of the scoring standard.

Description

Attention factor-based lighting control system
This application is a divisional application of application No. 201910263082.1, filed on April 2, 2019, entitled "Lighting control device and method based on attention factor".
Technical Field
The invention relates to the field of intelligent lighting and learning assistance, in particular to a lighting control system based on attention factors.
Background
In people's daily study and work, environmental illumination has a direct influence on learning or working efficiency. The human eye has two functions. One is the light-sensing function: light passes through the optical system of the eye to the fundus and forms an image of the object on the retina. The other is visual signal processing: the retina converts the light energy of the object image into nerve impulses, which are transmitted to the brain via the ganglion cells, producing both visual and non-visual effects. In recent years, researchers have tracked and compared physiological changes of the human body under various illumination conditions, and the results show that the light environment not only affects physiological parameters such as blood pressure, heart rate and melatonin, but also has a marked influence on working efficiency and visual function. Light affects the human body through various parameters; for example, the illuminance level of the light environment affects people's attention, arousal level and work enthusiasm, and thereby indirectly affects work performance.
Researchers have experimentally studied how changes in illumination conditions affect learning or working efficiency. For example, Professor Yan Yonghong of Chongqing University, in the paper "Influence of the color temperature of classroom fluorescent lamps on students' learning efficiency and physiological rhythm" (Civil Construction and Environmental Engineering, Vol. 32, 2010), reports that the optimal and worst illuminance values differ for fluorescent light sources of different color temperatures, and proposes several combinations of color temperature and illuminance. In addition, the paper "Study on LED office lighting environment based on photo-biological effect" (Journal of Lighting Engineering, Vol. 25, 2014) recommends, after comprehensive task tests and subjective evaluations by multiple subjects, LED lighting with 500 lx illuminance and 4500 K correlated color temperature as an ideal lighting environment.
At present, the usual practice is to recommend a generally suitable, comfortable lighting condition in order to promote work efficiency. However, the lighting environment around a user is not fixed and is difficult to predict and enumerate. With the increasingly widespread use of dimmable LED lamps and users' pursuit of personalized lighting, the past approach of recommending only one or a limited number of light environments can no longer meet the demands of future lighting.
For the above reasons, there is a need for a system that can automatically detect and judge the factors related to learning efficiency under various illumination conditions and then automatically perform illumination optimization control according to the influence of the illumination on those factors, as well as a method for automatically optimizing the light environment to assist learning.
Disclosure of Invention
The invention aims to provide an attention factor-based lighting control system that can detect and judge a person's attention under various lighting conditions in a natural state, without specially designed tests, so as to find, within the range of the on-site lighting conditions, light color conditions that help improve attention and realize them automatically by dimming. At the same time, the detection and judgment of attention should have strong generalization ability, so that attention can also be predicted and judged under other illumination conditions that have not been tested in advance; in this way, high attention can be achieved for a specific individual both when recommending a light environment and in automatic lighting control.
Unlike other activities such as driving a car, in desktop learning the attention factors do not focus on whether the learner's eye opening becomes so small as to affect learning, but on finding the influence that changes in the environment, such as lighting conditions, have on the expression of the attention factors. In desktop learning under different lighting conditions, differences in learners' attention are reflected not only in the slowly changing eye opening, but also in physical signs such as the range of gaze points, heart rate fluctuation and gaze movement rate.
In order to perform learning or automatic optimization of the lighting of a working environment, the relationship between lighting conditions and attention is modeled first. Therefore, the invention firstly collects the physical sign data of the learner through the sensor to obtain parameters such as eye opening, sight concentration, heart rate, sight movement rate and the like, and uses the parameters as the attention factors so as to evaluate the attention of the learner in the luminous environment.
Assessing attention based on the sensed physical sign data raises several problems. First, how should the sampled sign data be quantified so that different levels of attention can be distinguished? Second, how should successive data sequences be associated with each other, and how can the degree of attention be further judged from their course of change?
The invention models a complex nonlinear mapping relation between illumination conditions and attention of people through a neural network, wherein the illumination conditions comprise illumination intensity and color temperature of a working surface and also comprise xyz color coordinate values of colors, and the attention is represented by parameters such as an eye opening value, a sight concentration value, a sight movement rate, a heart rate and corresponding attention factor values. Considering that the attention of the person is also influenced by the accumulated work or learning time, the neural network takes the above-mentioned several light color parameters and the continuous learning time as input quantities, and takes the 6 attention parameters as output quantities. The neural network adopts RBF network, after collecting enough samples, the number of nodes of hidden layer of RBF neural network and their respective central vectors are determined by K-means clustering algorithm, and the weight from hidden layer to output layer is corrected by gradient descent method, so that the error between the actual value of space output quantity of training sample and the network output value is minimum.
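The training procedure described above, K-means clustering to place the hidden-layer centers followed by gradient descent on the hidden-to-output weights, can be sketched as follows. This is a minimal illustration: the number of hidden nodes is passed in as a parameter (the patent determines it through the clustering step), and the width heuristic, learning rate and array shapes are assumptions rather than values given in the text.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_rbf(X, Y, n_hidden=20, lr=0.01, epochs=500):
    """Sketch of RBF training: K-means for the centers, gradient descent for the output weights.

    X: (n_samples, 6) light color parameters plus continuous learning time
    Y: (n_samples, 6) attention parameters (sign values and attention factor values)
    """
    km = KMeans(n_clusters=n_hidden, n_init=10).fit(X)
    centers = km.cluster_centers_                       # hidden-node centers C_i
    # assumed heuristic: one common width derived from the largest center-to-center distance
    d_max = np.max(np.linalg.norm(centers[:, None] - centers[None, :], axis=-1))
    sigmas = np.full(n_hidden, d_max / np.sqrt(2 * n_hidden))

    def hidden(Z):
        dist = np.linalg.norm(Z[:, None, :] - centers[None, :, :], axis=-1)
        return np.exp(-dist**2 / (2 * sigmas**2))       # Gaussian hidden-layer outputs

    H = hidden(X)
    W = np.zeros((n_hidden, Y.shape[1]))                # hidden-to-output weights
    for _ in range(epochs):                             # gradient descent on the squared error
        err = H @ W - Y
        W -= lr * H.T @ err / len(X)
    return centers, sigmas, W
```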
After the mapping from the light color parameters to the attention parameters has been established, a light environment that can improve the attention of a specific individual can be searched for by an optimization algorithm, and the on-site light environment is then configured according to the optimization result by dimming. For this purpose, a light environment evaluation function with multiple evaluation factors is established based on the attention parameters; the evaluation function F scores higher when the learner is concentrating and lower otherwise. Since the attention evaluation involves multiple factors, this is a multi-objective optimization problem, whose optimal solutions are Pareto solutions. The problem is solved with the multi-objective genetic optimization algorithm MOGA.
The optimization yields light color parameters with a high attention evaluation. The dimming mapping unit then maps the optimized light color parameters to the driving current values of the driving current channels of the lamp group and outputs these current values to the drivers in the dimmable lamp group, so that an illumination environment that helps the learner maintain or improve attention is obtained.
The dimming mapping unit's conversion from light color parameters to the lamp-group driving currents may be based on various means: first, a lookup table from the light color space to the driving current space generated in advance; second, a conversion polynomial from the light color space to the driving current space obtained by least-squares regression; or third, a BP neural network, which takes the 5 light color parameters, namely the working-surface illuminance, the color temperature and the xyz color coordinate values of the color, as input quantities and the current values of all driving current channels of the lamp set as output quantities.
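As an illustration of the second option, a conversion polynomial fitted by least-squares regression, the sketch below regresses each driving-current channel on polynomial features of the 5 light color parameters. The polynomial degree and the data layout are assumptions made for the example.

```python
import numpy as np

def fit_polynomial_mapping(P, I, degree=2):
    """P: (n, 5) measured light color parameters; I: (n, w) corresponding channel currents.
    Returns a function that maps a light color vector to the w channel currents."""
    def features(p):
        p = np.atleast_2d(p)
        cols = [np.ones(len(p))]
        cols += [p[:, j] for j in range(p.shape[1])]          # linear terms
        if degree >= 2:                                        # quadratic and cross terms
            cols += [p[:, j] * p[:, k]
                     for j in range(p.shape[1]) for k in range(j, p.shape[1])]
        return np.column_stack(cols)

    coeff, *_ = np.linalg.lstsq(features(P), I, rcond=None)   # least-squares regression
    return lambda p: features(p) @ coeff
```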
The technical solution of the present invention is to provide an attention-based lighting control system having the following structure, which includes: the system comprises a user interface unit for inputting parameters and initiating operation, a lamp set with adjustable light property in at least one of brightness, color temperature, color and illumination angle, a light color sensing unit for acquiring illumination, color temperature, color and the like of illumination of a working surface, an image acquisition unit for acquiring images of a face and a working surface area of a learner, a heart rate acquisition unit for acquiring the heart rate of the learner, a user identity identification unit for identifying the identity of the learner, and a control unit respectively connected with the user interface unit, the lamp set, the light color sensing unit, the image acquisition unit, the heart rate acquisition unit, the user identity identification unit and a dimming mapping unit, wherein the control unit is configured to:
the processing module contained in the device processes the signals collected by the light color sensing unit to obtain 2 light color parameters including the illumination and the color temperature of the working surface, processes the signals collected by the image collecting unit to obtain the opening value of the eyes, the concentration value of the sight line and the change rate of the movement speed of the sight line of the learner, and obtains the heart rate and the heart rate change rate of the learner by reading the signals of the heart rate collecting unit,
3 parameters, namely the working-surface illuminance, the color temperature and the continuous learning time, are used as input quantities, and 5 attention parameters, namely the learner's eye opening value, gaze concentration value, gaze movement speed change rate, heart rate and heart rate change rate, are used as output quantities, and an artificial neural network is established, which adopts an RBF neural network,
the dimming processing part sends dimming signals to the lamp group through the output module or the user interface unit, acquires a training sample set of the RBF neural network based on the photochromic sensing unit, the image acquisition unit and the heart rate acquisition unit for the changed luminous environment, trains the RBF neural network by using the sample set,
in the field environment, the lighting optimization processing part establishes a luminous environment evaluation function based on 5 attention parameters, predicts the attention parameter values of different users under different light color parameter conditions by using the trained RBF neural network corresponding to the users respectively, optimizes the illumination and the color temperature of a working surface in a spatial range in which the light color parameters of the field lamp group can be valued by a multi-objective optimization algorithm, and transmits the optimized result to the dimming mapping unit;
and the dimming mapping unit maps the optimization result into a driving current value of each driving current channel of the lamp group and transmits the current value to a driver in the lamp group.
Preferably, the lamp set comprises two LED strings of high color temperature and low color temperature, each LED string corresponding to one driving current channel; the dimming mapping unit comprises a lookup table from the light color space consisting of working-surface illuminance and color temperature to the two-channel driving current space, and the optimization result (E0, K0) is interpolated in the lookup table to obtain the two-channel driving current values.
Preferably, the dimming mapping unit comprises a lookup table from the light color space consisting of working-surface illuminance and color temperature to the two-channel driving current space, and the optimization result (E0, K0) in the light color space is interpolated in the lookup table to obtain the two-channel driving current values;
first, the four points surrounding P(E0, K0) in the light color space are found: A(E1, K1), B(E2, K1), C(E1, K2) and D(E2, K2), where E1 ≤ E0 ≤ E2 and K1 ≤ K0 ≤ K2;
the two-channel current values (i01, i02) to which the optimization result is mapped are obtained by interpolation using the distances as weights,
[equation images: distance-weighted interpolation formulas for i01 and i02]
where d1 denotes the shortest of the distances from P to the four points, d2 the second shortest, and so on, and dT is the sum of all the distances; i11 and i21 are the channel-1 and channel-2 current values of the closest point; different weights are assigned to the four points closest to the point P being looked up according to their distances, with the closest point weighted most heavily.
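A sketch of the lookup step: given the optimized working point (E0, K0), the four surrounding grid points are found and their two-channel currents are blended with distance-based weights, the closest point weighted most heavily. The exact weighting formula is in the unreproduced equation images, so the inverse-distance weights and the flat table layout used here are assumptions.

```python
import numpy as np

def interpolate_currents(E0, K0, table):
    """table: list of ((E, K), (i1, i2)) entries, an assumed flat layout of the lookup table."""
    pts = np.array([p for p, _ in table], dtype=float)
    cur = np.array([c for _, c in table], dtype=float)
    d = np.linalg.norm(pts - np.array([E0, K0]), axis=1)
    idx = np.argsort(d)[:4]                  # four closest grid points A, B, C, D
    w = 1.0 / np.maximum(d[idx], 1e-9)       # assumed inverse-distance weights, closest heaviest
    w /= w.sum()
    i01, i02 = (w[:, None] * cur[idx]).sum(axis=0)
    return i01, i02
```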
Preferably, the lamp set has w driving current channels, and the light color parameters further include the xyz color coordinate values of the working-surface illumination color. The artificial neural network is an RBF neural network that uses 6 parameters as input quantities: the 5 light color parameters (the working-surface illuminance, the color temperature and the xyz color coordinate values of the color) plus the continuous learning time. The dimming mapping unit is replaced by a BP neural network established in the control unit, which takes the 5 light color parameters as input quantities and the current values of the w driving current channels as output quantities. When sending dimming signals to the lamp set through the output module or the user interface unit, the dimming processing portion acquires and processes the light color signals of the changed light environment and records the driving current values of the w driving current channels corresponding to the dimming, so as to form the training sample set of the BP neural network,
in the field environment, the trained BP neural network maps the optimization results to drive current values of each drive current channel of the lamp group and transmits the current values to the drivers in the lamp group.
Preferably, the xyz color coordinate value of the color is replaced by an RGB three-component value of the color, the lamp group is an LED lamp group, the driving current value of each LED lamp in the lamp group is adjusted by a driver, and the dimming signal is a PWM wave duty ratio value of the driving current of the LED lamp;
the image acquisition unit adopts two mesh cameras, processing module includes image processing portion and photochromic processing portion, image processing portion includes eye opening detector and sight detector again, photochromic processing portion includes illuminance detector, colour temperature detector and colour detector again.
Preferably, the model of the BP neural network is:
the output of the jth node of the hidden layer is
hj = f( Σi wij·xi − aj ),
and the output of the pth node of the output layer is
yp = f( Σj vjp·hj − bp ),
where the function f() is taken as the sigmoid function, wij and vjp are respectively the connection weights from the input layer to the hidden layer and from the hidden layer to the output layer, aj and bp are respectively the thresholds of the hidden layer and the output layer, and k is the number of hidden-layer nodes; the network is trained by the gradient descent method.
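A minimal sketch of the forward pass described by the two formulas above, assuming 5 inputs and w output channels; the sigmoid, the weights and the thresholds follow the notation in the text.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bp_forward(x, W_ih, a, V_ho, b):
    """x: (5,) light color parameters; W_ih: (5, k); a: (k,) hidden thresholds;
    V_ho: (k, w); b: (w,) output thresholds; returns the (w,) channel outputs."""
    h = sigmoid(x @ W_ih - a)     # hidden layer: h_j = f(sum_i w_ij x_i - a_j)
    y = sigmoid(h @ V_ho - b)     # output layer: y_p = f(sum_j v_jp h_j - b_p)
    return y
```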
Preferably, the model of the RBF neural network is:
the output of the ith node of the hidden layer is
hi = exp( −‖X − Ci‖² / (2σi²) ),
and the output of the jth node of the output layer is
yj = Σ(i=1..p) wij·hi,
where the dimension of the input vector X is 6, the number of hidden-layer H nodes is p, the dimension of the output vector Y is 5, Ci is the center of the Gaussian function of the ith hidden node, σi is the width of that Gaussian center, ‖X − Ci‖ is the Euclidean distance between the vectors X and Ci, and wij is the weight from the ith hidden node to the jth output node.
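And the corresponding forward pass of the RBF model above, with the Gaussian hidden layer and linear output layer; the array shapes follow the stated dimensions (6 inputs, p hidden nodes, 5 outputs).

```python
import numpy as np

def rbf_forward(x, centers, sigmas, W):
    """x: (6,) inputs; centers: (p, 6); sigmas: (p,); W: (p, 5); returns (5,) attention parameters."""
    dist = np.linalg.norm(x - centers, axis=1)       # ||X - C_i||
    h = np.exp(-dist**2 / (2.0 * sigmas**2))         # Gaussian hidden outputs
    return h @ W                                     # y_j = sum_i w_ij * h_i
```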
Preferably, the camera adopted by the image acquisition unit is installed on a support facing the person in the working scene, the input unit comprises a key for indicating the current learning difficulty, and a learning difficulty coefficient is added to the neural network as an input parameter;
the input unit also comprises a sampling canceling key, and the control unit suspends data sampling and sample recording after detecting that the key is pressed;
in the input unit, a sliding input device with a cursor is further provided, and the control unit is further configured to:
in the multi-target optimization algorithm processing process, after a total evaluation value F is calculated according to a luminous environment evaluation function, the evaluation value is adjusted according to the position of a cursor after a learner operates a sliding input device:
F' = F·(1 + η·Δ),
[equation images: definitions of Δ and En]
where E is the illuminance to be evaluated for the current individual; E0 is the current illuminance and corresponds to the middle position of the slide input device, while the left and right end positions of the cursor correspond to 0.9 and 1.1 times E0 respectively; En is the illuminance corresponding to the cursor position; δ is a set threshold for adjusting the score according to the degree of deviation; η is an adjustment coefficient; and F and F' are the evaluation values before and after adjustment respectively.
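The online adjustment of the evaluation score can be sketched as below. The exact definitions of Δ and En are given only in the unreproduced equation images, so the mapping from cursor position to Δ used here (a relative illuminance offset from the middle of the slider, ignored when smaller than the threshold δ) is an assumption.

```python
def adjust_score(F, cursor, E0, eta=0.1, delta=0.02):
    """cursor in [-1, 1]: -1 is the left end (0.9*E0), 0 the middle (E0), +1 the right end (1.1*E0)."""
    En = E0 * (1.0 + 0.1 * cursor)       # illuminance indicated by the cursor position
    Delta = (En - E0) / E0               # assumed definition of the relative offset
    if abs(Delta) < delta:               # small offsets treated as no expressed preference
        Delta = 0.0
    return F * (1.0 + eta * Delta)       # F' = F * (1 + eta * Delta)
```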
Meanwhile, the invention also provides another lighting control system based on attention factors, which is characterized by comprising the following components: an image acquisition unit, a heart rate acquisition unit, a control unit, an input unit, an output unit, a storage unit and a dimmable lamp set,
the image acquisition unit acquires images of the face and the working face area of a learner, the heart rate acquisition unit acquires the heart rate of the learner, and the output unit is used for displaying signals and outputting attention factor values and dimming signals;
the control unit includes a processing module, an iterative learning module, a neural network module, and a connection switcher, and is configured to:
the processing module processes the signals acquired by the image acquisition unit to acquire an eye opening value, a sight concentration value and a sight movement rate of the learner, acquires the heart rate of the learner by reading the signals of the heart rate acquisition unit,
the neural network module takes the w lighting parameters, namely the driving currents of the u LED strings and the v irradiation angles, plus the continuous learning time as input quantities, and takes 6 attention factor parameters, namely the learner's 3 physical sign parameters of eye opening, gaze concentration and heart rate together with their corresponding attention factor values, as output quantities, and establishes an RBF neural network,
the light modulation processing part in the processing module sends out light modulation signals to the lamp group through the output unit or the user interface unit, obtains a training sample set of the RBF neural network for the changed light environment based on the image acquisition unit, the heart rate acquisition unit and the light modulation signals, trains the RBF neural network by using the sample set,
in the field environment, an illumination optimization processing part in a processing module establishes a luminous environment evaluation function based on 6 attention factor parameters, predicts the attention factor parameters of different users under different illumination parameter conditions by a trained RBF neural network corresponding to the users respectively, optimizes the driving current and the illumination angle of the LED string in a spatial range in which the illumination parameters of the field lamp group can be taken by a multi-objective optimization algorithm,
and outputting the drive current and the irradiation angle of the LED string obtained by optimization through a communication interface module of the output unit.
Preferably, at least one of the light properties of the lamp set, such as brightness, color temperature, color and illumination angle, is adjustable; the dimming signal is the PWM duty-cycle value of the LED driving current; the image acquisition unit adopts a binocular camera; the processing module includes an image processing portion and a light color processing portion; the image processing portion in turn includes an eye opening detector and a sight line detector, and the light color processing portion in turn includes an illuminance detector, a color temperature detector and a color detector.
Preferably, the attention factor values of the 3 physical sign parameters used to characterize the attention factor are obtained by the following processing (a combined code sketch of the three steps is given after this enumeration):
firstly, for the eye opening sequence de, window-average filtering is performed with the following formula to obtain the eye opening e at the current moment,
e = (1/L)·Σ(k=0..L−1) de(t−k),
then a down-sampled sequence Xe of the eye opening is obtained by moving the window at intervals,
Xe = {e(0), e(Ts), e(2Ts), ...},
next, the sequence Xe is fitted with the function y = a·e^(−b·x), and the opening change time tu is obtained from the fitted function as the time taken to fall from E1 to E2,
tu = ln(E1/E2) / b,
where L is the window width, Ts is the down-sampling interval, a and b are fitting coefficients, and E1 and E2 are two thresholds of the eye opening; for the normalized eye opening value sequence, the values of E1 and E2 lie between 0 and 1;
the first and second characteristic values of the eye opening are calculated from the eye opening e and the opening change time tu,
[equation images: piecewise definitions of ke1 and ke2]
where be and ce are the lower and upper limits of the interval that, according to statistics, covers a set proportion of the eye opening values in the normal state, and ae and de are two further preset lower and upper limits; btu is the upper limit of the opening change time covering a set proportion within the current continuous learning time range in the normal state, and atu is a set lower limit;
the attention factor value of the eye opening is then calculated as ke = ke1·ke2;
secondly, the intersection of the learner's line of sight with the working surface is detected; if the intersection falls outside the preset working surface block, the shortest distance from the intersection to the block is calculated and the length of time during which the gaze point continuously stays outside the preset range is recorded; for the distance sequence dd, the current gaze offset distance d is obtained by window-average filtering, and the maximum duration td for which the gaze point continuously exceeds the preset range within the corresponding window is calculated at the same time;
the first and second characteristic values of the gaze concentration are calculated from the distance d and the duration td,
[equation images: definitions of kd1 and kd2]
where a and b are fitting coefficients, Td is the maximum duration for which the gaze point continuously exceeds the preset range, covering a set proportion within the current continuous learning time range in the normal state, and σ is a preset width value;
the attention factor value of the gaze concentration is then calculated as kd = kd1·kd2;
thirdly, an upper and lower fluctuation interval is set for the heart rate data sequence according to the expected heart rate value in the normal state; within a preset time span centered on the current moment, the number of times N that the data fluctuation crosses the boundary of the interval is counted, together with the proportion Rb of heart rate samples falling within the interval,
N = N+ + N−,
where N+ is the number of crossings out of the interval and N− is the number of crossings back into the interval;
the first and second characteristic values of the heart rate are calculated from the count N and the proportion Rb respectively,
[equation images: definitions of kb1 and kb2]
where TN is the maximum number of times the heart rate exceeds the fluctuation interval, covering a set proportion within the current continuous learning time range in the normal state, σN is a preset width value, and aRb and bRb are two proportion thresholds set according to statistics;
the attention factor value of the heart rate is then calculated as kb = kb1·kb2.
Preferably, based on the 3 attention factor values and the 3 physical sign parameters of eye opening, gaze concentration and heart rate, the light environment evaluation function is
F = Σ(i=1..3) wi·fi,
where fi are the attention parameter evaluation values of the eye opening, the gaze concentration and the heart rate respectively, and wi are their corresponding weights; each fi is defined as follows:
[equation images: definitions of f1 and f2]
f3 = fp1·fp2·kb,
[equation images: definitions of fp1 and fp2]
where e is the eye opening value at the current moment, eT is the eye opening threshold, and ke is the attention factor value of the eye opening; dp is the gaze offset distance, dS is the corresponding distance threshold, and kd is the attention factor value of the gaze concentration; p is the number of heartbeats in the current unit time, i.e. the heart rate, pT is the corresponding threshold, ap is the change in heart rate per unit time, a1 and a2 are the corresponding thresholds of the heart rate change, i.e. of the heart rate acceleration, a3 is the set width of the change rate interval, and kb is the attention factor value of the heart rate;
the multi-objective optimization algorithm adopts evolution processing, for each individual in an evolved population, corresponding illuminance and color temperature of the individual are mapped into attention factor parameter values through an RBF neural network, a total evaluation value F of the individual is calculated based on the luminous environment evaluation function, then inheritance, intersection and variation operations are carried out according to the total evaluation value F, the evolved population is updated, then the population is repeatedly evolved until the optimization is finished, and an optimization searching result is output.
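A sketch of how the evaluation and the evolution loop fit together: each population individual encodes an (illuminance, color temperature) pair, the trained RBF network predicts its attention parameters, the total score F = Σ wi·fi is computed, and selection, crossover and mutation update the population. The patent only names MOGA; the simple real-coded operators below are illustrative assumptions.

```python
import numpy as np

def evaluate(individual, learn_time, rbf_predict, weights, f_terms):
    """individual = (E, K); rbf_predict maps (E, K, t) to attention parameters;
    f_terms turns them into the evaluation values (f1, f2, f3)."""
    attn = rbf_predict(np.array([individual[0], individual[1], learn_time]))
    f = f_terms(attn)                       # f1, f2, f3 for eye opening, gaze, heart rate
    return float(np.dot(weights, f))        # F = sum_i w_i * f_i

def evolve_step(pop, scores, bounds, mut_rate=0.1, rng=np.random):
    """One assumed evolution step: tournament selection, blend crossover, Gaussian mutation."""
    n = len(pop)
    new = []
    for _ in range(n):
        a = rng.randint(n, size=2)
        b = rng.randint(n, size=2)
        p1 = pop[a[0]] if scores[a[0]] > scores[a[1]] else pop[a[1]]
        p2 = pop[b[0]] if scores[b[0]] > scores[b[1]] else pop[b[1]]
        alpha = rng.rand()
        child = alpha * p1 + (1 - alpha) * p2                        # crossover
        if rng.rand() < mut_rate:                                    # mutation
            child = child + rng.normal(0.0, 0.05) * (bounds[:, 1] - bounds[:, 0])
        new.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    return np.array(new)
```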
Compared with the prior art, the scheme of the invention has the following advantages: the light condition is represented by factors such as the illumination intensity and the color temperature of a working surface, the attention is represented by physical parameters such as an eye opening value, a sight concentration value, a sight moving rate and a heart rate, the attention of a learner is objectively distinguished by multi-factor quantization, and each parameter is automatically extracted by a control unit after signal acquisition is carried out by a photochromic sensing unit, an image acquisition unit or a heart rate acquisition unit; the nonlinear network is adopted to construct and model the mapping relation between the illumination condition of the environment and the attention of the personnel, and the trained network can predict the attention of the personnel in different light environments, so that a basis is provided for the recommendation and evaluation of the high-attention light environment in various work sites. And searching out the light color parameters with high attention evaluation values based on a multi-objective optimization algorithm, and mapping the optimized light color parameters into the driving current of the lamp group based on a lookup table or a conversion polynomial or a nonlinear mapping network, thereby realizing the illumination condition which is beneficial to improving or maintaining the attention of learners. Meanwhile, the grading standard is adjusted on line through the user interface unit, so that the optimized illumination is more in line with the individual preference of the learner.
It should be understood that all combinations of the foregoing concepts, as well as additional concepts discussed in greater detail below (provided such concepts are not mutually inconsistent), are contemplated as being part of the inventive subject matter disclosed herein. In particular, all combinations of claimed subject matter appearing at the end of this disclosure are contemplated as being part of the inventive subject matter disclosed herein.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a block diagram of the components of a lighting control system employing the method of the present invention;
FIG. 3 is a schematic view of a lighting environment according to the present invention;
FIG. 4 is a view showing a constitution of a control unit; FIG. 5 is a block diagram of the processing module;
FIG. 6 is a schematic diagram of a RBF neural network structure;
FIG. 7 is a partial schematic view of an embodiment of the present invention;
FIG. 8 is a partial structural view of another embodiment of the present invention;
FIG. 9 is a schematic view of the slide input device;
FIG. 10 is a schematic view of a work area setting;
FIG. 11a is a structural diagram of a working area setting unit; FIGS. 11b and 11c are structural diagrams of the adjustment shaft; FIG. 11d is a view showing the structure of the lamp housing;
FIG. 12 is a schematic view of the intersection of the line of sight and the work surface;
FIG. 13 is a schematic diagram of a physical sign data sequence;
FIGS. 14a and 14b are schematic diagrams of first and second volume characteristic evaluation functions of eye opening, respectively; fig. 14c and 14d are schematic diagrams of the first and second feature value evaluation functions of the gaze concentration, respectively;
FIG. 15 is a view point distribution diagram;
fig. 16 is a schematic view of a driving structure of the lamp set.
Wherein: 1000 an attention factor based lighting control system, 100 an attention factor based lighting control device, 110 a light color sensing unit, 120 an image acquisition unit, 130 a heart rate acquisition unit, 140 a user interface unit, 150 a control unit, 160 a user identification unit, 170 a dimming mapping unit, 180 a dimmable light group,
151 processing module, 152 RBF neural network, 153 first connection switcher, 154 first iterative learning module, 155 output module, 156 storage module, 157 BP neural network, 158 second connection switcher, 159 second iterative learning module, 181 driver, 182 LED lamp,
1511 image processing unit, 1512 light color processing unit, 1513 light modulation processing unit, 1514 illumination optimization processing unit, 1551 display screen, 1552 communication interface, 15111 eye opening degree detector, 15112 sight line detector, 15113 mouth detector, 15121 illuminance detector, 15122 color temperature detector, 15123 color detector,
101 a bottom plate, 102 a bracket, 103 a binocular camera, 104 an infrared auxiliary light source, 105 a display bar, 106 a light color sensing block, 107 a key block, 108 a calibration block, 109 a working area setting unit,
1071 a slide input device, 1072 a cursor,
1091 pivot, 1092 adjusting plate, 1093 first adjusting shaft, 1094 second adjusting shaft, 1095 lamp groove, 1096 motor, 1097 driving rod, 1098 connecting rod,
951 LED lamp bead, 952 glass cover, 953 light-collecting sheet.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings, but the present invention is not limited to only these embodiments. The invention is intended to cover alternatives, modifications, equivalents and alternatives which may be included within the spirit and scope of the invention.
In the following description of the preferred embodiments of the present invention, specific details are set forth in order to provide a thorough understanding of the present invention, and it will be apparent to those skilled in the art that the present invention may be practiced without these specific details.
The invention is described in more detail in the following paragraphs by way of example with reference to the accompanying drawings. It should be noted that the drawings are in simplified form and are not to precise scale, which is only used for convenience and clarity to assist in describing the embodiments of the present invention.
Example 1
As shown in fig. 1, the attention-based lighting control method of the present invention includes the following steps:
S1, initialization: an RBF neural network is established in the control unit, which takes 3 parameters, namely the working-surface illuminance, the color temperature and the continuous learning time, as input quantities, and takes 6 parameters characterizing the attention factor, namely the learner's eye opening, gaze concentration and heart rate together with their respective attention factor values, as output quantities, wherein the gaze concentration is the line-of-sight offset distance;
s2, sending a dimming signal to the dimmable LED lamp bank through an output module of the control unit, carrying out signal acquisition on the changed luminous environment based on the light color sensing unit, the image acquisition unit and the heart rate acquisition unit, and processing the acquired signal by a processing module to obtain a training sample of the RBF neural network;
s3, repeating the step S2 for multiple times, obtaining a training sample set of the RBF neural network, and training the RBF neural network by using the sample set;
s4, determining strategies for encoding and decoding light color parameters such as working surface illumination, color temperature and the like in the multi-objective optimization algorithm, determining respective value intervals of the strategies, and initializing an evolutionary population;
s5, aiming at each individual in the evolution group in the search space, predicting the attention parameter of the individual by using the trained RBF neural network based on the photochromic parameter and the current continuous learning time to obtain predicted values of 6 attention factor parameters including the eye opening, the sight concentration, the heart rate and the respective attention factor values thereof;
s6, calculating the attention evaluation value according to the established luminous environment evaluation function based on the predicted value, and performing cross inheritance and mutation operation according to the evaluation value to update an evolved population;
s7, turning to the step S5, repeating iteration until the optimization is finished, and outputting a Pareto optimization solution;
and S8, demapping the optimization into the driving current value of each driving current channel of the lamp group, and transmitting the current value to a driver in the lamp group for dimming.
The processing and application of the present invention are described in detail below.
As shown in fig. 2, the method of the present invention is applied to an attention-factor-based lighting control system 1000, where the attention-factor-based lighting control system 1000 includes an attention-factor-based lighting control device 100 and a dimmable light set 180, where the attention-factor-based lighting control device 100 further includes a light color sensing unit 110, an image capturing unit 120, a heart rate capturing unit 130, a user interface unit 140, a control unit 150, a user identification unit 160, and a dimming mapping unit 170.
The heart rate acquisition unit 130 acquires the heart rate of the learner, and the heart rate can be acquired through a wristwatch or a bracelet and transmitted to the control unit 150 through the communication interface. The light color sensing unit 110 collects the illumination, color temperature and color of the illumination of the working surface, the illumination can be detected by an independent module, and the color temperature and color can be obtained by the same RGB or xyz color sensing module. Preferably, the color sensing module may be a TCS3430 sensor, and the filter of TCS3430 has five channels, including X, Y, Z channel and two infrared channels, which can be used to infer the light source type. The TCS3430 sensor collects the light color signal of the working surface in real time, and the xyz color coordinate value and the color temperature of the color are respectively obtained after signal processing and conversion by the processing module in the control unit.
The lamp set 180 is a dimmable lamp set, and at least one of the light properties such as brightness, color temperature, color and illumination angle thereof is adjustable. The user identification unit 160 identifies the learner to perform data collection, model construction, and lighting control for the unique individual. Preferably, the user identification unit 160 may adopt one or more of the following identification methods: fingerprint identification, iris identification, voice identification and face identification.
The dimming mapping unit 170 maps light color parameters such as illumination, color temperature and the like of the working surface, which are obtained by the control unit 150 according to the attention evaluation value optimization, to the lamp group driving current based on the lookup table interpolation or the conversion polynomial or the nonlinear mapping network, and transmits each current value to the driver 181 in the lamp group 180, so that the light emission of the LED lamps 182 in the lamp group is changed, and a high-attention light environment is obtained.
As shown in fig. 2 to 4, the lighting system control unit or the dimming mapping unit sends a dimming command, where the dimming command includes driving current values of n LED strings in the lamp set. The adjustment of the lighting color of the lamp group on the working surface is realized through the dimming instruction, and the light color of the working surface is collected by the light color sensing unit and then processed by the control unit. The person will have different attention expressions under different light color conditions, and the control unit processes, extracts and evaluates the physical signs of the face, the heartbeat and the like of the human body after the physical signs are collected by the image collecting unit and the heart rate collecting unit, and forms an attention parameter set.
As shown in FIG. 3, attention performance of a particular individual under various light color conditions is collected, and a first mapping is established between light color parameters and attention parameters. An attention evaluation function is established for the attention parameters of the person, and the evaluation indexes comprise a plurality of evaluation indexes, so that the optimization of the chromatic parameters can be carried out based on a multi-objective optimization algorithm, such as a multi-objective genetic algorithm (MOGA). In the optimization process, aiming at each photochromic parameter combination in the search space, the attention parameter corresponding to the parameter combination is predicted based on the generalization of the first mapping, so that the attention evaluation corresponding to the combination can be calculated according to the predicted attention parameter.
The result of the optimization is a combination of the light color parameters in the search space that needs to be converted into the actual drive currents for the lamp set, for which a second mapping of the light color parameters to the lamp set drive currents is established. And based on the second mapping, converting the optimization result into a driving current value of the lamp group, transmitting a dimming command to a driver of the lamp group for execution, outputting corresponding current to each channel, and then adjusting light emitted by the LED strings to realize luminous environment illumination corresponding to the attention optimization value.
As shown in fig. 2 and 4, the control unit 150 includes a processing module 151, a first iterative learning module 154, an RBF neural network module 152, a first connection switcher 153, an output module 155, and a storage module 156. The processing module 151 further includes an image processing unit 1511, a light color processing unit 1512, a dimming processing unit 1513, and an illumination optimization processing unit 1514. As shown in fig. 4 and 5, the light color processing unit 1512 further includes an illuminance detector 15121, a color temperature detector 15122, and a color detector 15123, which process the signals collected by the light color sensing unit to obtain light color parameters, such as illuminance, color temperature, and xyz stimulus values of the color, which represent the illumination conditions of the working surface. The image capturing unit 120 may employ a binocular camera, and the image processing unit 1511 processes the signal captured by the image capturing unit 120 to obtain the characteristics of the face of the learner.
Attention state detection can be based on technologies such as machine vision and image processing; such methods are already used in road driving, where many studies monitor the driver's state effectively by analyzing facial features. For desktop learning, attention detection and analysis can likewise be performed by image processing. In contrast to a state of full engagement and concentrated attention, people's physiological parameters change to different degrees when they are tired or distracted, and these changes can serve as the basis for monitoring the attention state. When the learner is inattentive, the eyelids droop and the eye opening decreases noticeably, with even intermittent closing and blinking; in the sub-fatigued state that precedes obvious drowsiness, reading speed decreases and gaze movement slows; occasionally the person may also yawn. The invention therefore detects the learner's attention state on this basis.
Specifically, as shown in fig. 4 and 5, the image processing section 1511 includes an eye opening detector 15111, a gaze detector 15112, and a mouth shape detector 15113, which respectively detect the opening degree of the learner's eyes, the gaze direction, and the mouth opening characteristics, and further obtains the eye opening value, the gaze concentration value, and the gaze movement rate of the learner in association with the calibration and conversion processes. The sight concentration degree is the sight offset distance, namely the shortest distance from the intersection point of the sight and the working surface to the preset working surface block.
The gaze estimation method based on image processing can be chosen from the iris-sclera boundary method, the pupil-eye corner positioning method and the pupil-cornea reflection method. The first two estimate the gaze direction using infrared signal differences and the line connecting the eye corner and the pupil. Preferably, the invention adopts the third method, in which an infrared light source illuminates the cornea of the human eye; when light strikes the eye, a reflection is produced on the outer corneal surface that appears as a bright spot in the eye, called the Purkinje spot. When the eyeball rotates, the position of the Purkinje spot remains fixed, so the gaze direction can be estimated from the relative position of the pupil and the Purkinje spot. In practice, the pupil-cornea reflection method has two implementations: two-dimensional gaze estimation and three-dimensional gaze estimation. The two-dimensional method uses a calibrated gaze mapping function whose input parameters are the two-dimensional eye features and whose output is the gaze direction or the fixation point on a screen. The three-dimensional method is based on binocular vision: the spatial three-dimensional information of the eyes is obtained through a three-dimensional reconstruction process, which gives higher detection accuracy and a wider range.
Based on a learning scene image acquired by a binocular camera, firstly, smoothing and threshold segmentation are carried out, noise is removed, the face and eye regions of a learner are positioned, and characteristic information such as the height-width ratio of human eyes, the pupils of the eyes, the Purkinje points and the like is extracted; secondly, performing stereo matching on the extracted feature points, and performing three-dimensional reconstruction on the pupils of the eyes and the Purkinje points based on a geometric constraint establishing process to obtain three-dimensional world coordinates of the feature points; and finally, judging the three-dimensional sight direction of the learner through a three-dimensional coordinate vector formed by the pupil and the Purkinje point. Based on the human eye height-width ratio and the sight line direction tracking which are periodically obtained, the eye opening value, the sight line space direction and the sight line moving speed can be calculated.
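For the step from reconstructed feature points to attention parameters, a small sketch under assumed inputs: given 3D coordinates of the upper and lower eyelid points and the two eye corners (for the height-width ratio) and of the pupil center and Purkinje spot (for a simplified gaze estimate), the per-frame eye opening value and gaze vector follow directly, and the gaze movement rate is the frame-to-frame angular change. The landmark extraction itself (segmentation, stereo matching, reconstruction) is not shown, and the pupil-Purkinje offset used as the gaze direction is a simplification of the full eyeball-geometry model.

```python
import numpy as np

def eye_opening(upper, lower, inner, outer):
    """Height-width ratio of the eye from four assumed 3D landmark points."""
    return np.linalg.norm(upper - lower) / np.linalg.norm(inner - outer)

def gaze_direction(pupil, purkinje):
    """Simplified unit gaze vector from the pupil center and the Purkinje spot."""
    v = pupil - purkinje
    return v / np.linalg.norm(v)

def gaze_movement_rate(v_prev, v_curr, dt):
    """Angular change of the gaze direction between two frames, in rad/s."""
    cos_ang = np.clip(np.dot(v_prev, v_curr), -1.0, 1.0)
    return np.arccos(cos_ang) / dt
```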
Specifically, as shown in fig. 2 and 7, the present invention installs a binocular camera 103 used by an image capturing unit on a bracket 102 directly opposite to a person in a work scene, and the bracket 102 is fixed on a base plate 101. An infrared auxiliary light source 104 for assisting visual line detection is also fixed on the bracket 102, the light color sensing unit is fixed in the light color sensing block 106 area of the bottom surface, and the keys of the user interface unit are arranged in the key block 107 area at the other end of the light color sensing block 106 symmetrical with respect to the bracket.
Referring to fig. 10, in order to detect and determine the viewpoint of the learner in the image processing, a reasonable working area needs to be preset in the working plane. For this purpose, a work area setting unit 109 is added to the apparatus. The working area setting unit 109 is supported at the top end of the bracket 102 by a pivot 1091 at the center, and four triangular adjusting plates 1092 are movably connected to the pivot 1091 and symmetrically distributed at the left, right, front and back. As shown in fig. 11a, a first adjusting shaft 1093 is connected between the left and right adjusting plates 1092, a second adjusting shaft 1094 is connected between the front and rear adjusting plates 1092, and a rectangular light groove 1095 is formed on the bottom edges of the four adjusting plates. The two adjusting shafts are staggered in the longitudinal height.
As shown in fig. 11b, the first and second adjusting shafts are driven by a motor 1096 to drive two driving rods 1097 moving in opposite directions, wherein the driving rods are connected to the inner side of the adjusting plate. As shown in fig. 11c, the drive rods 1097 of the two adjustment shafts may also be connected to the adjustment plates by a link 1098. As shown in fig. 11d, the lamp groove 1095 at the end of the adjusting plate is embedded with an LED lamp bead 951, a glass cover 952 is arranged outside the lamp bead, and the light rays of the LED are collected into a strip shape by a light collecting sheet 953 around the glass cover.
As shown in fig. 10 and 11b, the rectangular light trough 1095 emits a strip-shaped light spot GS. The control unit drives the first adjusting shaft and the second adjusting shaft by controlling the motor to rotate, so that the inclination angles of the left and right adjusting plates and the front and back adjusting plates relative to the horizontal plane are respectively changed, and a rectangular area is defined on the horizontal plane of the working surface through four strip-shaped light spots and serves as a preset working surface block. When the motor rotates clockwise, the driving rod drives the adjusting plate to move outwards, so that the inclination angle of the adjusting plate relative to the horizontal plane is reduced, the strip-shaped light spots move outwards, and the working surface area block is enlarged; conversely, when the motor rotates counterclockwise, the working surface area shrinks. Preferably, 4 buttons may be provided in the buttons of the user interface unit to adjust the expansion and contraction of the work surface block in the left-right and front-rear directions, respectively. The range of the working face block can be recorded by the rotation angle of a motor and other mechanisms.
Through the online adjustment of the working face blocks, the acquisition of the detection sample is greatly facilitated, and the accuracy and the applicability of the sample acquisition are improved. And after the working face block is set, the light emitting of the lamp groove is closed through key operation.
As shown in fig. 12, the line of sight acquired by the image processing unit is a v-ray passing through point P0. In the working horizontal plane G2, the preset working surface block is a rectangular region G1 with GA, GB, GC, GD as corner points, the normal vector of the working plane is u, and the world coordinate system is O-XYZ, then the coordinates of the intersection point P1 of the sight line and the working plane can be calculated.
First, the parametric equation of the gaze ray is
P(t) = P0 + t·v,
wherein t is an independent parameter. Substituting this into the equation of the working plane,
u·(P(t) − GA) = 0,
gives the parameter value of the intersection,
t* = u·(GA − P0) / (u·v),
so that the coordinates of the intersection point P1 of the line of sight with the working plane are
P1 = P0 + t*·v.
As shown in fig. 12, in the G2 plane the region outside the working surface block is divided into eight regions I to VIII according to the four corners of the block. If the viewpoint P1 is not located inside the work surface block, the region in which it falls is determined first, and the shortest distance d between the viewpoint and the work surface block is then calculated according to that region. Specifically, if the viewpoint falls in one of the diagonal regions II, IV, VI or VIII, the distance between the viewpoint and the corresponding corner point is calculated; otherwise, the distance in the X direction or the Y direction between the viewpoint and the corresponding corner point is calculated. As shown in the figure, P1 lies in region V, so
d = |xP1 − xGD|.
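The intersection and distance computation just described can be illustrated with a short Python/NumPy sketch (not part of the original disclosure; the function names, coordinates and example values are hypothetical):

```python
import numpy as np

def gaze_plane_intersection(p0, v, plane_point, u):
    """Intersect the gaze ray P(t) = p0 + t*v with the plane through
    plane_point having normal u; returns the intersection point P1."""
    t = np.dot(u, plane_point - p0) / np.dot(u, v)
    return p0 + t * v

def distance_to_block(p1, x_min, x_max, y_min, y_max):
    """Shortest in-plane distance d from viewpoint p1 to the rectangular
    work surface block; zero when p1 falls inside the block."""
    dx = max(x_min - p1[0], 0.0, p1[0] - x_max)
    dy = max(y_min - p1[1], 0.0, p1[1] - y_max)
    return np.hypot(dx, dy)

# Example: gaze from about 40 cm above the desk, looking down and to the side.
p0 = np.array([0.10, 0.05, 0.40])      # pupil position (m), hypothetical
v = np.array([0.30, 0.10, -1.0])       # gaze direction, hypothetical
p1 = gaze_plane_intersection(p0, v, np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(p1, distance_to_block(p1, -0.15, 0.15, -0.10, 0.10))
```

Clamping the X and Y offsets to zero reproduces the case distinction above: in the diagonal regions both offsets are positive and the result is the distance to the corner, while in the side regions only one offset is positive and the result is the axis-direction distance.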
With reference to fig. 2, 3 and 6, the present invention uses a neural network to model the mapping between the lighting conditions of the environment and the attention of the person. Specifically, the RBF neural network shown in fig. 6 is established; it takes as input quantities 3 parameters, namely the illuminance of the working surface, the color temperature and the continuous learning time, and takes as output quantities 6 parameters, namely the 3 sign parameters used to characterize attention (the learner's eye opening, gaze concentration and heart rate) together with their corresponding attention factor values. The attention factor value of each sign parameter is quantified according to the following process.
Fig. 13 is a schematic diagram of a normalized sign data sequence; the recorded sequence is the eye opening data after filtering, and the midpoint of the interval in which the sign quantity has its maximum probability is normalized to 1.
T1. For the eye opening sequence de, which contains many high-frequency components, window average filtering is first performed with the following formula to obtain the eye opening e at the current moment,
e(k) = (1/L)·Σ(i=0..L−1) de(k−i),
then, a down-sampling sequence Xe of the eye opening degree is obtained by moving the window at intervals,
Xe={e(0),e(Ts),e(2Ts),...},
Next, the sequence Xe is fitted with the function y = a·e^(−b·x) to obtain the variation trend of the eye opening. The opening change time tu is then obtained from the fitted function,
(formula given as an image in the original document)
wherein L is the window width, Ts is the down-sampling interval, a and b are both fitting coefficients, E1 and E2 are two thresholds of the eye opening degree, and for the normalized eye opening degree value sequence, the values of E1 and E2 are between 0 and 1.
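An illustrative sketch of step T1, assuming the eye opening sequence is already normalized; the window width L, the down-sampling interval Ts, the thresholds E1 and E2 and the closed form used for tu are assumptions made for this example, since the patent gives its exact formulas only as figure images:

```python
import numpy as np
from scipy.optimize import curve_fit

def eye_opening_trend(de, L=25, Ts=10, E1=0.8, E2=0.3):
    """Window-average the raw eye-opening sequence de, down-sample it at
    interval Ts, fit y = a*exp(-b*x) and return (current opening e, tu)."""
    de = np.asarray(de, dtype=float)
    kernel = np.ones(L) / L
    e_filt = np.convolve(de, kernel, mode="valid")   # moving-window average
    e_now = e_filt[-1]
    Xe = e_filt[::Ts]                                # Xe = {e(0), e(Ts), ...}
    x = np.arange(len(Xe), dtype=float)
    (a, b), _ = curve_fit(lambda x, a, b: a * np.exp(-b * x), x, Xe,
                          p0=(1.0, 0.01), maxfev=5000)
    # assumed reading of tu: time for the fitted curve to fall from E1 to E2
    tu = np.log(E1 / E2) / b * Ts
    return e_now, tu
```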
Then, as shown in fig. 14a and 14b, the first and second biometric values of the eye opening are calculated based on the eye opening e and the opening change time tu,
(formulas for ke1 and ke2, given as images in the original document)
wherein be and ce are the lower and upper limits of the interval that, according to statistics, covers a set proportion of the eye opening values in the normal state, and ae and de are two further preset lower and upper limits, respectively; btu is the upper limit of the eye opening change time that covers a set proportion within the current continuous learning time range in the normal state, and atu is a set lower limit;
the attention factor value of the eye opening is then calculated as ke = ke1·ke2.
And T2, detecting the intersection point of the learner's sight line and the working surface, if the intersection point falls outside the range of the preset working surface block, calculating the shortest distance from the intersection point to the working surface block and recording the time length of the corresponding sight point continuously exceeding the preset range, for the distance sequence dd, obtaining the current sight line offset distance d through window average filtering, and simultaneously calculating the maximum time length td of the corresponding window time range in which the sight point continuously exceeds the preset range. If the intersection point falls within the working face block, the assigned distance d is zero.
As shown in fig. 14c and 14d, the first and second body characteristic values of the gaze concentration are calculated from the distance d and the time length td,
(formulas for kd1 and kd2, given as images in the original document)
wherein a and b are fitting coefficients, and the larger the values of a and b are, the faster the function value is reduced; td is the maximum time length that the view point continuously exceeds the preset range and covers the set proportion in the current continuous learning time range in the normal state, and sigma is a preset width value;
the attention factor value of the gaze concentration is then calculated as kd = kd1·kd2.
T3. For the heart rate, the variation interval is much smaller and the variation period is long, so the attention factor value is obtained as follows. As shown in fig. 13, two dotted lines are drawn at δ% above and below the unit value on the vertical axis. Taking the current time as the center, and using the up-down fluctuation interval set around the expected heart rate in the normal state, the number of times N that the data fluctuation exceeds the interval within a preset time length and the proportion Rb of heart rate samples lying within the interval over that time length are counted,
N = N+ + N−,
wherein N+ is the number of excursions above the interval and N− is the number of excursions below the interval.
Respectively calculating a first body characteristic value and a second body characteristic value of the heart rate according to the times N and the ratio Rb,
(formulas for kb1 and kb2, given as images in the original document)
the method comprises the following steps that TN is the maximum number of times that a preset proportion is covered in a current continuous learning time range and a heart rate exceeds a fluctuation interval range in a normal state, sigma N is a preset width value, and aRb and bRb are two proportion threshold values set according to statistics;
the attention factor value of the heart rate is then calculated as kb = kb1·kb2.
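The counting part of step T3 can be sketched as follows (illustrative only; the band half-width delta and the excursion-counting convention are assumptions, and the kb1/kb2 formulas themselves are given only as figure images in the original):

```python
import numpy as np

def heart_rate_stats(hr, delta=0.05):
    """Count excursions of a normalized heart-rate sequence outside the band
    [1 - delta, 1 + delta] (N = N_plus + N_minus) and the proportion Rb of
    samples lying inside the band."""
    hr = np.asarray(hr, dtype=float)
    above = hr > 1.0 + delta
    below = hr < 1.0 - delta
    # an excursion starts whenever a sample first enters the out-of-band region
    n_plus = int(np.sum(above[1:] & ~above[:-1]) + above[0])
    n_minus = int(np.sum(below[1:] & ~below[:-1]) + below[0])
    rb = float(np.mean(~(above | below)))
    return n_plus + n_minus, rb
```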
The preset parameters used in the quantization, such as E1 and E2, can be gradually reduced as the continuous learning time increases, and these two parameters can also be set as relative proportions; the other preset parameters can be dynamically adjusted in a similar way. In the heart rate processing, the δ defining the fluctuation interval can be set according to statistics: δ is chosen so that, in the normal state, the probability of the sign data falling within the dotted-line interval equals a probability threshold, whose value lies between 0.92 and 0.98. The normal state refers to sign detection samples of the learner collected under comfortable, higher-grade illumination conditions.
In calculating the attention factor values of the various signs, the eye opening, gaze concentration and heart rate data are each processed in a way that respects the characteristics of that sign while still reflecting a consistent evaluation standard.
Based on the 3 sign parameters, namely the eye opening, the gaze concentration and the heart rate, and their 3 attention factor values, a light environment evaluation function is defined as
F = Σi wi·fi,
wherein fi are the attention parameter evaluation values of the eye opening, the gaze concentration and the heart rate respectively, wi is the corresponding weight, and each fi is defined as follows:
(formulas for f1 and f2, given as an image in the original document)
f3=fp1·fp2·kb,
wherein
(formulas for fp1 and fp2, given as images in the original document)
and e is the eye opening value at the current moment, eT is the eye opening threshold, and ke is the attention factor value of the eye opening; dp is the gaze offset distance, dS is the corresponding distance threshold, and kd is the attention factor value of the gaze concentration; p is the number of heart beats in the current unit time, i.e. the heart rate, pT is the corresponding threshold, ap is the change of the heart rate per unit time, a1 and a2 are the corresponding thresholds of the heart rate change rate, i.e. the heart rate acceleration, a3 is the set width of the change-rate interval, and kb is the attention factor value of the heart rate.
The established evaluation function F scores higher when the learner is concentrating and lower otherwise. Since the attention evaluation involves multiple factors, this is a multi-objective optimization problem whose optimum is a Pareto solution. The problem is solved with a multi-objective genetic algorithm (MOGA). The genetic algorithm, which mimics the natural-selection process of survival of the fittest, provides an effective way to solve such optimization problems; thanks to its robustness and global convergence it is widely used in production scheduling, communication, circuit design and many other fields.
In the MOGA solution, the coding and decoding strategies for the light color parameters, such as the working surface illuminance and the color temperature, are first determined together with their respective value intervals. In each evolution iteration, the trained RBF neural network predicts, for the light color parameters corresponding to each individual in the population and the current continuous learning time, the 6 output quantities, namely the 3 sign parameters (eye opening, gaze concentration and heart rate) and their attention factor values; based on these predictions the evaluation value is computed with the evaluation function F, crossover, inheritance and mutation are applied according to the evaluation values, and the evolved population is updated; the iteration repeats until the optimization ends and the Pareto optimization solution is output.
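An illustrative sketch of this optimization loop, assuming a trained surrogate model is available; predict_attention and evaluate_F are hypothetical stand-ins for the trained RBF network and the evaluation function F, and a single scalar fitness is used here for brevity, whereas a full MOGA would maintain a Pareto archive:

```python
import numpy as np

rng = np.random.default_rng(0)

def optimise_light(predict_attention, evaluate_F, t_learn,
                   bounds=((100.0, 800.0), (2700.0, 6500.0)),
                   pop=40, gens=60, pc=0.8, pm=0.1):
    """Evolve (illuminance, colour temperature) pairs; predict_attention maps
    (E, K, t_learn) to the 6 predicted outputs, evaluate_F aggregates them."""
    lo = np.array([b[0] for b in bounds]); hi = np.array([b[1] for b in bounds])
    P = rng.uniform(lo, hi, size=(pop, 2))
    for _ in range(gens):
        fit = np.array([evaluate_F(predict_attention(E, K, t_learn)) for E, K in P])
        # binary tournament selection
        idx = rng.integers(0, pop, size=(pop, 2))
        parents = P[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # arithmetic crossover followed by Gaussian mutation
        child = parents.copy()
        for i in range(0, pop - 1, 2):
            if rng.random() < pc:
                a = rng.random()
                child[i], child[i + 1] = (a * parents[i] + (1 - a) * parents[i + 1],
                                          a * parents[i + 1] + (1 - a) * parents[i])
        mut = rng.random(child.shape) < pm
        child = np.where(mut, child + rng.normal(0, 0.05, child.shape) * (hi - lo), child)
        P = np.clip(child, lo, hi)
    fit = np.array([evaluate_F(predict_attention(E, K, t_learn)) for E, K in P])
    return P[np.argmax(fit)], fit.max()
```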
Preferably, a learning mode key is provided in the user interface unit. When the reading mode is selected with this key, two further parameters, namely the gaze movement rate and its attention factor value, are added to the output quantities of the neural network; the attention factor value of the gaze movement rate is calculated as follows:
referring to fig. 15, an intersection point P1 of the learner's gaze with the working plane Z1 is detected within a preset time length Tp, a circumscribed rectangle Z2 of the outermost viewpoint is found for the set of intersection points falling within a preset working plane block range, and the gaze movement rate is calculated based on the length X and width Y of the rectangle,
(formula for the gaze movement rate, given as an image in the original document)
then the attention factor value kv of the gaze movement rate is calculated as
(formula given as an image in the original document)
wherein avs and bvs are two rate thresholds respectively set according to statistics,
Correspondingly, a gaze movement rate attention parameter evaluation value f4, with f4 = kv, is added to the light environment evaluation function.
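A sketch of the gaze movement rate computation under stated assumptions: the bounding rectangle follows the description above, but the way its length X and width Y are combined into a rate is an assumption, since the patent's exact expression is given only as a figure image.

```python
import numpy as np

def gaze_movement_rate(viewpoints, Tp):
    """Bounding rectangle of the viewpoints collected during Tp seconds and
    a simple rate built from its length X and width Y."""
    pts = np.asarray(viewpoints, dtype=float)
    X = pts[:, 0].max() - pts[:, 0].min()
    Y = pts[:, 1].max() - pts[:, 1].min()
    return (X + Y) / Tp     # assumed combination; not the patent's formula
```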
Preferably, a mouth shape detector is provided to detect the mouth opening feature; correspondingly, a mouth opening sign parameter for characterizing attention is added to the output quantities of the neural network, and the attention factor value of the mouth opening is the product of the mouth opening sign value and the sign value of the duration for which the mouth remains open,
the mouth opening degree sign value is obtained by calculation according to a semi-normal distribution function with zero opening degree as a vertex, and the mouth continuous opening duration sign value is obtained by calculation according to another semi-normal distribution function with zero duration as a vertex;
correspondingly, a mouth opening attention parameter evaluation value which takes the mouth opening attention factor value is also added in the light environment evaluation function.
Referring to fig. 6, the model of the RBF neural network is as follows.
The output of the ith node of the hidden layer is as follows:
hi = exp( −||X − Ci||² / (2·σi²) ),
the output of the jth node of the output layer is as follows:
yj = Σ(i=1..p) wij·hi,
wherein the dimension of the input vector X is 3, the number of nodes of the hidden layer H is p, the dimension of the output vector Y is 5, Ci is the center of the Gaussian function of the ith hidden node, σi is the width of that Gaussian function, ||X − Ci|| is the Euclidean distance between the vectors X and Ci, and wij is the weight from the ith hidden node to the jth output node;
at this point the RBF neural network takes as input quantities 6 parameters in total, namely the 5 light color parameters (the illuminance, the color temperature and the xyz color coordinate values of the color of the working surface) plus the continuous learning time.
The width σi of a hidden layer node can be determined by the following equation:
(formula given as an image in the original document)
wherein Di is the maximum distance between the center of the ith hidden node and the other centers.
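An illustrative Python/NumPy sketch of this RBF model (the names are hypothetical, and the width rule in centre_widths is an assumed heuristic because the patent's σi formula is given only as a figure image):

```python
import numpy as np

def rbf_forward(X, C, sigma, W):
    """RBF forward pass: X is an input vector or a batch of inputs, C the
    (p, d) centre matrix, sigma the (p,) widths, W the (p, m) output weights."""
    X = np.atleast_2d(X)
    # Gaussian hidden activations h_i = exp(-||X - C_i||^2 / (2 sigma_i^2))
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    H = np.exp(-d2 / (2.0 * sigma ** 2))
    return H @ W, H          # network outputs y_j = sum_i w_ij h_i, plus H

def centre_widths(C):
    """Width of each centre from its largest distance D_i to the other centres
    (assumed rule; the patent's exact formula is in its figure)."""
    D = np.linalg.norm(C[:, None, :] - C[None, :, :], axis=-1)
    return D.max(axis=1) / np.sqrt(2.0 * len(C))
```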
In the initial stage of modeling and evaluating attention with this method, training samples are scarce; while the variation of illuminance, color temperature and color components in the sample set is still insufficient, each sample X is used directly as the center vector Ci of a hidden layer node. As the samples become richer, the number of hidden layer nodes and their center vectors Ci are determined with a K-means clustering algorithm. To obtain sufficient training samples, the person may first collect samples in a preferred environment in which the lightness and chroma can be adjusted over a wider range.
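The center selection described above can be sketched with scikit-learn's KMeans (illustrative only; names are hypothetical):

```python
import numpy as np
from sklearn.cluster import KMeans

def choose_centres(samples, p):
    """With few samples, every sample is its own centre; once the sample set
    is rich enough, K-means picks p hidden-node centres C_i."""
    X = np.asarray(samples, dtype=float)
    if len(X) <= p:
        return X.copy()
    km = KMeans(n_clusters=p, n_init=10, random_state=0).fit(X)
    return km.cluster_centers_
```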
Because the value ranges of the network inputs and outputs may differ greatly, the sample data are first normalized to improve the effectiveness of the data, mapping them into the [0, 1] value space. The performance index of the network approximation, i.e. the total average error function, is:
E = (1/(2N))·Σ(k=1..N) ||Yk − Ŷk||²,
wherein N is the total number of samples in the training sample set, k is the sample number, Ŷk is the actual network output for the input Xk, and Yk is the desired output for the input Xk. In the RBF network training process, the parameters are adjusted so that the network approximates the corresponding mapping in the least-squares sense, i.e. so that E reaches its minimum; to this end, a gradient descent method can be used to correct the weights from the hidden layer to the output layer so that the objective function is minimized.
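A sketch of the normalization and of gradient descent on the hidden-to-output weights, reusing the hidden-layer activation matrix H from the RBF sketch above; the learning rate and epoch count are illustrative assumptions:

```python
import numpy as np

def normalise(data):
    """Map each column of the sample data into [0, 1]."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    return (data - lo) / np.where(hi > lo, hi - lo, 1.0), lo, hi

def train_output_weights(H, Y, lr=0.05, epochs=2000):
    """Gradient descent on the hidden-to-output weights W so that the error
    E = (1/(2N)) * sum_k ||Y_k - H_k W||^2 is minimised."""
    W = np.zeros((H.shape[1], Y.shape[1]))
    N = len(Y)
    for _ in range(epochs):
        err = H @ W - Y                  # actual output minus desired output
        W -= lr * (H.T @ err) / N        # dE/dW
        # (a stopping test on E or on the iteration count could be added here)
    return W
```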
In the application of the device and the method, the light color acquisition unit collects signals both when training samples are gathered and when the trained network is used to predict the attention parameters; image acquisition, however, is only required when the current task is to collect training samples, and is not needed when the task is prediction.
In order to improve the generalization capability of the neural network, enough training samples are collected. The invention sends out dimming signals to the lamp group through the output module or the user interface unit, and obtains the training sample set of the artificial neural network based on the light color sensing unit, the image acquisition unit and the heart rate acquisition unit for the light environment after each change.
As shown in fig. 2, in an environment where the method is tested or used, preferably, the dimmable light set 180 is a dimmable LED light set, and the driver 181 adjusts the driving current value of each LED light 182 in the light set, and the driver adjusts the light output by changing the PWM duty cycle of the driving current of each channel of the LED light.
Preferably, the processing module changes the light output of the LED lamp set stepwise within the known dimming range of the lamp set, and sends the PWM duty ratio of each channel current to the driver as signals through the communication interface of the output module. By continuously changing the working point in the illumination vector space, the processing module obtains enough network training samples; the sampling points can be sparse near the end values of each light color variable and denser in the middle regions, for example around a color temperature of 4500 K and an illuminance of 300 lx to 500 lx. The collected samples are stored in the storage module.
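An illustrative way to generate such a non-uniform sampling grid (the ranges and the warping exponent are assumptions for this example, not values from the patent):

```python
import numpy as np

def sampling_points(n_e=9, n_k=9):
    """Non-uniform sampling of (illuminance, colour temperature): a power
    warp pulls evenly spaced points towards the middle of each range, so the
    grid is denser around roughly 300-500 lx and 4500 K."""
    u = np.linspace(-1.0, 1.0, n_e)
    e = 400.0 + 350.0 * np.sign(u) * np.abs(u) ** 1.8     # ~50 .. 750 lx
    v = np.linspace(-1.0, 1.0, n_k)
    k = 4500.0 + 2000.0 * np.sign(v) * np.abs(v) ** 1.8   # ~2500 .. 6500 K
    E, K = np.meshgrid(e, k)
    return np.column_stack([E.ravel(), K.ravel()])
```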
The first iterative learning module 154 obtains 5 actual output values corresponding to the training samples from the processing module 151 through the first connection switch 153, obtains 5 mapping values of 6 input values corresponding to the training samples after being processed by the neural network from the RBF neural network 152, adjusts the neural network structural parameters according to the 5 actual output values and the 5 mapping values to train the neural network, and repeats the training until a preset training frequency is reached or a target function is less than a set threshold. And storing the trained network structure parameters in a storage module.
Parameters such as the preset values required by the control unit for processing are entered through the keys of the user interface unit. After a person enters a new learning environment, the trained neural network can, based on its generalization ability, predict and judge what level of attention the learner will maintain under the ambient lighting conditions, and the predicted result is displayed or output through the output module.
As shown in fig. 7, the output module 155 preferably includes a display bar 105 for indicating the current concentration level of the learner. Alternatively, the output module may employ the display screen 1551 and a plurality of separate display bars to display the respective factor evaluations of attention, respectively.
Preferably, the output module 155 further includes a communication interface 1552, and outputs the detected or predicted attention factor values to the outside through the interface module.
Preferably, when the image capturing unit is a monocular camera, as shown in fig. 8, a plurality of calibration blocks 108 with known positions may be provided on the surface of the base plate, each of the calibration blocks having a circular light spot, and a calibration confirmation key may be provided in the user interface unit, and the control unit may perform distance calibration through the calibration blocks: and the calibration blocks are lighted in turn, the image of the face of the learner is collected through the image collection unit after the calibration confirmation key is pressed, the sight line direction of the human eyes is extracted based on the collected image, and the extraction result is compared with the position of the calibration blocks so as to calibrate the detection parameters of the sight line direction.
When the learner is distracted by emotions or the like, the collected samples deviate greatly from those of the normal state; although the neural network has good fault tolerance, too many such samples impair the accuracy of the network. For this purpose, a cancel-sampling key is preferably provided in the user interface unit, and the control unit suspends data sampling and sample recording after detecting that this key is pressed.
Referring to fig. 8 and 9, when the learner is not satisfied with the current optimized lighting effect, the present invention also performs fine adjustment through a slide input 1071 in the user interface unit, in which a cursor 1072 is provided inside the slide input 1071. When the cursor is positioned at the middle position on the sliding input device, the light color scoring standard is not changed; when the cursor moves left, the learner is shown to expect that the lamp emits light which is a little darker than the current illumination, and the score of the light color combination with the illumination lower than the current illumination is improved; otherwise, when the cursor moves to the right, the learner hopes that the lamp emits light which is brighter than the current illumination, and the score of the light color combination with the illumination higher than the current illumination is increased.
Accordingly, the step S6 further includes:
after a total evaluation value F is calculated according to the luminous environment evaluation function, the evaluation value is adjusted according to the position of a cursor after a learner operates the sliding input device:
F'=F·(1+η·Δ),
(formulas given as images in the original document)
wherein E is the illuminance of the light to be scored for the current individual, E0 is the illuminance of the current light and corresponds to the middle position of the sliding input device, the left and right end positions correspond respectively to 0.9 and 1.1 times the illuminance E0 as the cursor slides towards the left and right ends of the sliding input device, En is the illuminance corresponding to the cursor position after the user operates, Δ is the set threshold for adjusting the score according to the degree of movement, η is the adjustment coefficient, and F and F' are the score values before and after the adjustment, respectively.
The current illumination is the illumination corresponding to the optimization result of dimming sent to the lamp group driver, and by moving the cursor, the learner can finely adjust the evaluation function, so that the evaluation standard is closer to the preference of the learner. Preferably, the color temperature can be adjusted by a similar method.
And obtaining the photochromic parameters with high attention evaluation after optimization and solution. Then, the dimming mapping unit maps the optimized light color parameters into driving current values of each driving current channel of the lamp group and transmits the current values to drivers in the lamp group, thereby obtaining an illumination environment which is helpful for a learner to keep or improve attention.
The dimming mapping unit converts the light color parameters into a mapping of the lamp group driving current, which may be based on various means. For example, the light color space to driving current space look-up table may be generated in advance.
For the sake of simplicity, without loss of generality, the color parameters in the above-mentioned light color parameters are removed, and only 2 parameters of the working surface illuminance and the color temperature are considered.
As shown in fig. 16, as a common dimmable light set, it is assumed that the light set includes two LED strings of high color temperature and low color temperature, and each LED string corresponds to one driving current channel, as shown in fig. 16a, where n is 2. The dimming mapping unit comprises a lookup table from a light color space consisting of working surface illumination and color temperature to a dual-channel driving current space, and for the optimization result (E0, K0), a dual-channel driving current value is obtained by interpolation in the lookup table.
First, the four points surrounding P(E0, K0) in the light color space are found: A(E1, K1), B(E2, K1), C(E1, K2) and D(E2, K2), where E1 ≤ E0 ≤ E2 and K1 ≤ K0 ≤ K2.
The two-channel current value (i01, i02) is then obtained by interpolation, with the distances used as weights,
(interpolation formulas for i01 and i02, given as images in the original document)
wherein d1 is the shortest of the distances from P to the four points, d2 the second shortest, and so on, and dT is the sum of all the distances; i11 and i21 are respectively the current values of the two channels at the point with the shortest distance; the four points surrounding the point P being looked up are given different weights according to their distances, the nearest point receiving the largest weight.
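A sketch of this table interpolation using inverse-distance weights; the exact weighting in the patent is given only as figure images, so the weighting below is an assumption that preserves the stated property that the nearest point receives the largest weight:

```python
import numpy as np

def interpolate_current(P0, neighbours):
    """Interpolate the two channel currents at the optimised point
    P0 = (E0, K0) from its four surrounding look-up-table entries."""
    pts = np.array([n[0] for n in neighbours], dtype=float)   # (E, K) of A..D
    cur = np.array([n[1] for n in neighbours], dtype=float)   # (i1, i2) at each point
    d = np.linalg.norm(pts - np.asarray(P0, dtype=float), axis=1)
    w = 1.0 / np.maximum(d, 1e-9)       # nearest point gets the largest weight
    w /= w.sum()
    return w @ cur

# usage with hypothetical table entries around (E0, K0) = (420 lx, 4100 K)
neigh = [((400, 4000), (310, 180)), ((450, 4000), (350, 175)),
         ((400, 4300), (305, 210)), ((450, 4300), (345, 205))]
print(interpolate_current((420, 4100), neigh))
```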
Example 2
The dimming mapping unit converts the light color parameters into the mapping of the lamp group driving current, and can also be based on a conversion polynomial from a light color space to a driving current space, which is generated through least square regression.
For the sake of simplicity, without loss of generality, only 2 parameters of the working surface illumination and color temperature are considered. As shown in fig. 16, as a common dimmable light set, it is assumed that the light set includes two LED strings of high color temperature and low color temperature, and each LED string corresponds to one driving current channel, as shown in fig. 16a, where n is 2. As shown in the second mapping of fig. 2, the dimming mapping unit includes a conversion polynomial from a light color space composed of the illumination and the color temperature of the working surface to a two-channel driving current space.
Assuming that a conversion polynomial from a light color space composed of the illumination and the color temperature of the working surface to a two-channel driving current space is as follows:
i1 = α1·E + α2·K + α3·E·K + α4·E² + α5·K²,
i2 = β1·E + β2·K + β3·E·K + β4·E² + β5·K².
The above equations are written in matrix form as i = A·q,
wherein the current vector is i = [i1, i2]^T, the coefficient matrix is
A = [ α1 α2 α3 α4 α5 ; β1 β2 β3 β4 β5 ],
and the transformation vector is q = [E, K, E·K, E², K²]^T.
Samples are obtained after the driving current is adjusted to change the light color; the vectors i and q of the individual samples are arranged as columns and combined into the matrices I and Q, so that: I = A·Q.
The coefficient matrix A can be solved by the least squares method, as follows:
A = I·Q^T·(Q·Q^T)^(−1).
Thus, for the optimization result (E0, K0), the two-channel driving current values i01 and i02 are calculated from the polynomial:
i01 = α1·E0 + α2·K0 + α3·E0·K0 + α4·E0² + α5·K0²,
i02 = β1·E0 + β2·K0 + β3·E0·K0 + β4·E0² + β5·K0².
Other nonlinear polynomial models, for example one with 9 terms, can also be selected for the regression; the model is refined by adding polynomial terms, in which case the transformation vector becomes:
q' = [E, K, E·K, E², K², E·K², K·E², E³, K³]^T.
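The regression and the final mapping can be sketched as follows (illustrative only; at least five samples with linearly independent q vectors are required for the inverse to exist):

```python
import numpy as np

def transform(E, K):
    """Transformation vector q = [E, K, EK, E^2, K^2]^T."""
    return np.array([E, K, E * K, E ** 2, K ** 2], dtype=float)

def fit_mapping(samples):
    """Least-squares fit of the 2x5 coefficient matrix A from samples of the
    form ((E, K), (i1, i2)) collected while sweeping the driving currents."""
    Q = np.column_stack([transform(E, K) for (E, K), _ in samples])   # 5 x N
    I = np.column_stack([np.array(i) for _, i in samples])            # 2 x N
    return I @ Q.T @ np.linalg.inv(Q @ Q.T)                           # A = I Q^T (Q Q^T)^-1

def currents(A, E0, K0):
    """Map an optimised light-colour point (E0, K0) to (i01, i02)."""
    return A @ transform(E0, K0)
```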
example 3
Different from embodiment 1, this embodiment replaces the dimming mapping unit with a BP neural network established in the control unit to implement mapping for converting the light color parameter into the lamp set driving current.
The established BP neural network takes 5 light color parameters including the illumination of the working surface, the color temperature and the xyz color coordinate value of the color as input quantities, and takes the current values of all the driving current channels of the lamp set as output quantities. As shown in fig. 3, the second mapping is implemented by the BP network in this embodiment.
Referring to fig. 16b, the lamp set uses a three-primary-color LED lamp string, and the driving current thereof includes three RGB channels, and the output of the BP network is 3. At this time, the driving current value of one of the channels is changed to change the light color of the lamp. When the three channel currents are increased or decreased in synchronization from a certain state, the lamp exhibits no change in color but a brightness that gradually increases or decreases.
The model of the BP neural network is:
The output of the jth node of the hidden layer is
hj = f( Σi wij·xi − aj ),
The p-th node of the output layer outputs
yp = f( Σj vjp·hj − bp ),
wherein the f() function is the sigmoid function, wij and vjp are respectively the connection weights from the input layer to the hidden layer and from the hidden layer to the output layer, aj and bp are respectively the hidden layer and output layer thresholds, and k is the number of hidden layer nodes.
The total error criterion function of the BP neural network for N training samples is:
E = (1/2)·Σ(k=1..N) Σp ( tp(k) − yp(k) )², where tp(k) and yp(k) are respectively the desired and actual outputs of output node p for the kth sample.
in order to minimize the total error, a gradient descent method is used for network training.
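A minimal sketch of one forward pass of this BP network (the shapes and random weights are hypothetical; in practice the weights come from training and the outputs are de-normalized to actual channel currents):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bp_forward(x, W, a, V, b):
    """Forward pass of the dimming BP network: x holds the 5 light-colour
    parameters, W/a are input-to-hidden weights and thresholds, V/b are
    hidden-to-output weights and thresholds; one output per current channel."""
    h = sigmoid(x @ W - a)        # hidden layer: f(sum_i w_ij x_i - a_j)
    return sigmoid(h @ V - b)     # output layer: f(sum_j v_jp h_j - b_p)

# hypothetical shapes: 5 inputs, k = 8 hidden nodes, w = 3 current channels
rng = np.random.default_rng(1)
W, a = rng.normal(size=(5, 8)), np.zeros(8)
V, b = rng.normal(size=(8, 3)), np.zeros(3)
print(bp_forward(np.array([0.5, 0.4, 0.3, 0.3, 0.3]), W, a, V, b))
```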
Specifically, for the BP neural network, the establishment is performed in the step S1, and the following processing procedures are further added in the step S2:
when the dimming signal is sent out, the driving current values of the w driving current channels corresponding to the dimming are recorded; these current values, together with the light color parameters obtained by processing after dimming, form a training sample of the BP neural network.
The following processing procedures are also added in the step S3: acquiring a training sample set of the BP neural network, and training the BP neural network by using the sample set;
the mapping of the optimization result in the step S8 is performed by the trained BP neural network.
Referring to fig. 4, the second iterative learning module 159 obtains 3 actual output values corresponding to the training samples from the processing module 151 through the second connection switch 158, obtains 3 mapping values of 5 input values corresponding to the training samples after being processed by the neural network from the BP neural network 157, adjusts the neural network structure parameters according to the 3 actual output values and the 3 mapping values to train the neural network, and repeats the training until a preset number of training times is reached or the objective function is smaller than the set threshold. And storing the trained network structure parameters in a storage module.
Example 4
A person may often study in one or several fixed environments that share the same luminaire configuration. In this case, in order to realize illumination with a high attention factor, the intermediate light color conversion link in the chain from driving current to attention can be omitted, and the driving current values can be mapped directly to the attention parameter values.
Referring to fig. 3, the third mapping is implemented by using an RBF neural network in the present embodiment.
In still another embodiment of the present invention, there is also provided an attention factor-based lighting control method, including the steps of:
s1, initializing: an RBF neural network is established in the control unit; the network takes as input quantities the w illumination parameters, namely the driving currents of the u LED strings in the lamp group and the v illumination angles, plus the continuous learning time, and takes as output quantities 6 attention factor parameters, namely the learner's 3 sign parameters (eye opening, gaze concentration and heart rate) and their corresponding attention factor values;
s2, sending a dimming signal to the dimmable lamp set through an output module of the control unit, carrying out signal acquisition on the changed luminous environment based on the image acquisition unit and the heart rate acquisition unit, and processing the acquired signal by a processing module to obtain a training sample of the RBF neural network;
s3, repeating the step S2 for multiple times, obtaining a training sample set of the RBF neural network, and training the RBF neural network by using the sample set;
s4, determining strategies for coding and decoding the w illumination parameters, determining respective value intervals of the strategies, and initializing an evolutionary population;
s5, for each individual of the evolution population in the search space, predicting the attention factor parameters with the trained RBF neural network from the w illumination parameters and the current continuous learning time, obtaining predicted values of the 6 output quantities, namely the eye opening, the gaze concentration, the heart rate and their respective attention factor values;
s6, calculating the attention evaluation value according to an evaluation function based on the predicted value, performing cross inheritance and mutation operation according to the evaluation value, and updating the evolution population;
s7, turning to the step S5, repeating iteration until the optimization is finished, and outputting a Pareto optimization solution;
and S8, transmitting the driving current value and the irradiation angle of each driving current channel corresponding to the optimized solution to the driver in the lamp group for dimming.
Preferably, if within the same light environment the work surface can shift position relative to the lamps, the distance from each lamp to the work surface is also added to the input quantities of the RBF neural network. The training sample set then contains samples at several different distances; when the lighting parameters are optimized in the field environment, the distance input of the RBF neural network used for attention parameter prediction is set to the actual fixed value and is not searched over.
It can be understood that in the solution of the present invention, all models related to attention factors are based on specific individuals, and therefore, the related data in the processes of generating network training samples, lookup tables, converting polynomials, multi-objective optimization processing, etc. are based on users with the same identity; for multiple users, one data set should be created and maintained for each user independently.
The invention is applied to the detection and prediction of learning attention under different light environments. After samples with sufficiently rich variation have been collected, and because the light color variation domain contains infinitely many combinations, the invention predicts how the attention parameters, including the eye opening and the gaze concentration, change with the accumulated learning time under different illumination conditions in various field environments. The predicted values are used in the attention evaluation of the candidate light color conditions searched during the multi-objective optimization of the light color parameters; the optimization result is mapped to the driving current values of the lamp group, and the lamp group driver drives the LED strings according to these current values, thereby realizing illumination control that favors high attention.
While the embodiments of the present invention have been described above, these embodiments are presented as examples and do not limit the scope of the invention. These embodiments may be implemented in other various ways, and various omissions, substitutions, and changes may be made without departing from the spirit of the invention. These embodiments and modifications are included in the scope and gist of the invention, and are also included in the invention described in the claims and the equivalent scope thereof.

Claims (10)

1. An attention-based lighting control system, comprising:
a user interface unit for entering parameters and initiating operations,
a lamp group adjustable in at least one of brightness, color temperature, color and illumination angle,
a light color sensing unit for collecting the illumination, color temperature and color of the illumination of the working surface,
an image collecting unit for collecting the face and working face area images of learners,
a heart rate collecting unit for collecting the heart rate of the learner,
a user identification unit for identifying the learner,
a control unit respectively connected with the user interface unit, the lamp group, the light color sensing unit, the image acquisition unit, the heart rate acquisition unit, the user identity identification unit and the dimming mapping unit,
wherein the control unit is configured to:
the processing module contained in the device processes the signals collected by the light color sensing unit to obtain 2 light color parameters including the illumination and the color temperature of the working surface, processes the signals collected by the image collecting unit to obtain the opening value of the eyes, the concentration value of the sight line and the change rate of the movement speed of the sight line of the learner, and obtains the heart rate and the heart rate change rate of the learner by reading the signals of the heart rate collecting unit,
3 parameters, namely the working surface illuminance, the color temperature and the continuous learning time, are used as input quantities, and 5 attention parameters of the learner, namely the eye opening value, the gaze concentration value, the gaze movement speed change rate, the heart rate and the heart rate change rate, are used as output quantities to establish an artificial neural network, the artificial neural network adopting an RBF neural network,
the dimming processing part sends dimming signals to the lamp group through the output module or the user interface unit, acquires a training sample set of the RBF neural network based on the photochromic sensing unit, the image acquisition unit and the heart rate acquisition unit for the changed luminous environment, trains the RBF neural network by using the sample set,
in the field environment, the lighting optimization processing part establishes a luminous environment evaluation function based on 5 attention parameters, predicts the attention parameter values of different users under different light color parameter conditions by using the trained RBF neural network corresponding to the users respectively, optimizes the illumination and the color temperature of a working surface in a spatial range in which the light color parameters of the field lamp group can be valued by a multi-objective optimization algorithm, and transmits the optimized result to the dimming mapping unit;
and the dimming mapping unit maps the optimization result into a driving current value of each driving current channel of the lamp group and transmits the current value to a driver in the lamp group.
2. The attention-based lighting control system of claim 1, wherein the lamp group includes two LED strings of high color temperature and low color temperature, each LED string corresponding to one driving current channel; the dimming mapping unit includes a lookup table from the light color space composed of working surface illuminance and color temperature to the two-channel driving current space, and for the optimization result (E0, K0) the two-channel driving current values are obtained by interpolation in the lookup table.
3. The attention-based lighting control system of claim 2, wherein the dimming mapping unit includes a lookup table from the light color space composed of working surface illuminance and color temperature to the two-channel driving current space, and for the optimization result (E0, K0) in the light color space the two-channel driving current values are obtained by interpolation in the lookup table;
first, the four points surrounding P(E0, K0) in the light color space are found: A(E1, K1), B(E2, K1), C(E1, K2) and D(E2, K2), where E1 ≤ E0 ≤ E2 and K1 ≤ K0 ≤ K2;
the optimization result is mapped to the two-channel current value (i01, i02) by interpolation, with the distances used as weights,
(interpolation formulas for i01 and i02, given as images in the original document)
wherein d1 is the shortest of the distances from P to the four points, d2 the second shortest, and so on, and dT is the sum of all the distances; i11 and i21 are respectively the current values of the two channels at the point with the shortest distance; the four points surrounding the point P being looked up are given different weights according to their distances, the nearest point receiving the largest weight.
4. The attention-based lighting control system of claim 1 wherein:
the lamp set has w drive current channels,
the light color parameters further comprise the xyz color coordinate values of the illumination color of the working surface; the artificial neural network is an RBF neural network which takes as input quantities 6 parameters in total, namely the 5 light color parameters (the working surface illuminance, the color temperature and the xyz color coordinate values of the color) and the continuous learning time,
the dimming mapping unit is replaced by a BP neural network established in the control unit, the BP neural network takes 5 light color parameters as input quantity and takes current values of w driving current channels as output quantity to establish an artificial neural network,
when the dimming processing part sends dimming signals to the lamp group through the output module or the user interface unit, the dimming processing part collects and processes photochromic signals of the changed light environment and records driving current values of w driving current channels corresponding to dimming so as to form a training sample set of the BP neural network,
in the field environment, the trained BP neural network maps the optimization results to drive current values of each drive current channel of the lamp group and transmits the current values to the drivers in the lamp group.
5. The attention-based lighting control system of claim 4, wherein the xyz color coordinate values of the color are replaced by RGB three-component values of the color, the lamp group is an LED lamp group, the driving current value of each LED lamp in the lamp group is adjusted by a driver, and the dimming signal is a PWM wave duty ratio value of the driving current of the LED lamp;
the image acquisition unit adopts a binocular camera; the processing module comprises an image processing section and a light color processing section; the image processing section comprises an eye opening detector and a gaze detector, and the light color processing section comprises an illuminance detector, a color temperature detector and a color detector.
6. The attention-based lighting control system of claim 4 wherein the model of the BP neural network is:
the output of the jth node of the hidden layer is
hj = f( Σi wij·xi − aj ),
The p-th node of the output layer outputs
yp = f( Σj vjp·hj − bp ),
wherein the f() function is the sigmoid function, wij and vjp are respectively the connection weights from the input layer to the hidden layer and from the hidden layer to the output layer, aj and bp are respectively the hidden layer and output layer thresholds, k is the number of hidden layer nodes, and network training is performed with a gradient descent method.
7. The attention-based lighting control system of claim 1 wherein the model of the RBF neural network is:
the output of the ith node of the hidden layer is as follows:
hi = exp( −||X − Ci||² / (2·σi²) ),
the output of the jth node of the output layer is as follows:
yj = Σ(i=1..p) wij·hi,
wherein the dimension of the input vector X is 6, the number of nodes of the hidden layer H is p, the dimension of the output vector Y is 5, Ci is the center of the Gaussian function of the ith hidden node, σi is the width of that Gaussian function, ||X − Ci|| is the Euclidean distance between the vectors X and Ci, and wij is the weight from the ith hidden node to the jth output node.
8. The attention-based lighting control system of claim 1, wherein the image capturing unit employs a camera mounted on a support opposite to the person in the work scene, the input unit includes a key indicating the current learning difficulty, and the neural network adds a learning difficulty factor input parameter;
the input unit also comprises a sampling canceling key, and the control unit suspends data sampling and sample recording after detecting that the key is pressed;
in the input unit, a sliding input device with a cursor is further provided, and the control unit is further configured to:
in the multi-target optimization algorithm processing process, after a total evaluation value F is calculated according to a luminous environment evaluation function, the evaluation value is adjusted according to the position of a cursor after a learner operates a sliding input device:
F'=F·(1+η·Δ),
(formulas given as images in the original document)
wherein E is the illumination of the light to be evaluated corresponding to the current individual, E0The current light intensity corresponds to the middle position of the slide input device, and the left and right end positions respectively correspond to E when the cursor slides towards the left and right sides of the slide input device00.9 and 1.1 times of illuminance, EnAnd delta is a set threshold value for grading adjustment according to the degree, eta is an adjustment coefficient, and F' are grading values before and after adjustment respectively.
9. An attention-based lighting control system, comprising: an image acquisition unit, a heart rate acquisition unit, a control unit, an input unit, an output unit, a storage unit and a dimmable lamp set,
the image acquisition unit acquires images of the face and the working face area of a learner, the heart rate acquisition unit acquires the heart rate of the learner, and the output unit is used for displaying signals and outputting attention factor values and dimming signals;
the control unit includes a processing module, an iterative learning module, a neural network module, and a connection switcher, and is configured to:
the processing module processes the signals acquired by the image acquisition unit to acquire an eye opening value, a sight concentration value and a sight movement rate of the learner, acquires the heart rate of the learner by reading the signals of the heart rate acquisition unit,
the neural network module takes as input quantities the w lighting parameters, namely the driving currents of the u LED strings and the v irradiation angles, plus the continuous learning time, and takes as output quantities 6 attention factor parameters, namely the learner's 3 sign parameters (eye opening, gaze concentration and heart rate) and their corresponding attention factor values, to establish an RBF neural network,
the light modulation processing part in the processing module sends out light modulation signals to the lamp group through the output unit or the user interface unit, obtains a training sample set of the RBF neural network for the changed light environment based on the image acquisition unit, the heart rate acquisition unit and the light modulation signals, trains the RBF neural network by using the sample set,
in the field environment, an illumination optimization processing part in a processing module establishes a luminous environment evaluation function based on 6 attention factor parameters, predicts the attention factor parameters of different users under different illumination parameter conditions by a trained RBF neural network corresponding to the users respectively, optimizes the driving current and the illumination angle of the LED string in a spatial range in which the illumination parameters of the field lamp group can be taken by a multi-objective optimization algorithm,
and outputting the drive current and the irradiation angle of the LED string obtained by optimization through a communication interface module of the output unit.
10. The attention-based lighting control system of claim 9, wherein at least one of the light properties of the lamp group, such as brightness, color temperature, color and illumination angle, is adjustable; the dimming signal is a PWM wave duty ratio value of the LED lamp driving current; the image acquisition unit adopts a binocular camera; the processing module comprises an image processing section and a light color processing section; the image processing section comprises an eye opening detector and a gaze detector, and the light color processing section comprises an illuminance detector, a color temperature detector and a color detector.
CN202011561377.6A 2019-04-02 2019-04-02 Attention factor-based lighting control system Withdrawn CN112672474A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011561377.6A CN112672474A (en) 2019-04-02 2019-04-02 Attention factor-based lighting control system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910263082.1A CN109905943B (en) 2019-04-02 2019-04-02 Illumination control device based on attention factor
CN202011561377.6A CN112672474A (en) 2019-04-02 2019-04-02 Attention factor-based lighting control system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910263082.1A Division CN109905943B (en) 2019-04-02 2019-04-02 Illumination control device based on attention factor

Publications (1)

Publication Number Publication Date
CN112672474A true CN112672474A (en) 2021-04-16

Family

ID=66954381

Family Applications (5)

Application Number Title Priority Date Filing Date
CN202011561380.8A Withdrawn CN112672475A (en) 2019-04-02 2019-04-02 Illumination control device based on attention factor
CN202011561132.3A Withdrawn CN112654116A (en) 2019-04-02 2019-04-02 Illumination control method based on attention factor
CN201910263082.1A Active CN109905943B (en) 2019-04-02 2019-04-02 Illumination control device based on attention factor
CN202011561377.6A Withdrawn CN112672474A (en) 2019-04-02 2019-04-02 Attention factor-based lighting control system
CN202011561378.0A Withdrawn CN112654117A (en) 2019-04-02 2019-04-02 Attention factor-based lighting control method, dimming mapping unit and application method thereof

Family Applications Before (3)

Application Number Title Priority Date Filing Date
CN202011561380.8A Withdrawn CN112672475A (en) 2019-04-02 2019-04-02 Illumination control device based on attention factor
CN202011561132.3A Withdrawn CN112654116A (en) 2019-04-02 2019-04-02 Illumination control method based on attention factor
CN201910263082.1A Active CN109905943B (en) 2019-04-02 2019-04-02 Illumination control device based on attention factor

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202011561378.0A Withdrawn CN112654117A (en) 2019-04-02 2019-04-02 Attention factor-based lighting control method, dimming mapping unit and application method thereof

Country Status (1)

Country Link
CN (5) CN112672475A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114925753A (en) * 2022-04-28 2022-08-19 南通东升灯饰有限公司 Use abnormity alarm system of LED floor lamp
CN116095915A (en) * 2023-04-10 2023-05-09 南昌大学 Dimming method and system based on human body thermal comfort
CN116887467A (en) * 2023-07-18 2023-10-13 江苏英索纳通信科技有限公司 Lamp light mixing method and system based on multicolor full-spectrum dimming technology

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110403604B (en) * 2019-07-03 2022-02-18 陈琦 Method for constructing environment space and training attention based on attention concentration degree
CN110458030A (en) * 2019-07-15 2019-11-15 南京青隐信息科技有限公司 A kind of method of depth self study adjustment user's attention of fresh air bookshelf
CN110415653B (en) * 2019-07-18 2022-01-18 昆山龙腾光电股份有限公司 Backlight brightness adjusting system and method and liquid crystal display device
CN112074053A (en) * 2020-08-24 2020-12-11 中国建筑科学研究院有限公司 Lighting equipment regulation and control method and device based on indoor environment parameters

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2496661C (en) * 2004-02-19 2009-05-19 Oz Optics Ltd. Light source control system
CN103081571B (en) * 2010-08-27 2015-04-01 皇家飞利浦电子股份有限公司 Automatically configuring of a lighting
JP6695021B2 (en) * 2015-11-27 2020-05-20 パナソニックIpマネジメント株式会社 Lighting equipment
CN108591868B (en) * 2018-03-27 2020-06-26 中国地质大学(武汉) Automatic dimming desk lamp based on eye fatigue degree
CN108712809B (en) * 2018-05-18 2019-12-03 浙江工业大学 A kind of luminous environment intelligent control method neural network based
CN108882480B (en) * 2018-06-20 2020-06-05 新华网股份有限公司 Stage lighting and device adjusting method and system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114925753A (en) * 2022-04-28 2022-08-19 南通东升灯饰有限公司 Use abnormity alarm system of LED floor lamp
CN116095915A (en) * 2023-04-10 2023-05-09 南昌大学 Dimming method and system based on human body thermal comfort
CN116887467A (en) * 2023-07-18 2023-10-13 江苏英索纳通信科技有限公司 Lamp light mixing method and system based on multicolor full-spectrum dimming technology
CN116887467B (en) * 2023-07-18 2024-03-22 江苏英索纳通信科技有限公司 Lamp light mixing method and system based on multicolor full-spectrum dimming technology

Also Published As

Publication number Publication date
CN109905943B (en) 2021-01-08
CN112654116A (en) 2021-04-13
CN112672475A (en) 2021-04-16
CN112654117A (en) 2021-04-13
CN109905943A (en) 2019-06-18

Similar Documents

Publication Publication Date Title
CN109905943B (en) Illumination control device based on attention factor
CN109949193B (en) Learning attention detection and prejudgment device under variable light environment
CN112533317B (en) Scene type classroom intelligent illumination optimization method
CN110163371B (en) Dimming optimization method for sleep environment
CN109890105B (en) Open office lighting system and control method
CN110113843B (en) Lighting control system based on sleep efficiency factor
CN110960036A (en) Intelligent mirror system and method with skin and makeup beautifying guide function
CN110062498B (en) Public dormitory mixed lighting system and method based on partition controllable ceiling lamp
CN109998497B (en) Sleep-in detection and judgment system in luminous environment
CN110013231A (en) Sleep environment illumination condition discrimination method and reading face light measuring method
CN112566333A (en) Public dormitory mixed lighting system based on isotropic symmetry ceiling lamp
CN113297966A (en) Night learning method based on multiple stimuli
CN115272645A (en) Multi-mode data acquisition equipment and method for training central fatigue detection model
CN117373075A (en) Emotion recognition data set based on eye feature points and eye region segmentation results
CN113297968A (en) Learning auxiliary system for multiple stimulation at night

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210416