CN109949193B - Learning attention detection and prejudgment device under variable light environment - Google Patents

Info

Publication number
CN109949193B
CN109949193B
Authority
CN
China
Prior art keywords
value
attention
color
sight
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910263070.9A
Other languages
Chinese (zh)
Other versions
CN109949193A (en
Inventor
邹细勇
张维特
井绪峰
陈亮
杨凯
Current Assignee
China Jiliang University Shangyu Advanced Research Institute Co Ltd
Original Assignee
China Jiliang University Shangyu Advanced Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by China Jiliang University Shangyu Advanced Research Institute Co Ltd filed Critical China Jiliang University Shangyu Advanced Research Institute Co Ltd
Priority to CN202011437396.8A priority Critical patent/CN112949372A/en
Priority to CN201910263070.9A priority patent/CN109949193B/en
Priority to CN202011437459.XA priority patent/CN112464863A/en
Priority to CN202011437412.3A priority patent/CN112949373A/en
Priority to CN202011434362.3A priority patent/CN112651303A/en
Publication of CN109949193A publication Critical patent/CN109949193A/en
Application granted granted Critical
Publication of CN109949193B publication Critical patent/CN109949193B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/70 Multimodal biometrics, e.g. combining information from different biometric modalities
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G06Q50/205 Education administration or guidance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Strategic Management (AREA)
  • Evolutionary Computation (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Educational Administration (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Business, Economics & Management (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Educational Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biomedical Technology (AREA)
  • Marketing (AREA)
  • Human Computer Interaction (AREA)
  • Probability & Statistics with Applications (AREA)
  • Primary Health Care (AREA)
  • Multimedia (AREA)
  • Development Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)

Abstract

The invention discloses a device and method for detecting and pre-judging learning attention under a variable light environment. The device comprises a light color sensing unit, an image acquisition unit, a heart rate acquisition unit, a control unit and a user interface unit. First, an artificial neural network is established that takes 6 parameters as inputs: the working-surface illuminance, the color temperature, the xyz color coordinates of the light, and the continuous learning time; its outputs are the attention factor values, obtained after filtering, fitting and evaluation, of parameters such as the learner's eye opening degree, sight concentration, heart rate and sight movement rate. Second, after the working-surface block is adjusted to the desired position, the lamp-group currents are varied, samples under the changed light color combinations are collected, and the neural network is trained. Finally, the trained network predicts the learner's attention parameters under new, untested lighting conditions, so that light-environment evaluation prompts can be given to the learner and a basis is provided for recommending potentially high-attention light environments.

Description

Learning attention detection and prejudgment device under variable light environment
Technical Field
The invention relates to the field of intelligent illumination and learning assistance, in particular to a learning attention detection and prejudgment device in a variable light environment.
Background
People acquire information from the outside mainly through vision and respond to it quickly; work and learning efficiency is therefore directly limited by the lighting conditions of the environment, and even the maintenance of basic visual function depends on lighting.
Many aspects of ambient lighting affect vision, the most important being the illuminance level, the luminance distribution, color appearance and light-and-shade rendering, each of which affects operating efficiency to a different degree. The visual effort a task demands and the time spent on it affect the degree of visual fatigue, which in turn affects working efficiency.
Under different light environments, the working efficiency of personnel differs. Since the discovery of a third class of photoreceptor cells on the human retina, the intrinsically photosensitive retinal ganglion cells, it has been shown that these cells trigger a cascade of chemical and biological responses to visible radiation entering the eye and thereby regulate the human circadian rhythm, biological clock and pupil size, influencing human physiology and psychology. The physical characteristics of a light environment include luminous flux, illuminance, glare, luminance, spectrum and the like. The illuminance level is considered one of the main factors affecting the visual organs and working efficiency, and the spectral color temperature also plays an important role.
Working efficiency is generally defined as the ratio of output to input over a period of time. As society moves into the information age, the nature of work gradually changes and mental labor contributes ever more to social productivity. The ranks of knowledge workers keep growing, yet evaluating their work is harder than evaluating physical labor: mental labor turns tangible operations into intangible ones, the worker's tool becomes thought itself, and the demands labor places on a person shift from physiological to psychological. For a construction worker, efficiency can be measured by the number of bricks laid per unit of working time; but for those engaged in creative work, such as technical developers, how can work output be measured quantitatively so that efficiency can be evaluated?
To study the mechanism by which illumination affects working efficiency, many scholars have approached the problem both theoretically and experimentally. For example, in the doctoral dissertation of lany (Shanghai, 2010) on the influence mechanism and evaluation of the indoor environment on staff work efficiency, a climate-chamber simulated office was used to test subjects, and the light environment was evaluated through subjective questionnaires and physiological measurements; the results showed that illuminance that is too low has a negative influence on work efficiency, that illuminance that is too high may not benefit long-term work, and that an optimal illuminance level for work efficiency should exist.
In existing research, work efficiency is inferred from the completion speed of tasks such as arithmetic or figure recognition; such methods carry a degree of subjectivity and lack individual specificity.
The research aiming at human objects, environment as media and work efficiency output relates to a multi-dimensional research method. Many previous studies have yielded inconsistent and even contradictory results, in part because some of the evaluation criteria are subjective evaluations such as subjective questionnaires, and there are experience differences between individuals.
Therefore, there is a need for a device for associating various light environment influences with the work and learning efficiency of operators through objective detection, and a method for detecting and predicting factors related to the learning efficiency of the operators in different light environments.
Disclosure of Invention
The invention aims to provide a device and method for detecting and judging individual attention under various illumination conditions, with generalization capability strong enough to predict, for untested illumination conditions, what level of attention the individual would show under them. This provides grounds for recommending potentially high-attention light environments to the individual.
Tests, comparisons and analyses of various tasks show that attention can be detected and evaluated more directly than task-performance efficiency, and the various factors associated with attention can be measured objectively. When attention is focused, learning efficiency is usually higher; the eyes are wide open, the gaze stays on the working surface, and the heart rate is steady. Conversely, when attention lapses through fatigue or other causes, the eyes gradually close and the opening degree decreases, the gaze drops or drifts off the working surface, the heart rate slows, and yawning sometimes appears. A person's attention can therefore be detected objectively by capturing the state of the face.
In desktop learning under different lighting conditions, the difference in attention of learners includes not only slowly changing eye openness, but also the range of sight points, heart rate fluctuation, sight line movement rate and other physical signs.
And what constraint relationship exists between the lighting environment and attention, which is a complex non-linear problem. To describe the mapping between them, a suitable mapping network is needed. The neural network has self-organizing and self-learning capabilities, can directly receive data and learn, and is widely applied to the field of pattern recognition within a short time. As one kind of artificial neural network, the RBF network can approximate any nonlinear function, can process the regularity which is difficult to analyze in the system, has good generalization capability and fast learning convergence speed, and has been successfully applied to the fields of nonlinear function approximation, pattern recognition, information processing and the like.
To address the prior-art limitation that the influence of illumination conditions on learning efficiency can only be evaluated through task experiments or subjective scoring, the invention collects the learner's physical-sign data through sensors and uses parameters such as eye opening degree, sight concentration, heart rate and sight movement rate as attention factors, so that the learner's attention in the light environment can be evaluated. Performing attention assessment on such vital-sign sensing data raises several questions. First, how should the sampled data be quantified so that different levels of attention can be distinguished? Second, how should successive data samples be associated, and how can their course of change be used to decide whether attention is focused?
The scheme of the invention is that signals of several human body characteristics related to learning attention are collected through a device, then the signals are filtered and trend extracted, and probability distribution of characteristic data under normal conditions is obtained through statistics, so that accurate attention factor evaluation is obtained through comparing the value and the change trend of a sample data sequence with the characteristics counted in a priori mode. Then, changing the illumination condition, collecting sign data samples of the learner under different adjustments, and establishing a learning attention detection and pre-judgment model under the variable light environment based on the nonlinear mapping theory and processing calculation.
The complex nonlinear mapping between illumination conditions and human attention is modeled by a neural network, where the illumination conditions comprise the working-surface illuminance, the color temperature and the xyz color coordinates of the light, and attention is represented by parameters such as the eye opening value, the sight concentration value, the heart rate and the sight movement rate. Since attention is also influenced by accumulated working or learning time, the neural network takes the above 5 light color parameters together with the continuous learning time, 6 quantities in all, as inputs and the attention parameters as outputs. An RBF network is adopted: after enough samples are collected, the number of hidden-layer nodes and their central vectors are determined by the K-means clustering algorithm, and the hidden-to-output weights are corrected by gradient descent so as to minimize the error between the actual output values of the training samples and the network outputs.
Specifically, the invention provides a device for learning attention detection and prejudgment under a variable light environment with the following structure, which comprises a light color sensing unit, an image acquisition unit, a heart rate acquisition unit, a control unit and a user interface unit;
the light color sensing unit is used for acquiring the illumination, color temperature and color of illumination of a working surface, the image acquisition unit is used for acquiring images of the face and the working surface area of a learner, the heart rate acquisition unit is used for acquiring the heart rate of the learner, the user interface unit is used for performing parameter input and key operation, and the output module in the control unit is used for performing signal display and outputting an attention factor value;
the control unit further comprises a processing module, an iterative learning module, a neural network module, a connection switcher, and a storage module, and is configured to:
the processing module processes the signals acquired by the light color sensing unit to acquire 5 light color parameters including the illumination, the color temperature and the xyz color coordinate value of the color of the working surface, processes the signals acquired by the image acquisition unit to acquire the eye opening value, the sight concentration value and the sight movement rate of the learner, and acquires the heart rate of the learner by reading the signals of the heart rate acquisition unit, wherein the sight concentration is the sight offset distance, namely the shortest distance from the intersection point of the current sight and the working surface to the preset working surface block,
the neural network module takes 6 parameters of working surface illumination, color temperature, xyz color coordinate value of color and continuous learning time as input quantities, takes attention factor values of 3 characteristic parameters of the learner's eye opening degree, sight concentration degree and heart rate for representing the attention factors as output quantities, establishes an artificial neural network,
the iterative learning module acquires 3 output quantity actual values corresponding to the training sample from the processing module through the connecting switcher respectively, acquires 3 mapping values of 6 input quantities corresponding to the training sample after being processed by the neural network from the neural network, adjusts the neural network structure parameters according to the 3 output quantity actual values and the 3 mapping values to train the neural network, and repeats the training until the training is completed,
during online prediction, the neural network predicts attention factor values of physical parameters such as eye opening, sight concentration, heart rate and the like of a learner based on the illumination of the current working surface, color temperature, xyz color coordinate values of colors and continuous learning time, and outputs the values to the output module through the processing module;
the storage module is used for recording and storing data such as neural network structure parameters, iterative learning parameters, calculation process values and the like.
Preferably, the attention factor values of the 3 individual characteristic parameters for characterizing the attention factor are obtained by processing as follows:
firstly, for the raw eye-opening sequence de, a window-average filter is applied to obtain the eye opening e at the current time t:
e(t) = (1/L) · Σ_{i=0}^{L-1} de(t − i)
then, a down-sampled sequence Xe of the eye opening is obtained by moving the window at fixed intervals,
Xe = {e(0), e(Ts), e(2Ts), ...},
next, the sequence Xe is fitted with the function y = a·e^(−b·x), and the opening-degree change time tu, the time the fitted curve takes to fall from threshold E1 to threshold E2, is obtained from the fitted function:
tu = (1/b) · ln(E1/E2)
wherein L is the window width, Ts is the down-sampling interval, a and b are both fitting coefficients, and E1 and E2 are two thresholds of the eye opening degree; for the normalized eye-opening value sequence, the values of E1 and E2 are between 0 and 1;
calculating a first and a second volume characteristic value of the eye opening according to the eye opening e and the opening change time tu,
[equation image: ke1, a piecewise function of the eye opening e defined by the bounds ae, be, ce, de]
[equation image: ke2, a piecewise function of the change time tu defined by the bounds atu, btu]
wherein be and ce are lower limit value and upper limit value of the interval which is obtained according to statistics and covers the eye opening value with the set proportion in the normal state, ae and de are the other two preset lower limit value and upper limit value respectively; btu is an upper limit value of eye opening change time covering a set proportion in a current continuous learning time range in a normal state, and atu is a set lower limit value;
the attention factor value for calculating the eye opening is,
ke=ke1·ke2;
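The filtering, down-sampling, fitting and evaluation steps above can be sketched in code. This is a minimal illustration only: the window filter, down-sampling and exponential fit follow the text, while the closed-form expression for tu and the trapezoidal membership used for the characteristic values ke1 and ke2 are assumptions standing in for the patent's equation images (all function names are illustrative).

```python
import math

def window_filter(de, L):
    """Window-average filter: e(t) = mean of the last L raw opening samples."""
    return [sum(de[max(0, t - L + 1): t + 1]) / min(t + 1, L) for t in range(len(de))]

def downsample(e, Ts):
    """Down-sampled sequence Xe = {e(0), e(Ts), e(2Ts), ...}."""
    return e[::Ts]

def fit_exp_decay(xs, ys):
    """Fit y = a*exp(-b*x) by least squares on log(y) (assumes all y > 0)."""
    ly = [math.log(y) for y in ys]
    n, sx, sy = len(xs), sum(xs), sum(ly)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ly))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = math.exp((sy - slope * sx) / n)
    return a, -slope  # y = a*exp(-b*x) with b = -slope

def opening_change_time(a, b, E1, E2):
    """Assumed reading of tu: time for a*exp(-b*x) to fall from E1 to E2."""
    return math.log(E1 / E2) / b

def trapezoid(x, lo_out, lo_in, hi_in, hi_out):
    """Assumed trapezoidal membership: 1 on [lo_in, hi_in], 0 outside
    (lo_out, hi_out), linear ramps in between (stand-in for ke1/ke2)."""
    if x <= lo_out or x >= hi_out:
        return 0.0
    if x < lo_in:
        return (x - lo_out) / (lo_in - lo_out)
    if x > hi_in:
        return (hi_out - x) / (hi_out - hi_in)
    return 1.0
```

With ke1 and ke2 both modelled this way, the attention factor of the eye opening is simply their product, ke = ke1 * ke2, as in the text.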
secondly, detecting the intersection point of the learner's sight line and the working surface, if the intersection point falls outside the range of the preset working surface block, calculating the shortest distance from the intersection point to the working surface block and recording the time length of the corresponding sight point continuously exceeding the preset range, for the distance sequence dd, obtaining the current sight line offset distance d through window average filtering, and simultaneously calculating the maximum time length td of the sight point continuously exceeding the preset range in the corresponding window time range,
calculating a first body characteristic value and a second body characteristic value of the sight concentration degree according to the distance d and the time length td,
[equation image: kd1, a function of the sight offset distance d with fitting coefficients a and b]
[equation image: kd2, a function of the out-of-range duration td with parameters Td and sigma]
the method comprises the following steps that a and b are fitting coefficients, Td is the maximum time length of a viewpoint continuously exceeding a preset range, which covers a set proportion in a current continuous learning time range in a normal state, and sigma is a preset time width value;
the attention factor value for calculating the gaze concentration is,
kd=kd1·kd2;
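The sight offset distance d, defined in the text as the shortest distance from the gaze point to the preset working-surface block, can be computed directly when the block is an axis-aligned rectangle; a small sketch (the rectangle parameterization is an assumption for illustration):

```python
def offset_distance(px, py, x0, y0, x1, y1):
    """Shortest distance from gaze point (px, py) to the axis-aligned
    work-surface block [x0, x1] x [y0, y1]; 0 when the point is inside."""
    dx = max(x0 - px, 0.0, px - x1)  # horizontal overshoot, 0 if inside
    dy = max(y0 - py, 0.0, py - y1)  # vertical overshoot, 0 if inside
    return (dx * dx + dy * dy) ** 0.5
```

Applied to the filtered distance sequence dd, this yields the d fed into kd1 above.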
thirdly, setting an up-and-down fluctuation interval for the heart rate data sequence according to the heart rate expected value in the normal state, counting the times N that the data fluctuation exceeds the fluctuation interval range within a preset time length with the current time as the center, and the number of samples Rb of the heart rate within the interval range within the preset time length,
N = N+ + N−
wherein N+ is the number of times the heart rate crosses out of the interval and N− is the number of times it crosses back into the interval;
respectively calculating a first body characteristic value and a second body characteristic value of the heart rate according to the times N and the ratio Rb,
[equation image: kb1, a function of the fluctuation count N with parameters TN and sigma_N]
[equation image: kb2, a function of the in-interval sample ratio Rb with thresholds aRb and bRb]
the method comprises the following steps that TN is the maximum number of times that a preset proportion is covered in a current continuous learning time range and a heart rate exceeds a fluctuation interval range in a normal state, sigma N is a preset width value, and aRb and bRb are two proportion threshold values set according to statistics;
the attention factor value for calculating the heart rate is,
kb=kb1·kb2.
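The fluctuation-counting step can be sketched as follows; a minimal illustration assuming a symmetric band of half-width delta around the expected heart rate mu, with N+ counted as exits from the band and N− as re-entries, per the text:

```python
def heart_rate_stats(hr, mu, delta):
    """Count band-exit (N+) and band-re-entry (N-) events for the band
    [mu - delta, mu + delta], plus the fraction Rb of samples inside it."""
    n_out = n_in = inside_count = 0
    prev_inside = None
    for h in hr:
        inside = (mu - delta) <= h <= (mu + delta)
        if inside:
            inside_count += 1
        if prev_inside is not None:
            if prev_inside and not inside:
                n_out += 1          # crossed out of the interval
            elif not prev_inside and inside:
                n_in += 1           # crossed back into the interval
        prev_inside = inside
    return n_out + n_in, inside_count / len(hr)  # N, Rb
```

N and Rb then feed the kb1 and kb2 characteristic values, and kb = kb1 * kb2.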
preferably, the device also comprises a working area setting unit for presetting the working area blocks,
the working area setting unit is supported at the top end of a bracket through a pivot positioned in the center, four triangular adjusting plates which are symmetrically distributed are movably connected on the pivot, a first adjusting shaft is connected between the left adjusting plate and the right adjusting plate, a second adjusting shaft is connected between the front adjusting plate and the rear adjusting plate, rectangular lamp grooves are respectively arranged at the bottom edges of the four adjusting plates,
the rectangular lamp groove emits strip-shaped light spots, and the control unit changes the inclination angle of the adjusting plate relative to the horizontal plane through the first adjusting shaft and the second adjusting shaft, so that a rectangular area is defined on the horizontal plane of the working surface through the four strip-shaped light spots and serves as a preset working surface block;
the user interface unit is provided with a learning mode key, when the user interface unit is selected to be a reading mode through the learning mode key, the output quantity of the neural network is increased by an attention factor value of a sight line movement rate sign parameter for representing the attention factor, and the calculation process is as follows:
detecting the intersection point of the learner's sight line and the working surface within a preset time length Tp, solving a circumscribed rectangle for the set of the intersection points falling within the range of the preset working surface block, calculating the sight line moving rate according to the length X and the width Y of the rectangle,
[equation image: the sight movement rate vs computed from the rectangle length X, width Y and time length Tp]
then, the attention factor value for calculating the line-of-sight movement rate is,
[equation image: kvs, a function of the sight movement rate with thresholds avs and bvs]
wherein, avs and bvs are two speed thresholds respectively set according to statistics.
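A sketch of the sight-movement-rate computation: the circumscribed (bounding) rectangle of the in-block gaze points is taken over the window Tp. Since the patent's rate formula is an equation image, the combination (X + Y) / Tp used here is only an assumed stand-in:

```python
def sight_movement_rate(points, Tp):
    """Bounding-rectangle length X and width Y of the gaze points inside the
    work block, combined as an assumed rate (X + Y) / Tp over window Tp."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return ((max(xs) - min(xs)) + (max(ys) - min(ys))) / Tp
```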
Preferably, the image acquisition unit adopts a binocular camera, the processing module comprises an image processing part and a light color processing part, the image processing part comprises an eye opening detector and a sight line detector, and the light color processing part comprises an illuminance detector, a color temperature detector and a color detector;
the attention factor value is calculated by the image processing section.
Preferably, the image processing unit further comprises a mouth shape detector, the output quantity of the neural network is increased by a mouth opening degree sign parameter for representing an attention factor, and the attention factor value of the mouth opening degree is the product of the mouth opening degree sign value and a continuous mouth opening length sign value,
the mouth opening degree sign value is obtained through calculation according to a semi-normal distribution function with zero opening degree as a vertex, and the mouth continuous opening duration sign value is obtained through calculation according to another semi-normal distribution function with zero duration as a vertex.
Preferably, the camera used by the image acquisition unit is mounted on a support opposite to a person in a working scene, the user interface unit comprises a key for indicating the current learning difficulty, and the neural network is added with a learning difficulty coefficient input parameter.
Preferably, the support is further provided with an infrared auxiliary light source, and the output module in the control unit comprises a display bar for indicating the current concentration degree of the learner.
Preferably, the apparatus comprises a base plate, a plurality of calibration blocks with known positions are distributed on the surface of the base plate, each calibration block has a circular light spot, the user interface unit has a calibration confirmation key, and the control unit is further configured to:
and the calibration blocks are lightened in turn, the learner watches the lightened calibration blocks, the image of the face of the learner is collected through the image collection unit after the calibration confirmation key is pressed, the sight line direction of the human eyes is extracted based on the collected image, and the extraction result is compared with the position of the calibration blocks so as to calibrate the sight line direction detection parameters.
Preferably, the user interface unit includes a cancel sampling key, and the control unit suspends data sampling and sample recording after detecting that the key is pressed.
Preferably, the neural network adopts an RBF neural network, and the model of the RBF neural network is as follows:
the output of the ith node of the hidden layer is as follows:
hi = exp(−||X − Ci||^2 / (2·σi^2)), i = 1, 2, ..., p
the output of the jth node of the output layer is as follows:
yj = Σ_{i=1}^{p} wij · hi, j = 1, 2, ..., n
wherein the dimension of the input vector X is m, the number of hidden-layer H nodes is p, and the dimension of the output vector Y is n; Ci is the center of the Gaussian function of the ith hidden node, σi is the width of the Gaussian function, ||X − Ci|| is the Euclidean distance between the vectors X and Ci, and wij is the weight from the ith hidden node to the jth output node;
wherein σi can be determined by the following equation:
σi = Di / √(2p)
in which Di is the maximum distance between the center of the ith hidden node and the other centers.
When the variation of illuminance, color temperature and color components among the light color parameters of the training samples is insufficient, each sample X is taken as the central vector Ci of a hidden-layer node; as the samples accumulate, the number of hidden-layer nodes and their respective central vectors Ci are determined by the K-means clustering algorithm.
The sample data is firstly normalized and mapped into a [0, 1] numerical value space. The performance index function of the network approximation, i.e. the total average error function, is:
E = (1/N) · Σ_{k=1}^{N} ||Yk − Ŷk||^2
wherein N is the total number of samples in the training sample set, k is the sample index, Ŷk is the actual network output for the input Xk, and Yk is the desired output for Xk. During RBF network training, the parameters are adjusted so that the network approaches the target mapping in the least-squares sense, i.e. so that E is minimized; to this end, a gradient descent method can be used to correct the weights from the hidden layer to the output layer.
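The RBF model described above, K-means selection of the hidden-node centers Ci, the width heuristic for σi, and gradient-descent correction of the hidden-to-output weights wij, can be sketched as follows. The exact σi formula and the learning rate are assumptions consistent with the text; this is a minimal illustration, not the patent's implementation:

```python
import math
import random

def dist(a, b):
    """Euclidean distance ||a - b|| between two input vectors."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def mean(group):
    """Component-wise mean of a group of vectors."""
    return tuple(sum(x[k] for x in group) / len(group) for k in range(len(group[0])))

def kmeans(samples, p, iters=20, seed=0):
    """K-means clustering to pick p hidden-node centers Ci from the inputs."""
    centers = random.Random(seed).sample(samples, p)
    for _ in range(iters):
        groups = [[] for _ in range(p)]
        for x in samples:
            groups[min(range(p), key=lambda i: dist(x, centers[i]))].append(x)
        centers = [mean(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers

class RBF:
    def __init__(self, centers, n_out):
        p = len(centers)
        self.C = centers
        # Width heuristic: sigma_i = D_i / sqrt(2p), with D_i the largest
        # distance from center i to the other centers (an assumption).
        self.sigma = [max(dist(c, d) for d in centers) / math.sqrt(2 * p)
                      for c in centers]
        self.W = [[0.0] * n_out for _ in range(p)]  # hidden-to-output weights w_ij

    def hidden(self, x):
        # h_i = exp(-||x - C_i||^2 / (2 sigma_i^2))
        return [math.exp(-dist(x, c) ** 2 / (2 * s ** 2))
                for c, s in zip(self.C, self.sigma)]

    def forward(self, x):
        # y_j = sum_i w_ij * h_i
        h = self.hidden(x)
        return [sum(h[i] * self.W[i][j] for i in range(len(h)))
                for j in range(len(self.W[0]))]

    def train(self, X, Y, lr=0.5, epochs=200):
        """Per-sample gradient descent minimizing the squared output error."""
        for _ in range(epochs):
            for x, y in zip(X, Y):
                h = self.hidden(x)
                out = self.forward(x)
                for i in range(len(h)):
                    for j in range(len(out)):
                        self.W[i][j] += lr * (y[j] - out[j]) * h[i]
```

With a handful of 1-D samples the network quickly learns to reproduce the endpoints, e.g. training on inputs {0, 0.1, 0.9, 1} with matching targets.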
Meanwhile, the invention also provides a learning attention detection and prejudgment method under the variable light environment, which comprises the following steps:
s1, establishing an artificial neural network in the control unit, wherein the neural network takes 6 parameters of the illumination intensity, the color temperature, the xyz color coordinate value of the color and the continuous learning time of the working surface as input quantities, and takes the attention factor values of 3 individual characteristic parameters of the eye opening degree, the line of sight concentration degree and the heart rate of the learner representing the attention factors as output quantities, wherein the line of sight concentration degree is the line of sight offset distance, namely the shortest distance from the intersection point of the current line of sight and the working surface to the preset working surface block;
s2, the processing module processes the signals collected by the light color sensing unit to obtain 5 light color parameters including working surface illuminance, color temperature and xyz color coordinate values of the color, processes the signals collected by the image collecting unit to obtain an eye opening value, a sight concentration value and a sight movement rate of the learner, obtains the heart rate of the learner by reading the signals of the heart rate collecting unit, and obtains respective attention factor values by respectively carrying out preprocessing such as filtering, normalization and the like on the signals according to the value intervals of each parameter and then evaluating and quantizing the preprocessed values;
s3, sending a dimming signal to the dimmable lamp set through an output module of the control unit or a user interface unit, carrying out signal acquisition on the changed luminous environment based on the light color sensing unit, the image acquisition unit and the heart rate acquisition unit, and then carrying out signal processing according to the method of the step S2;
s4, repeating the step S3 for multiple times, obtaining a training sample set of the neural network, and training the artificial neural network by using the sample set;
s5, on the basis of the trained neural network, in the field environment, the attention of the learner in the current light environment is predicted on line:
and predicting attention factor values of physical parameters such as the eye opening degree, the sight concentration degree, the heart rate and the like of the learner based on the acquired field working surface illumination, the color temperature, the xyz color coordinate value of the color and the input continuous learning time, and outputting the result through an output module.
Preferably, the step S1 further includes:
presetting a working face block by a working area setting unit;
setting a learning mode key in a user interface unit, and adding an attention factor value of a sight line movement rate sign parameter for representing an attention factor in the output quantity of the neural network when the learning mode key is selected as a reading mode;
accordingly, in the step S2, the attention factor value of the line of sight movement rate is also calculated.
Preferably, in the process of acquiring the training sample set, the training samples cover a sufficient range of illumination states, wherein the sampling points are arranged sparsely near the end-value regions of each light color variable and more densely in the middle region, such as the region around a color temperature of 4500 K and an illuminance of 300 lx to 500 lx.
Preferably, the image acquisition unit adopts a binocular camera, the processing module comprises an image processing part and a light color processing part, the image processing part comprises an eye opening detector, a sight line detector and a mouth detector, and the light color processing part comprises an illuminance detector, a color temperature detector and a color detector;
the step S2 includes the following processing procedures:
the eye opening detector acquires an eye opening value by calculating a human eye height-width ratio in the face region with respect to the acquired image,
the sight line detector judges the three-dimensional sight line direction of the learner by acquiring a three-dimensional coordinate vector formed by an eye pupil and a purkinje spot in the face area, and then the image processing part calculates the sight line concentration value according to the intersection point of the sight line and a working plane;
the illumination detector, the color temperature detector and the color detector respectively detect the illumination, the color temperature and the xyz color coordinate value of the color of the working surface.
Preferably, the image processing unit further includes a mouth detector for detecting a feature of the mouth opening degree of the mouth region,
in step S1, a mouth opening sign parameter for characterizing attention factor is further added to the output of the neural network,
accordingly, in step S2, the attention factor value of the mouth opening is calculated as the product of the sign value of the mouth opening and the sign value of the continuous opening time of the mouth,
The mouth opening degree sign value is calculated from a half-normal distribution function whose peak is at zero opening, and the mouth continuous opening duration sign value is calculated from another half-normal distribution function whose peak is at zero duration.
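Under the half-normal forms described above, the mouth attention factor can be sketched as follows (the width parameters are illustrative assumptions, not values from the patent):

```python
import math

def half_normal(x, sigma):
    """Half-normal evaluation peaking at x = 0 with value 1."""
    return math.exp(-x * x / (2 * sigma * sigma))

def mouth_attention_factor(opening, duration, s_open=0.3, s_dur=2.0):
    """Attention factor of the mouth: product of the opening sign value
    and the continuous-opening-duration sign value (widths are assumed)."""
    return half_normal(opening, s_open) * half_normal(duration, s_dur)
```

A closed mouth held closed yields the maximum factor of 1; wider or longer openings (e.g. yawning) push the factor toward 0.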
Preferably, the step S1 further includes the steps of: the method comprises the steps of lighting a plurality of calibration blocks located at known positions on a working surface in turn, collecting a face image of a learner through an image collecting unit after the learner watches the lighted calibration blocks and presses keys for confirmation, extracting the sight line direction of human eyes based on the collected image, and comparing the extraction result with the positions of the calibration blocks so as to calibrate the sight line direction detection parameters.
Preferably, in the training sample collection process, data sampling and sample recording can be suspended by pressing a sampling cancel button;
the neural network increases an input quantity taking the learning difficulty coefficient as a parameter, and the learner inputs the difficulty coefficient through a key or a sliding bar in the user interface unit;
the step S5 further includes: the output module is based on the display screen and adopts a plurality of independent display bars to respectively display the parameter values of the attention.
Preferably, the lamp group is an LED lamp group, the driving current value of each LED lamp in the lamp group is adjusted by a dimmer, and the dimming signal is a PWM wave duty ratio value of the driving current of the LED lamp;
The user interface unit inputs a dimming instruction through a set of light color adjusting modules consisting of a color coarse adjustment knob, a color fine adjustment knob and a brightness adjustment knob, and the step S3 performs the dimming operation through the light color adjusting module.
Compared with the prior art, the scheme of the invention has the following advantages: the illumination condition is represented by the illuminance value, the color temperature and the xyz color coordinate values of the color of the working surface, and the attention is represented by the eye opening value, the sight concentration value, the heart rate, the sight movement rate and the like; this multi-factor quantization objectively distinguishes the attention of a learner, and each parameter is automatically extracted by the control unit after signal acquisition by the light color sensing unit, the image acquisition unit or the heart rate acquisition unit. A nonlinear network is adopted to model the mapping relation between the environmental illumination condition and the attention of the person, and the trained network can predict the attention of the person in a variable light environment, so that light environment evaluation prompts can be given to the person and a basis is provided for recommending potentially high-attention light environments.
Drawings
FIG. 1 is a block diagram of a learning attention detection and prediction device and system under a variable light environment;
FIG. 2 is a view showing a constitution of a control unit; FIG. 3 is a block diagram of the processing module;
FIG. 4 is a schematic diagram of an RBF neural network structure;
FIG. 5 is a flowchart illustrating a learning attention detection and prediction method under a variable light environment;
FIG. 6 is a schematic diagram of a module layout structure according to an embodiment; FIG. 7 is a schematic view of a light modulation panel;
FIG. 8 is a schematic view showing a partial arrangement of a lower module according to another embodiment; FIG. 9 is a schematic view of a work area setting;
FIG. 10a is a structural diagram of a working area setting unit; FIGS. 10b and 10c are structural diagrams of the adjustment shaft; FIG. 10d is a view showing the structure of the lamp housing;
FIG. 11 is a schematic view of the intersection of a line of sight and a work surface; FIG. 12 is a schematic diagram of a sign data sequence;
FIGS. 13a and 13b are schematic diagrams of the first and second body characteristic evaluation functions of the eye opening, respectively;
FIGS. 13c and 13d are schematic diagrams of the first and second body characteristic evaluation functions of the gaze concentration, respectively;
FIG. 14 is a viewpoint distribution diagram.
Wherein:
1000 learning attention detection and prediction system, 100 learning attention detection and prediction device under variable light environment,
110 light color sensing unit, 120 image acquisition unit, 130 control unit, 140 user interface unit, 150 adjustable light set, 160 heart rate acquisition unit,
131 processing modules, 132 RBF neural networks, 133 connection switches, 134 iterative learning modules, 135 output modules, 136 storage modules, 151 dimmers, 152 LED lights,
1311 image processing unit, 1312 light color processing unit, 1351 display screen, 1352 communication interface, 13111 eye opening detector, 13112 sight line detector, 13113 mouth detector, 13121 illuminance detector, 13122 color temperature detector, 13123 color detector,
101 a base plate, 102 a bracket, 103 a binocular camera, 104 an infrared auxiliary light source, 105 a display bar, 106 a light color sensing block, 107 a key block, 108 a dimming panel, 109 a working area setting unit, 111 a calibration block,
1081 color coarse adjustment knob, 1082 color fine adjustment knob, 1083 brightness adjustment knob,
1091 pivot, 1092 adjusting plate, 1093 first adjusting shaft, 1094 second adjusting shaft, 1095 lamp groove, 1096 motor, 1097 driving rod, 1098 connecting rod,
951 LED lamp bead, 952 glass cover, 953 light focusing sheet.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings, but the present invention is not limited to these embodiments. The invention is intended to cover alternatives, modifications and equivalents which may be included within the spirit and scope of the invention.
In the following description of the preferred embodiments of the present invention, specific details are set forth in order to provide a thorough understanding of the present invention, and it will be apparent to those skilled in the art that the present invention may be practiced without these specific details.
The invention is described in more detail in the following paragraphs by way of example with reference to the accompanying drawings. It should be noted that the drawings are in simplified form and are not to precise scale, which is only used for convenience and clarity to assist in describing the embodiments of the present invention.
As shown in fig. 1, the method of the present invention is applied to a learning attention detection and anticipation system 1000, where the learning attention detection and anticipation system 1000 includes a learning attention detection and anticipation device 100 under a variable light environment and a dimmable light set 150, where the learning attention detection and anticipation device 100 under the variable light environment further includes a light color sensing unit 110, an image collecting unit 120, a heart rate collecting unit 160, a control unit 130, and a user interface unit 140.
The heart rate acquisition unit 160 acquires the heart rate of the learner, and the heart rate can be acquired through a wristwatch or a bracelet and transmitted to the control unit 130 through a communication interface.
The light color sensing unit 110 collects the illumination, color temperature and color of the illumination of the working surface, the illumination can be detected by an independent module, and the color temperature and color can be obtained by the same RGB or xyz color sensing module. Preferably, the color sensing module may be a TCS3430 sensor, the filter of TCS3430 having five channels including X, Y, Z channel and two Infrared (IR) channels, which may be used to infer the light source type. The TCS3430 sensor collects the light color signal of the working surface in real time, and the xyz color coordinate value and the color temperature of the color are respectively obtained after signal processing and conversion by the processing module in the control unit.
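For reference, one standard way to go from sensor tristimulus values to chromaticity coordinates and color temperature is McCamy's approximation; this is a common conversion, not necessarily the one implemented in the processing module:

```python
def xyz_to_xy(X, Y, Z):
    """CIE 1931 chromaticity coordinates from XYZ tristimulus values."""
    s = X + Y + Z
    return X / s, Y / s

def mccamy_cct(x, y):
    """McCamy's cubic approximation of correlated color temperature (K)
    from CIE 1931 (x, y); accurate to roughly +/- a few K near daylight."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33
```

For the D65 white point (X, Y, Z ≈ 95.047, 100.0, 108.883) this yields x ≈ 0.3127, y ≈ 0.3290 and a CCT near 6500 K.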
As shown in fig. 1 and fig. 2, the control unit 130 includes a processing module 131, an iterative learning module 134, a neural network module 132, a connection switch 133, an output module 135, and a storage module 136. The processing module 131 further includes an image processing unit 1311 and a light color processing unit 1312. As shown in fig. 2 and 3, the light color processing unit 1312 further includes an illuminance detector 13121, a color temperature detector 13122, and a color detector 13123, which process signals collected by the light color sensing unit to obtain three stimulus values of illuminance, color temperature, and xyz of color, which represent the illumination condition of the working surface, and 5 light color parameters in total. The image capturing unit 120 may employ a binocular camera, and the image processing part 1311 processes the signal captured by the image capturing unit 120 to obtain the attention of the learner.
The detection of the attention state can be based on technologies such as machine vision and image processing, and such methods are adopted in traffic driving, and there are many researches on realizing effective monitoring of the driver state by analyzing facial features of the driver.
For learning on the desktop, attention detection and analysis can likewise be performed by image processing. Different from the state of full engagement and concentrated attention, a person's physiological parameters change to different degrees when tired or distracted, and these can serve as the basis for monitoring the attention state. When the learner is inattentive, the eyelids droop and the eye opening degree decreases obviously, with even intermittent closure and blinking; in the sub-tired state before obvious drowsiness, reading speed drops and sight movement slows; occasionally the person may also yawn. The invention therefore detects the learner's attention state on the basis of these physiological features.
Specifically, as shown in fig. 2 and 3, the image processing section 1311 includes an eye opening degree detector 13111, a gaze direction detector 13112, and a mouth shape detector 13113, which respectively detect the opening degree of the learner's eyes, the gaze direction, and the mouth opening characteristics, and further obtains the learner's eye opening value, gaze concentration value, and gaze movement rate in conjunction with the calibration and conversion processes. The gaze concentration is the distance by which the gaze deviates from the preset working face block.
Referring to fig. 4 and 5, the method for detecting and predicting learning attention under variable light environment of the present invention comprises the following steps:
s1, establishing an artificial neural network in the control unit, wherein the neural network takes 6 parameters of working surface illumination, color temperature, xyz color coordinate values of colors and continuous learning time as input quantities, and takes the attention factor values of 3 individual characteristic parameters of eye opening, sight concentration and heart rate of learners for representing attention factors as output quantities, and the sight concentration is sight offset distance;
s2, the processing module processes the signals collected by the light color sensing unit to obtain 5 light color parameters including working surface illuminance, color temperature and xyz color coordinate values of the color, processes the signals collected by the image collecting unit to obtain an eye opening value, a sight concentration value and a sight movement rate of the learner, obtains the heart rate of the learner by reading the signals of the heart rate collecting unit, and obtains respective attention factor values by respectively carrying out preprocessing such as filtering, normalization and the like on the signals according to the value intervals of each parameter and then evaluating and quantizing the preprocessed values;
s3, sending a dimming signal to the dimmable lamp set through an output module of the control unit or a user interface unit, carrying out signal acquisition on the changed luminous environment based on the light color sensing unit, the image acquisition unit and the heart rate acquisition unit, and then carrying out signal processing according to the method of the step S2;
s4, repeating the step S3 for multiple times, obtaining a training sample set of the neural network, and training the artificial neural network by using the sample set;
s5, on the basis of the trained neural network, in the field environment, the attention of the learner in the current light environment is predicted on line:
and predicting attention factor values of physical parameters such as the eye opening degree, the sight concentration degree, the heart rate and the like of the learner based on the acquired field working surface illumination, the color temperature, the xyz color coordinate value of the color and the input continuous learning time, and outputting the result through an output module.
The specific processing procedure of the present invention is described in detail below.
The gaze estimation method based on image processing can be selected from the iris-sclera boundary method, the pupil-eye corner positioning method and the pupil-cornea reflection method. The first two estimate the sight direction using the infrared signal difference and the line connecting the eye corner and the pupil. Preferably, the invention adopts the third method, in which an infrared light source irradiates the cornea of the human eye; when light strikes the eye, a reflection is generated on the outer corneal surface and appears as a bright spot in the eye, called the Purkinje spot. When the eyeball rotates, the position of the Purkinje spot remains essentially fixed, so the sight direction can be estimated from the relative position of the pupil and the Purkinje spot.
In specific application, the pupil-cornea reflection method includes two implementation types: two-dimensional and three-dimensional sight estimation. The two-dimensional method uses a calibrated sight mapping function whose input parameters are the two-dimensional eye characteristic parameters and whose output is the sight direction or the screen fixation point. The three-dimensional method is based on binocular vision and obtains spatial three-dimensional information of the subject's eyes through a three-dimensional reconstruction process; it has high detection precision and a wide working range.
Based on a learning scene image acquired by a binocular camera, firstly, smoothing and threshold segmentation are carried out, noise is removed, the face and eye regions of a learner are positioned, and characteristic information such as the height-width ratio of human eyes, the pupils of the eyes, the Purkinje points and the like is extracted; secondly, performing stereo matching on the extracted feature points, and performing three-dimensional reconstruction on the pupils of the eyes and the Purkinje points based on a geometric constraint establishing process to obtain three-dimensional world coordinates of the feature points; and finally, judging the three-dimensional sight direction of the learner through a three-dimensional coordinate vector formed by the pupil and the Purkinje point. Based on the human eye height-width ratio and the sight direction tracking which are periodically obtained, the eye opening value and the sight space direction can be calculated.
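A minimal sketch of the final step, deriving a sight-direction vector from the two reconstructed 3-D feature points (a simplification that ignores the per-user calibration described later for the calibration blocks):

```python
import numpy as np

def gaze_direction(pupil_3d, purkinje_3d):
    """Unit sight-direction vector from the reconstructed 3-D pupil
    center and Purkinje spot world coordinates."""
    v = np.asarray(pupil_3d, float) - np.asarray(purkinje_3d, float)
    return v / np.linalg.norm(v)
```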
Specifically, as shown in fig. 1 and 6, the device of the present invention mounts a binocular camera 103 used by an image capturing unit on a bracket 102 directly opposite to a person in a work scene, and the bracket 102 is fixed on a base plate 101. An infrared auxiliary light source 104 for assisting visual line detection is also fixed on the bracket 102, the light color sensing unit is fixed in the light color sensing block 106 area of the bottom surface, and the keys of the user interface unit are arranged in the key block 107 area at the other end of the light color sensing block 106 symmetrical with respect to the bracket.
Referring to fig. 9, in order to detect and determine the viewpoint of the learner in the image processing, a reasonable work area needs to be preset in the work plane. For this purpose, a work area setting unit 109 is added to the apparatus.
The working area setting unit 109 is supported at the top end of the bracket 102 by a pivot 1091 at the center, and four triangular adjusting plates 1092 are movably connected to the pivot 1091 and symmetrically distributed at the left, right, front and back. As shown in fig. 10a, a first adjusting shaft 1093 is connected between the left and right adjusting plates 1092, a second adjusting shaft 1094 is connected between the front and rear adjusting plates 1092, and a rectangular light groove 1095 is formed on the bottom edges of the four adjusting plates. The two adjusting shafts are staggered in the longitudinal height.
As shown in fig. 10b, the first and second adjusting shafts are driven by a motor 1096 to drive two driving rods 1097 moving in opposite directions, wherein the driving rods are connected to the inner side of the adjusting plate.
As shown in fig. 10c, the drive rods 1097 of the two adjustment shafts may also be connected to the adjustment plates by a link 1098.
As shown in fig. 10d, the lamp groove 1095 at the end of the adjusting plate is embedded with an LED lamp bead 951, a glass cover 952 is arranged outside the lamp bead, and the light of the LED is focused into a strip shape by a light focusing sheet 953 around the glass cover.
As shown in fig. 9 and 10b, the rectangular light trough 1095 emits a strip-shaped light spot GS. The control unit drives the first adjusting shaft and the second adjusting shaft by controlling the motor to rotate, so that the inclination angles of the left and right adjusting plates and the front and back adjusting plates relative to the horizontal plane are respectively changed, and a rectangular area is defined on the horizontal plane of the working surface through four strip-shaped light spots and serves as a preset working surface block. When the motor rotates clockwise, the driving rod drives the adjusting plate to move outwards, so that the inclination angle of the adjusting plate relative to the horizontal plane is reduced, the strip-shaped light spots move outwards, and the working surface area block is enlarged; conversely, when the motor rotates counterclockwise, the working surface area shrinks. Preferably, 4 buttons may be provided in the buttons of the user interface unit to adjust the expansion and contraction of the work surface block in the left-right and front-rear directions, respectively. The range of the working face block can be recorded by the rotation angle of a motor and other mechanisms.
Through the online adjustment of the working face blocks, the acquisition of the detection sample is greatly facilitated, and the accuracy and the applicability of the sample acquisition are improved.
As shown in fig. 11, the line of sight acquired by the image processing unit is a v-ray passing through point P0. In the working horizontal plane G2, the preset working surface block is a rectangular region G1 with GA, GB, GC, GD as corner points, the normal vector of the working plane is u, and the world coordinate system is O-XYZ, then the coordinates of the intersection point P1 of the sight line and the working plane can be calculated.
First, the ray's parametric equation is:
P = P0 + t·v, i.e. (x, y, z) = (x0, y0, z0) + t·(vx, vy, vz), t ≥ 0,
wherein t is an independent variable parameter,
then, substituting into the plane equation u · (P − P_GA) = 0 (GA being a known point of the working plane), the ray parameter at the intersection is t = u · (P_GA − P0) / (u · v),
and the coordinates of the intersection point P1 of the line of sight with the working plane can be calculated:

P1 = P0 + [ u · (P_GA − P0) / (u · v) ] · v.
As shown in fig. 11, in the G2 plane, the regions outside the range of the working surface block are divided into eight regions I to VIII according to the four corners of the working surface block. If the viewpoint P1 is not located in the work surface block, it is first determined which region it lies in, and then the shortest distance d between the viewpoint and the work surface block is calculated based on that region. Specifically, if the viewpoint falls in the diagonal regions II, IV, VI and VIII, the distance between the viewpoint and the corresponding corner point is calculated; otherwise, the distance between the viewpoint and the corresponding corner point in the X direction or the Y direction is calculated. As shown in the figure, P1 is in region V; then,
d = |x_P1 − x_GD|.
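The intersection and shortest-distance computations above can be sketched as follows; the clamp-based distance is mathematically equivalent to the eight-region case analysis (diagonal regions reduce to a corner distance, the others to a single-axis distance):

```python
import math
import numpy as np

def view_point(P0, v, G, u):
    """Intersection P1 of the sight ray P0 + t*v with the work plane
    passing through point G with normal u (assumes u . v != 0)."""
    P0, v, G, u = (np.asarray(a, float) for a in (P0, v, G, u))
    t = np.dot(u, G - P0) / np.dot(u, v)
    return P0 + t * v

def offset_distance(P1, xmin, xmax, ymin, ymax):
    """Shortest distance d from viewpoint P1 to the rectangular work
    block [xmin, xmax] x [ymin, ymax] in the plane."""
    dx = max(xmin - P1[0], 0.0, P1[0] - xmax)  # 0 when inside in x
    dy = max(ymin - P1[1], 0.0, P1[1] - ymax)  # 0 when inside in y
    return math.hypot(dx, dy)
```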
with reference to fig. 1 and 4, the present invention adopts a neural network to structurally model a mapping relationship between an ambient lighting condition and human attention. Specifically, the RBF neural network shown in fig. 4 is established, and the network takes 6 parameters of the illumination of the working surface, the color temperature, the xyz color coordinate value of the color and the duration learning time as input quantities, and takes the attention factor value of 3 characteristic parameters of the eye opening, the gaze concentration and the heart rate of the learner, which are used for representing the attention factor, as output quantities.
Wherein, the sight deviation distance according to the sight concentration value is represented according to the intersection point of the learner sight and the working surface, namely the distance between the viewpoint and the working surface block.
Referring to fig. 12, a schematic diagram of a normalized sign data sequence is shown; the sequence is recorded after the raw eye opening data is filtered, and the midpoint of the interval of maximum probability of the sign quantity is normalized to 1.
To find the lighting environment that helps to improve the attention of the learner, first, the attention level of the learner is checked and judged. The invention respectively represents the attention factor of a learner through 3 individual characteristic parameters including the eye opening degree, the sight concentration degree and the heart rate of the learner, and the 3 individual characteristic parameters are quantized as follows:
T1. For the eye opening sequence de, since the eye opening signal contains many high-frequency components, window average filtering is first performed by the following formula to obtain the eye opening e at the current moment:
e(t) = (1/L) · Σ_{i=0…L−1} de(t − i),
then, a down-sampling sequence Xe of the eye opening degree is obtained by moving the window at intervals,
Xe={e(0),e(Ts),e(2Ts),...},
Next, the sequence Xe is fitted with the function y = a·e^(−b·x) to obtain the variation trend of the eye opening. The opening change time tu is then obtained from the fitted function:
tu = (1/b) · ln(E1/E2),
wherein L is the window width, Ts is the down-sampling interval, a and b are both fitting coefficients, E1 and E2 are two thresholds of the eye opening degree, and for the normalized eye opening degree value sequence, the values of E1 and E2 are between 0 and 1.
Then, as shown in FIG. 13a and FIG. 13b, the first and second body characteristic values of the eye opening are calculated based on the eye opening e and the opening change time tu,
ke1 = 0 for e ≤ ae; ke1 = (e − ae)/(be − ae) for ae < e < be; ke1 = 1 for be ≤ e ≤ ce; ke1 = (de − e)/(de − ce) for ce < e < de; ke1 = 0 for e ≥ de
ke2 = 0 for tu ≤ atu; ke2 = (tu − atu)/(btu − atu) for atu < tu < btu; ke2 = 1 for tu ≥ btu
wherein be and ce are lower limit value and upper limit value of the interval which is obtained according to statistics and covers the eye opening value with the set proportion in the normal state, ae and de are the other two preset lower limit value and upper limit value respectively; btu is an upper limit value of eye opening change time covering a set proportion in a current continuous learning time range in a normal state, and atu is a set lower limit value;
the attention factor value for calculating the eye opening is,
ke=ke1·ke2。
T2. For the sight concentration, the intersection point of the learner's sight with the working surface is detected. If the intersection falls outside the range of the preset working face block, the shortest distance from the intersection to the block is calculated and the duration for which the corresponding viewpoint continuously exceeds the preset range is recorded. For the distance sequence dd, the current sight offset distance d is first obtained by window average filtering, and the maximum duration td for which the viewpoint continuously exceeds the preset range within the corresponding window is calculated at the same time. If the intersection falls within the working face block, the distance d is assigned zero.
As shown in fig. 13c and 13d, the first and second body characteristic values of the gaze concentration are calculated from the distance d and the time length td,
kd1 = e^(−a·d^b)
kd2 = 1 for td ≤ Td; kd2 = exp(−((td − Td)/σ)²) for td > Td
wherein a and b are fitting coefficients, and the larger the values of a and b, the faster the function value decreases; Td is the out-of-range duration value that covers a set proportion of cases within the current continuous learning time range in the normal state, and σ is a preset width value;
the attention factor value for calculating the gaze concentration is,
kd=kd1·kd2。
T3. For the heart rate, the variation interval is relatively much smaller and the variation period is longer; its attention factor evaluation value is obtained as follows. As shown in fig. 12, two dotted lines are drawn at Δ% above and below the unit value on the vertical axis. With this up-down fluctuation interval set around the expected heart rate in the normal state, the number of times N that the data fluctuates beyond the interval range within a preset time length centered on the current time, and the proportion Rb of heart rate samples falling within the interval range over the same time length, are counted:

N = N+ + N−,

where N+ is the number of excursions above the interval and N− is the number of excursions below it.
The first and second body characteristic values of the heart rate are respectively calculated from the count N and the ratio Rb,
Figure GDA0002714324970000193
Figure GDA0002714324970000194
wherein TN is the maximum number of times, covering the preset proportion within the current continuous learning time range in the normal state, that the heart rate exceeds the fluctuation-interval range, σN is a preset width value, and aRb and bRb are two proportion thresholds set according to statistics;
the attention factor value of the heart rate is then calculated as
kb=kb1·kb2。
The preset parameters used in the quantization, such as E1 and E2, can be gradually reduced as the continuous learning time increases, and these two parameters can also be set as relative proportions; the other preset parameters can be dynamically adjusted in a similar way. In the heart-rate processing, the Δ defining the fluctuation interval can be set according to statistics: Δ is chosen such that, in the normal state, the probability of the sign data falling within the dotted-line interval equals a probability threshold whose upper and lower limits both lie between 0.92 and 0.98. The normal state refers to sign-detection samples of the learner under a comfortable illumination condition of a higher grade.
In calculating the attention factor values of the various signs, the processing of the eye openness, gaze concentration and heart rate takes the characteristics of each sign into account while embodying a consistent evaluation standard: for example, the larger the defined attention factor value, the higher the learner's attention. At the same time, compared with single-factor evaluation such as eye openness alone, the multi-factor sign evaluation better reflects the attention characteristics of different learners, thereby providing a foundation for the subsequent illumination-influence modeling and illumination optimization control.
Preferably, a learning-mode key is provided in the user interface unit. When the reading mode is selected with this key, the output quantities of the neural network are extended with the attention factor value of the line-of-sight movement rate, a sign parameter characterizing attention; the calculation process is as follows:
referring to fig. 14, the intersection points P1 of the learner's line of sight with the working surface Z1 are detected within a preset time length Tp; for the set of intersection points falling within the preset working-surface block, the circumscribed rectangle Z2 of the outermost viewpoints is found, and the line-of-sight movement rate is calculated from the length X and width Y of this rectangle,
Figure GDA0002714324970000201
then the attention factor value of the line-of-sight movement rate is calculated as
Figure GDA0002714324970000202
wherein avs and bvs are two speed thresholds respectively set according to statistics of standard samples in the normal state.
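A sketch of the reading-mode computation follows. The exact vs formula and the kvs curve are equation images in the original, so the (X + Y)/Tp rate measure and the plateau-with-linear-falloff shape below are assumptions:

```python
def gaze_move_rate_factor(points, Tp, avs=0.02, bvs=0.3):
    """Reading-mode line-of-sight movement-rate factor (assumed shapes).

    points : (x, y) viewpoint intersections collected over Tp seconds,
             already restricted to the preset working-surface block
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    X = max(xs) - min(xs)          # length of circumscribed rectangle Z2
    Y = max(ys) - min(ys)          # width of circumscribed rectangle Z2
    vs = (X + Y) / Tp              # assumed movement-rate measure
    if avs <= vs <= bvs:           # normal reading-scan range
        return 1.0
    if vs < avs:                   # staring / dozing: scale down linearly
        return vs / avs
    return bvs / vs                # erratic scanning: scale down

kvs = gaze_move_rate_factor([(0.0, 0.0), (0.1, 0.05), (0.2, 0.1)], Tp=2.0)
```

A scan rate between the two thresholds gives a factor of 1; rates below avs (a frozen gaze) or above bvs (erratic scanning) reduce the factor.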
Preferably, a mouth-shape detector is provided to detect the mouth-opening characteristic; correspondingly, a mouth-openness sign parameter characterizing attention is added to the output quantities of the neural network. The attention factor value of the mouth openness is the product of the mouth-openness sign value and the continuous-mouth-opening-duration sign value,
where the mouth-openness sign value is calculated from a half-normal distribution function peaking at zero openness, and the continuous-opening-duration sign value is calculated from another half-normal distribution function peaking at zero duration.
Referring to fig. 4, the model of the RBF neural network is as follows.
The output of the ith node of the hidden layer is as follows:
Figure GDA0002714324970000211
the output of the jth node of the output layer is as follows:
Figure GDA0002714324970000212
wherein, the dimension of the input vector X is 6, the number of hidden layer H nodes is p, the dimension of the output vector Y is n, CiIs the center of the Gaussian function of the ith node of the hidden layer, sigmaiIs the width of the center of the Gaussian function, | | X-CiI is the vectors X and CiEuclidean distance between, wijThe weight value from the ith hidden node to the jth output node.
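The two-layer model above can be written directly in NumPy; the dimensions follow the text (6 inputs, p hidden nodes, n outputs), while the random centers and weights below are placeholders for illustration only:

```python
import numpy as np

def rbf_forward(X, C, sigma, W):
    """Forward pass of the RBF network described above.

    X     : input vector, shape (6,) - illuminance, colour temperature,
            x/y/z colour coordinates, continuous learning time
    C     : hidden-node centers C_i, shape (p, 6)
    sigma : Gaussian widths sigma_i, shape (p,)
    W     : hidden-to-output weights w_ij, shape (p, n)
    Returns (h, y): hidden-layer outputs and network outputs.
    """
    dist = np.linalg.norm(X - C, axis=1)       # ||X - C_i||
    h = np.exp(-dist**2 / (2.0 * sigma**2))    # Gaussian hidden outputs
    y = h @ W                                  # y_j = sum_i w_ij * h_i
    return h, y

rng = np.random.default_rng(0)
C = rng.uniform(size=(10, 6))                  # 10 placeholder centers
W = rng.uniform(size=(10, 3))                  # 3 attention-factor outputs
h, y = rbf_forward(C[0], C, np.full(10, 0.5), W)
```

An input that coincides with a center activates that hidden node fully (h_i = 1), which is the locality property the K-means center placement relies on.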
When the invention is adopted, parameter initialization is performed first, where the σ_i of each hidden-layer node can be determined by the following equation:
Figure GDA0002714324970000213
in which D_i is the maximum distance between the center of the ith hidden node and the other centers.
In the initial stage of modeling and evaluating attention with this method, when the training samples are few and the illuminance, color temperature and color components among the light-color parameters in the sample set do not yet vary sufficiently, each sample X is used as the center vector C_i of a hidden-layer node; as the samples become richer, the number of hidden-layer nodes and their center vectors C_i are determined with a K-means clustering algorithm. To obtain sufficient training samples, the person can be asked to collect samples in a preferred environment where the lightness and chroma can be adjusted over a wider range.
Because the value intervals of the network input and output quantities may differ greatly, the sample data are first normalized, mapping them into the [0, 1] value space, in order to improve the effectiveness of the data. The performance index function of the network approximation, i.e. the total average error function, is:
Figure GDA0002714324970000214
wherein N is the total number of samples in the training sample set, k is the sample index, Ŷ(X_k) is the actual output for the input X_k, and Y_k is the desired output for the input X_k. In the RBF network training process, the parameters are adjusted so that the network approaches the corresponding mapping relation in the least-squares sense, i.e. so that E reaches its minimum; for this, a gradient descent method can be adopted to correct the weights from the hidden layer to the output layer so that the objective function is minimized.
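A sketch of the [0, 1] normalization and one gradient-descent step on the hidden-to-output weights, assuming the conventional least-squares objective E = (1/2N)·Σ_k ‖Ŷ(X_k) − Y_k‖² (the patent's exact scaling of E is an equation image):

```python
import numpy as np

def normalize(samples):
    """Min-max scale each column of the sample matrix into [0, 1]."""
    smin = samples.min(axis=0)
    span = samples.max(axis=0) - smin
    span[span == 0] = 1.0                  # guard constant columns
    return (samples - smin) / span

def train_output_weights(H, Y, W, lr=0.1):
    """One gradient-descent step on the hidden-to-output weights W,
    minimizing E = (1/2N) * sum_k ||Yhat_k - Y_k||^2.

    H : hidden-layer outputs for the batch, shape (N, p)
    Y : desired outputs, shape (N, n)
    """
    N = H.shape[0]
    grad = H.T @ (H @ W - Y) / N           # dE/dW
    return W - lr * grad

# toy check: one step from zero weights reduces the residual
rng = np.random.default_rng(1)
H = rng.uniform(size=(8, 4))
Y = H @ rng.uniform(size=(4, 2))
W0 = np.zeros((4, 2))
W1 = train_output_weights(H, Y, W0)
```

Repeating the step until the preset iteration count or error threshold is reached corresponds to the iterative learning loop described below.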
In the application of the device and method, the light-color acquisition unit must acquire signals both when training samples are collected and when the trained network is used to predict the attention parameters; image acquisition, however, is needed only when the current task is to collect training samples, not when it is to predict.
To improve the generalization capability of the neural network, enough training samples must be collected. The invention sends dimming signals to the lamp set through the output module or the user interface unit and, for the light environment after each change, obtains training samples for the artificial neural network based on the light-color sensing unit, the image acquisition unit and the heart-rate acquisition unit.
As shown in fig. 1, in the environment where the system is tested or used, the dimmable lamp set 150 is preferably a dimmable LED lamp set. The driving-current value of each LED lamp 152 in the set is adjusted by the dimmer 151, a driver capable of changing its output current; the driver adjusts the light output by changing the PWM duty cycle of the driving current of each channel of the LED lamp.
Preferably, the LED lamp is a dimmable lamp with RGB three-primary-color current channels; the light color of the lamp can then be changed by changing the driving-current value of one of the channels. When the three channel currents are increased or decreased synchronously from a given state, the lamp shows no change in color, only a brightness that gradually rises or falls.
Preferably, the processing module changes the light output of the LED lamp set stepwise within the lamp set's known dimming range. For example, a variable mapping table is established that links the channel-current values of the LED lamps to the corresponding illuminance, color temperature and color collected on the working surface. Within the value interval of the illumination vector space composed of illuminance, color temperature and color, only one variable, such as the illuminance, is changed while the others, such as the color temperature and color, are kept unchanged; the mapping table is then searched in reverse to find the current value of each LED channel corresponding to the current illumination vector, and the processing module sends the PWM duty cycle of each channel current to the dimmer as a signal through the communication interface of the output module. By continuously changing the working point in the illumination vector space, the processing module obtains enough network training samples; the sampling points can be sparse in the end-value regions of the light-color variables and denser in the middle regions, such as the region with a color temperature around 4500 K and an illuminance of 300 lx to 500 lx. The collected samples are stored in the storage module.
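The sweep with denser mid-range sampling can be sketched for one axis of the illumination vector space; the step sizes below are illustrative choices, not values from the patent:

```python
def sampling_points(lo, hi, mid_lo, mid_hi, coarse, fine):
    """Working points along one light-colour axis: sparse near the end
    values, dense in the middle region (e.g. 300-500 lx illuminance)."""
    pts, v = [], lo
    while v <= hi:
        pts.append(v)
        # fine step inside the mid region, coarse step elsewhere
        v += fine if mid_lo <= v < mid_hi else coarse
    return pts

# illuminance axis: 100-900 lx, dense between 300 and 500 lx
lux = sampling_points(100, 900, 300, 500, coarse=100, fine=25)
```

The full sample grid is the Cartesian product of such per-axis point lists, with only one axis varied at a time as described above.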
The iterative learning module 134 obtains the 5 actual output values corresponding to a training sample from the processing module 131 through the connection switch 133, obtains from the RBF neural network 132 the 5 mapping values produced by the network from the sample's 6 input values, adjusts the neural-network structural parameters according to these 5 actual values and 5 mapping values so as to train the network, and repeats the training until a preset number of training iterations is reached or the objective function falls below a set threshold. The trained network structural parameters are stored in the storage module.
The parameters required for processing by the control unit, such as the preset values, are input through keys in the user interface unit. After a person enters a new learning environment, the trained neural network can, based on its generalization ability, predict and anticipate what attention the learner will pay under the ambient illumination condition, and display or output the prediction result through the output module.
Preferably, only one of the color temperature and the xyz color coordinate values of the color may be used among the input quantities of the neural network.
As shown in fig. 6, the output module 135 preferably includes a display bar 105 for indicating the current concentration level of the learner. Alternatively, the output module may employ the display screen 1351 and a plurality of separate display bars to display the evaluations of the various factors of attention, respectively.
Preferably, the output module 135 further includes a communication interface 1352, and outputs the detected or predicted attention factor values to the outside through the interface module.
With reference to figs. 1 and 6, during online prediction the control unit acquires the illumination signals in real time through the sensing units and processes them into the 5 light-color parameters, i.e. the illuminance, color temperature and xyz color coordinates of the working surface. Together with the continuous learning time, input through the user interface unit or preset as an integer multiple of the sampling period, these parameters are fed into the trained neural network; after network mapping, the attention-factor prediction values of the sign parameters such as eye openness, gaze concentration and heart rate are obtained. The predicted attention parameters can be shown on display bars or together on one display screen.
Preferably, the continuous-learning-time input can also be varied dynamically, and the attention-parameter prediction values obtained after this value changes and is mapped by the neural network are displayed as curves varying over time.
Through the display of the output module, learners can anticipate whether the current illumination condition is favorable and, where dimming is available, change the brightness or color temperature of the lamp by adjusting the illumination, e.g. the current, so as to obtain an illumination environment conducive to improving attention.
Because learning objects differ in difficulty, as an optimization a key indicating the current learning difficulty can be provided in the user interface unit, and a learning-difficulty coefficient is added to the input quantities of the neural network, where the difficulty coefficient can be an integer between 1 and 5.
As shown in fig. 6 and 7, the user interface unit preferably has a light adjusting panel 108 on the bottom board 101, wherein the light adjusting panel comprises three knobs, namely a color coarse adjusting knob 1081, a color fine adjusting knob 1082 and a brightness adjusting knob 1083, which are respectively used for performing color coarse adjustment, color fine adjustment and brightness adjustment of the LED lamp.
The color coarse-adjustment knob 1081, i.e. the gear-adjustment knob, is divided into 6 steps corresponding to red, yellow, green, cyan, blue and magenta, whose RGB values are (255, 0, 0), (255, 255, 0), (0, 255, 0), (0, 255, 255), (0, 0, 255) and (255, 0, 255), respectively. A color circle similar to that of the HSV color space is established, with red, yellow, green, cyan, blue and magenta arranged in sequence around the circle, every two adjacent colors separated by 60°. The color coarse-adjustment knob 1081 and the color fine-adjustment knob 1082 together determine the RGB ratio of the light emitted by the LED lamp, while the brightness-adjustment knob 1083 determines the relative magnitude of the LED driving current, i.e. the brightness of the three primary-color LEDs can be adjusted by rotating it.
Because 6 color gears alone can hardly satisfy the environment's light-color requirements, fine color adjustment is provided by the color fine-adjustment knob. This knob can be turned both clockwise and counterclockwise; referring to the color circle of the HSV color space, turning it clockwise slowly moves the LED lamp color toward the next color clockwise on the circle, and turning it counterclockwise moves the color toward the next color counterclockwise. The fine-adjustment knob covers 30° of the color circle in each direction, so the coarse and fine knobs together can cover the full 360° of the circle. The RGB component values change gradually during fine adjustment: for example, when the coarse knob 1081, i.e. the gear knob, points to red and the fine knob is turned clockwise, the values R and B remain unchanged while G increases linearly, so the color shifts toward a larger green component, i.e. toward yellow; conversely, when the fine knob is turned counterclockwise, R and G remain unchanged while B increases linearly, and the color shifts toward a larger blue component, i.e. toward magenta.
After the color is set, the ratio of the brightness set value to its maximum is multiplied by each RGB channel component as the basis for adjusting each channel current. With reference to fig. 1, the user interface unit can send a dimming signal to the lamp set directly or via the control unit to change the light output of the lamp set.
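The three-knob mapping described above can be sketched as follows. The linear change of the single differing RGB component follows the description; the ±30° scaling (half-way to the neighbouring gear at full knob deflection) and the integer rounding are assumptions:

```python
GEARS = [(255, 0, 0), (255, 255, 0), (0, 255, 0),    # red, yellow, green
         (0, 255, 255), (0, 0, 255), (255, 0, 255)]  # cyan, blue, magenta

def knob_to_rgb(gear, fine, brightness):
    """Map the three panel knobs to RGB channel drive levels.

    gear       : 0..5, coarse-knob position (red..magenta)
    fine       : -1.0..1.0, fine knob; +/-1.0 corresponds to +/-30 degrees
                 on the colour circle (half-way to the neighbouring gear)
    brightness : 0.0..1.0, ratio of the brightness set value to its maximum
    """
    base = list(GEARS[gear])
    # adjacent gear in the turning direction on the colour circle
    nxt = GEARS[(gear + 1) % 6] if fine >= 0 else GEARS[(gear - 1) % 6]
    # exactly one RGB component differs between adjacent gears;
    # move it linearly, reaching half-way at |fine| = 1
    for c in range(3):
        if nxt[c] != base[c]:
            base[c] += int(abs(fine) * 0.5 * (nxt[c] - base[c]))
    return tuple(int(brightness * v) for v in base)

rgb = knob_to_rgb(0, 1.0, 1.0)   # red gear, fine knob fully clockwise
```

Starting from red, a full clockwise fine turn raises only the green component (toward yellow), a counterclockwise turn raises only the blue component (toward magenta), and the brightness knob scales all three channels, matching the behaviour described above.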
Preferably, the three knobs can respectively control the magnitude of the driving current of one channel in three channels of the LED lamp RGB.
Preferably, when the image acquisition unit is a monocular camera, as shown in fig. 8, several calibration blocks 111 with known positions can be provided on the surface of the base plate, each carrying a circular light spot, and a calibration confirmation key can be provided in the user interface unit. The control unit then performs distance calibration through the calibration blocks: the blocks are lit in turn and the learner gazes at the lit block; after the calibration confirmation key is pressed, an image of the learner's face is collected through the image acquisition unit, the gaze direction of the eyes is extracted from the collected image, and the extraction result is compared with the position of the calibration block so as to calibrate the gaze-direction detection parameters.
When the learner is distracted by emotion or the like, the collected samples deviate greatly from those of the normal state; although the neural network has good fault tolerance, too many such samples affect the network's accuracy. For this purpose, a cancel-sampling key is preferably provided in the user interface unit, and the control unit suspends data sampling and sample recording after detecting that this key has been pressed.
To increase the applicability of the network, the control unit may preferably further include a real-time clock module, and the neural network module may further include a seasonal parameter obtained from the real-time clock module as an input.
Preferably, the neural network module may further add a time period parameter obtained from the real-time clock module as an input, wherein the time period is morning, afternoon or evening, respectively.
Preferably, the control unit can be additionally provided with a temperature and humidity measurement module, and the neural network module is used for adding two parameters of temperature and humidity acquired from the temperature and humidity measurement module as input.
Preferably, the control unit may further include a noise measurement module, and the neural network module adds a noise level parameter obtained from the noise measurement module as an input.
Preferably, an LED lamp can be controlled through the output module; when an obvious drop in the person's attention is detected, the lamp is commanded to flash briefly to remind the learner to concentrate or to pause learning.
The invention is applied to the detection and anticipation of learning attention in a variable light environment. After samples with rich variation have been collected, and because the light-color variation domain contains infinitely many combinations, the invention can predict how the attention parameters, including the eye openness and gaze concentration, change with accumulated learning time under the illumination conditions of various field environments, thereby providing a basis for switching to a potentially high-attention light environment.
The above-described embodiments do not limit the scope of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and principle of the above-described embodiments should be included in the protection scope of the technical solution.

Claims (5)

1. The device for detecting and pre-judging learning attention under the variable light environment is characterized by comprising a light color sensing unit, an image acquisition unit, a heart rate acquisition unit, a control unit and a user interface unit;
the light color sensing unit is used for acquiring the illumination, color temperature and color of illumination of a working surface, the image acquisition unit is used for acquiring images of the face and the working surface area of a learner, the heart rate acquisition unit is used for acquiring the heart rate of the learner, the user interface unit is used for performing parameter input and key operation, and the output module in the control unit is used for performing signal display and outputting an attention factor value;
the control unit further comprises a processing module, an iterative learning module, a neural network module, a connection switcher, and a storage module, and is configured to:
the processing module processes the signals acquired by the light color sensing unit to acquire 5 light color parameters including illumination, color temperature and xyz color coordinate values of colors of the working surface, processes the signals acquired by the image acquisition unit to acquire an eye opening value, a sight concentration value and a sight movement rate of a learner, and acquires the heart rate of the learner by reading the signals of the heart rate acquisition unit, wherein the sight concentration is a sight offset distance, namely the shortest distance from the intersection point of the current sight and the working surface to a preset working surface block,
the neural network module takes 6 parameters of working surface illumination, color temperature, xyz color coordinate value of color and continuous learning time as input quantities, takes attention factor values of 3 characteristic parameters of the learner's eye opening degree, sight concentration degree and heart rate for representing the attention factors as output quantities, establishes an artificial neural network,
the iterative learning module acquires 3 output quantity actual values corresponding to the training sample from the processing module through the connecting switcher respectively, acquires 3 mapping values of 6 input quantities corresponding to the training sample after being processed by the neural network from the neural network, adjusts the neural network structure parameters according to the 3 output quantity actual values and the 3 mapping values to train the neural network, and repeats the training until the training is completed,
during online prediction, the neural network predicts the eye opening degree, the sight concentration degree and the attention factor value of the heart rate physical sign parameter of the learner based on the illumination, the color temperature, the xyz color coordinate value of the color and the continuous learning time of the current working surface and outputs the values to the output module through the processing module;
the storage module is used for recording and storing the neural network structure parameters, the iterative learning parameters and the calculation process values;
the attention factor values of the 3 individual characteristic parameters for characterizing the attention factor are obtained by processing respectively as follows:
firstly, for the eye opening sequence de, window average filtering is performed by the following formula to obtain the eye opening e at the current time,
Figure FDA0002776225080000021
then, a down-sampling sequence Xe of the eye opening degree is obtained by moving the window at intervals,
Xe={e(0),e(Ts),e(2Ts),...},
next, the sequence Xe is function-fitted using the formula y = a·e^(−b·x), and the opening degree change time tu is obtained according to the fitted function,
Figure FDA0002776225080000022
wherein L is the window width, Ts is the down-sampling interval, a and b are both fitting coefficients, E1 and E2 are two thresholds of the eye opening degree, and for the normalized eye opening degree value sequence, the values of E1 and E2 are between 0 and 1;
calculating a first and a second volume characteristic value of the eye opening according to the eye opening e and the opening change time tu,
Figure FDA0002776225080000023
Figure FDA0002776225080000024
wherein be and ce are lower limit value and upper limit value of the interval which is obtained according to statistics and covers the eye opening value with the set proportion in the normal state, ae and de are the other two preset lower limit value and upper limit value respectively; btu is an upper limit value of eye opening change time covering a set proportion in a current continuous learning time range in a normal state, and atu is a set lower limit value;
the attention factor value for calculating the eye opening is,
ke=ke1·ke2;
secondly, detecting the intersection point of the learner's sight line and the working surface, if the intersection point falls outside the range of the preset working surface block, calculating the shortest distance from the intersection point to the working surface block and recording the time length of the corresponding sight point continuously exceeding the preset range, for the distance sequence dd, obtaining the current sight line offset distance d through window average filtering, and simultaneously calculating the maximum time length td of the sight point continuously exceeding the preset range in the corresponding window time range,
calculating a first body characteristic value and a second body characteristic value of the sight concentration degree according to the distance d and the time length td,
Figure FDA0002776225080000031
Figure FDA0002776225080000032
wherein a and b are fitting coefficients, Td is the maximum duration, covering the set proportion within the current continuous learning time range in the normal state, for which the viewpoint continuously exceeds the preset range, and σ is a preset time width value;
the attention factor value for calculating the gaze concentration is,
kd=kd1·kd2;
thirdly, setting an up-and-down fluctuation interval for the heart rate data sequence according to the heart rate expected value in a normal state, counting the times N that the data fluctuation exceeds the up-and-down fluctuation interval range within a preset time length with the current time as the center, and the number of samples Rb of the heart rate within the interval range within the preset time length,
N = N+ + N−,
wherein N+ is the number of times the heart rate crosses out of the interval and N− is the number of times it crosses back into the interval;
respectively calculating a first body characteristic value and a second body characteristic value of the heart rate according to the times N and the ratio Rb,
Figure FDA0002776225080000033
Figure FDA0002776225080000034
wherein TN is the maximum number of times, covering the preset proportion within the current continuous learning time range in the normal state, that the heart rate exceeds the fluctuation-interval range, σN is a preset width value, and aRb and bRb are two proportion thresholds set according to statistics;
the attention factor value for calculating the heart rate is,
kb=kb1·kb2。
2. the device for learning attention detection and anticipation in a variable light environment according to claim 1, further comprising a working area setting unit for presetting the working area block,
the working area setting unit is supported at the top end of a bracket through a pivot positioned in the center, four triangular adjusting plates which are symmetrically distributed are movably connected on the pivot, a first adjusting shaft is connected between the left adjusting plate and the right adjusting plate, a second adjusting shaft is connected between the front adjusting plate and the rear adjusting plate, rectangular lamp grooves are respectively arranged at the bottom edges of the four adjusting plates,
the rectangular lamp groove emits strip-shaped light spots, and the control unit changes the inclination angle of the adjusting plate relative to the horizontal plane through the first adjusting shaft and the second adjusting shaft, so that a rectangular area is defined on the horizontal plane of the working surface through the four strip-shaped light spots and serves as a preset working surface block;
the user interface unit is provided with a learning mode key, when the user interface unit is selected to be a reading mode through the learning mode key, the output quantity of the neural network is increased by an attention factor value of a sight line movement rate sign parameter used for representing the attention factor, and the calculation process is as follows:
detecting the intersection point of the learner's sight line and the working surface within a preset time length Tp, solving a circumscribed rectangle for the set of the intersection points falling within the range of the preset working surface block, calculating the sight line moving rate according to the length X and the width Y of the rectangle,
Figure FDA0002776225080000041
then, the attention factor value for calculating the line-of-sight movement rate is,
Figure FDA0002776225080000042
wherein, avs and bvs are two speed thresholds respectively set according to statistics.
3. The device for detecting and predicting learning attention under variable light environment according to claim 2, wherein the camera adopted by the image acquisition unit is installed on a bracket right opposite to the person in the working scene, the user interface unit comprises a key for indicating the current learning difficulty, and the neural network adds a learning difficulty coefficient input parameter.
4. The apparatus for learning attention detection and anticipation in a variable light environment according to claim 1, comprising a base plate, wherein a plurality of calibration blocks with known positions are distributed on the surface of the base plate, each calibration block has a circular light spot, the user interface unit has a calibration confirmation key, and the control unit is further configured to:
and the calibration blocks are lightened in turn, the learner watches the lightened calibration blocks, the image of the face of the learner is collected through the image collection unit after the calibration confirmation key is pressed, the sight line direction of the human eyes is extracted based on the collected image, and the extraction result is compared with the position of the calibration blocks so as to calibrate the sight line direction detection parameters.
5. The device for learning attention detection and prejudgment under the variable light environment according to any one of claims 1 to 4, wherein the neural network adopts an RBF neural network, and the model of the RBF neural network is as follows:
the output of the ith node of the hidden layer is:

h_i = exp( −‖X − C_i‖² / (2σ_i²) ),  i = 1, 2, …, p

the output of the jth node of the output layer is:

y_j = Σ_{i=1}^{p} w_ij · h_i,  j = 1, 2, …, n
wherein the dimension of the input vector X is m, the number of hidden-layer nodes H is p, the dimension of the output vector Y is n, C_i is the center of the Gaussian function of the ith hidden node, σ_i is the width of that Gaussian center, ‖X − C_i‖ is the Euclidean distance between the vectors X and C_i, and w_ij is the weight from the ith hidden node to the jth output node.
CN201910263070.9A 2019-04-02 2019-04-02 Learning attention detection and prejudgment device under variable light environment Active CN109949193B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN202011437396.8A CN112949372A (en) 2019-04-02 2019-04-02 Working area setting unit and use method thereof in learning attention detection and prejudgment
CN201910263070.9A CN109949193B (en) 2019-04-02 2019-04-02 Learning attention detection and prejudgment device under variable light environment
CN202011437459.XA CN112464863A (en) 2019-04-02 2019-04-02 Learning attention detection and prejudgment device under variable light environment
CN202011437412.3A CN112949373A (en) 2019-04-02 2019-04-02 Learning attention detection and prejudgment method under variable light environment
CN202011434362.3A CN112651303A (en) 2019-04-02 2019-04-02 Learning attention detection and prejudgment system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910263070.9A CN109949193B (en) 2019-04-02 2019-04-02 Learning attention detection and prejudgment device under variable light environment

Related Child Applications (4)

Application Number Title Priority Date Filing Date
CN202011437459.XA Division CN112464863A (en) 2019-04-02 2019-04-02 Learning attention detection and prejudgment device under variable light environment
CN202011437412.3A Division CN112949373A (en) 2019-04-02 2019-04-02 Learning attention detection and prejudgment method under variable light environment
CN202011437396.8A Division CN112949372A (en) 2019-04-02 2019-04-02 Working area setting unit and use method thereof in learning attention detection and prejudgment
CN202011434362.3A Division CN112651303A (en) 2019-04-02 2019-04-02 Learning attention detection and prejudgment system

Publications (2)

Publication Number Publication Date
CN109949193A CN109949193A (en) 2019-06-28
CN109949193B true CN109949193B (en) 2020-12-25

Family

ID=67012507

Family Applications (5)

Application Number Title Priority Date Filing Date
CN202011437396.8A Withdrawn CN112949372A (en) 2019-04-02 2019-04-02 Working area setting unit and use method thereof in learning attention detection and prejudgment
CN202011434362.3A Withdrawn CN112651303A (en) 2019-04-02 2019-04-02 Learning attention detection and prejudgment system
CN202011437412.3A Withdrawn CN112949373A (en) 2019-04-02 2019-04-02 Learning attention detection and prejudgment method under variable light environment
CN202011437459.XA Withdrawn CN112464863A (en) 2019-04-02 2019-04-02 Learning attention detection and prejudgment device under variable light environment
CN201910263070.9A Active CN109949193B (en) 2019-04-02 2019-04-02 Learning attention detection and prejudgment device under variable light environment

Family Applications Before (4)

Application Number Title Priority Date Filing Date
CN202011437396.8A Withdrawn CN112949372A (en) 2019-04-02 2019-04-02 Working area setting unit and use method thereof in learning attention detection and prejudgment
CN202011434362.3A Withdrawn CN112651303A (en) 2019-04-02 2019-04-02 Learning attention detection and prejudgment system
CN202011437412.3A Withdrawn CN112949373A (en) 2019-04-02 2019-04-02 Learning attention detection and prejudgment method under variable light environment
CN202011437459.XA Withdrawn CN112464863A (en) 2019-04-02 2019-04-02 Learning attention detection and prejudgment device under variable light environment

Country Status (1)

Country Link
CN (5) CN112949372A (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110458030A (en) * 2019-07-15 2019-11-15 南京青隐信息科技有限公司 A kind of method of depth self study adjustment user's attention of fresh air bookshelf
CN110415653B (en) * 2019-07-18 2022-01-18 昆山龙腾光电股份有限公司 Backlight brightness adjusting system and method and liquid crystal display device
CN110516553A (en) * 2019-07-31 2019-11-29 北京航空航天大学 The monitoring method and device of working condition
CN110309626B (en) * 2019-08-09 2024-03-15 浙江派威数字技术有限公司 Optical comfort evaluation data acquisition equipment and optical comfort evaluation system
CN110728724A (en) * 2019-10-21 2020-01-24 深圳创维-Rgb电子有限公司 Image display method, device, terminal and storage medium
CN110684547A (en) * 2019-10-22 2020-01-14 中国计量大学 Optimized control method for biomass pyrolysis carbonization kiln
CN112989865B (en) * 2019-12-02 2023-05-30 山东浪潮科学研究院有限公司 Crowd attention focus judging method based on head gesture judgment
CN111881830A (en) * 2020-07-28 2020-11-03 安徽爱学堂教育科技有限公司 Interactive prompting method based on attention concentration detection
CN113705349B (en) * 2021-07-26 2023-06-06 电子科技大学 Attention quantitative analysis method and system based on line-of-sight estimation neural network
CN113723277B (en) * 2021-08-27 2024-02-27 华中师范大学 Learning intention monitoring method and system integrated with multi-mode visual information

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101658425B (en) * 2009-09-11 2011-06-01 西安电子科技大学 Device and method for detecting attention focusing degree based on analysis of heart rate variability
CN101917801A (en) * 2010-07-30 2010-12-15 中山大学 Light regulation method, device and intelligent desk lamp
US8847771B2 (en) * 2013-01-25 2014-09-30 Toyota Motor Engineering & Manufacturing North America, Inc. Method and apparatus for early detection of dynamic attentive states for providing an inattentive warning
AU2016210245A1 (en) * 2015-01-20 2017-07-13 Balmuda Inc. Illumination device
JP6695021B2 (en) * 2015-11-27 2020-05-20 パナソニックIpマネジメント株式会社 Lighting equipment
CN105953125B (en) * 2016-06-08 2018-10-12 杭州鸿雁电器有限公司 Method from tracing type desk lamp and by providing illumination from trace mode
CN106195656B (en) * 2016-07-13 2019-01-04 河海大学常州校区 The operation shadowless lamp of colour temperature and brightness is adjusted according to human eye state
CN206481478U (en) * 2017-02-24 2017-09-08 合肥本山电子科技有限公司 A kind of LED eye-protecting lamps with toning dimming function
CN109492514A (en) * 2018-08-28 2019-03-19 初速度(苏州)科技有限公司 A kind of method and system in one camera acquisition human eye sight direction
CN109522815B (en) * 2018-10-26 2021-01-15 深圳博为教育科技有限公司 Concentration degree evaluation method and device and electronic equipment

Also Published As

Publication number Publication date
CN112949373A (en) 2021-06-11
CN112949372A (en) 2021-06-11
CN112651303A (en) 2021-04-13
CN112464863A (en) 2021-03-09
CN109949193A (en) 2019-06-28

Similar Documents

Publication Publication Date Title
CN109949193B (en) Learning attention detection and prejudgment device under variable light environment
CN109905943B (en) Illumination control device based on attention factor
US8967809B2 (en) Methods and systems for intelligent visual function assessments
CN112533317B (en) Scene type classroom intelligent illumination optimization method
WO2016078490A1 (en) Method for measuring visual effect of non-colored target in different light environments and system thereof
CN205006859U (en) Two mesh pupils comprehensive testing system of setting a camera
CN205181314U (en) Portable pair of mesh pupil detection device
CN105868570A (en) Method for measuring and calculating visual effects of target in different light environments
CN110163371B (en) Dimming optimization method for sleep environment
CN104739364B (en) Binocular pupil light reflex tracking system
CN110960036A (en) Intelligent mirror system and method with skin and makeup beautifying guide function
CN110113843A (en) Lighting control system and light modulation mapping device based on sleep efficiency factor
CN209029110U (en) Chinese medicine facial diagnosis is health management system arranged
CN109998497A (en) System and plane of illumination illumination testing apparatus are sentenced in inspection of falling asleep in luminous environment
CN110062498A (en) Public Quarters blending illumination system, method and optimization method based on the controllable ceiling lamp of subregion
CN110013231A (en) Sleep environment illumination condition discrimination method and reading face light measuring method
CN108154866A (en) A kind of brightness adjusts display screen system and its brightness real-time regulating method in real time
WO2012154279A1 (en) Methods and systems for intelligent visual function assessments
CN104739367A (en) Binocular pupil light synthetic detection system
CN109168222A (en) Improve the means of illumination and intelligent lighting system of study and work efficiency
CN210810960U (en) Diagnostic device of intelligent screening strabismus and diopter
CN208938661U (en) Chinese medicine facial diagnosis system
CN116936097B (en) Training lamp user eye abnormal movement intelligent detection method
Yang et al. A multichannel LED-based lighting approach to improve color discrimination for low vision people
Chung Development of a wearable eye tracker

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant