CN116782451B - LED atmosphere lamp control method and system with self-adaptive brightness and color - Google Patents


Info

Publication number
CN116782451B
CN116782451B (Application CN202311076681.5A)
Authority
CN
China
Prior art keywords
data
feature
preset
module
color
Prior art date
Legal status
Active
Application number
CN202311076681.5A
Other languages
Chinese (zh)
Other versions
CN116782451A (en)
Inventor
李尧
肖琼
王子钧
汤爱保
Current Assignee
SHENZHEN DONGLU TECHNOLOGY CO LTD
Original Assignee
SHENZHEN DONGLU TECHNOLOGY CO LTD
Priority date
Filing date
Publication date
Application filed by SHENZHEN DONGLU TECHNOLOGY CO LTD filed Critical SHENZHEN DONGLU TECHNOLOGY CO LTD
Priority to CN202311076681.5A
Publication of CN116782451A
Application granted
Publication of CN116782451B


Abstract

The invention provides a method and a system for controlling an LED atmosphere lamp with self-adaptive brightness and color. The method comprises: acquiring sensor data of a plurality of persons; processing the data through dimension transformation, encoding and fusion to obtain emotion scores; and setting the brightness and color of the LED atmosphere lamp according to the emotion scores. The invention has the beneficial effects that the emotion scores of the plurality of persons are analysed from their sensor data over a period of time, so the calculated scores are closer to the actual values and the user experience is improved.

Description

LED atmosphere lamp control method and system with self-adaptive brightness and color
Technical Field
The invention relates to the field of artificial intelligence, in particular to a method and a system for controlling an LED atmosphere lamp with self-adaptive brightness and color.
Background
Color is an important visual signal in the early processing of vision and affects both mood and emotion. Different colors can evoke different emotional and psychological responses: red is often associated with excitement, anger, or enthusiasm, while blue is associated with calm or meditation. Adjusting the color and brightness of an LED atmosphere lamp based on an analysis of the user's emotion has gradually become a main research direction for improving user experience. However, existing emotion analysis relies mainly on the user's body movements or voice and ignores how the user's emotion changes over a period of time, so the emotion cannot be analysed accurately and the user does not obtain a better experience.
Disclosure of Invention
The invention mainly aims to provide a method and a system for controlling an LED atmosphere lamp with self-adaptive brightness and color, so as to solve the problem that the prior art cannot accurately analyse the emotion of a user, preventing the user from obtaining a better experience.
The invention provides a control method of an LED atmosphere lamp with self-adaptive brightness and color, which comprises the following steps:
continuously acquiring sensor data of N personnel through a preset sensor; n is an integer greater than or equal to 2;
extracting, from the acquired sensor data, the target sensor data at the T time points closest to the current time point;
arranging the target sensor data of the N persons in time order to obtain first data X; wherein the first data X = [X_tn] is a T×N matrix, and X_tn represents the target sensor data of the n-th person at time order t;
performing dimension transformation on the first data through a preset weight matrix to obtain second data;
encoding the second data through a preset time encoder to obtain a first characteristic, and encoding the second data through a preset personnel encoder to obtain a second characteristic; wherein the time encoder is used for extracting the characteristics of the longitudinal data of the second data, and the personnel encoder is used for extracting the characteristics of the transverse data of the second data;
According to a preset data fusion method, carrying out fusion processing on the first feature and the second feature to obtain a target feature;
inputting the target characteristics into a preset emotion analysis model for processing, so as to obtain an emotion value output by the preset emotion analysis model;
and adjusting the brightness and the color of the LED atmosphere lamp according to the emotion value.
Further, the step of encoding the second data by a preset time encoder to obtain a first feature includes:
performing linear transformation on the second data through three different linear matrices to obtain three first intermediate matrices Q ∈ R^(A×d), K ∈ R^(A×d), V ∈ R^(A×d); wherein Q, K and V respectively represent the different first intermediate matrices, R denotes the real numbers, A represents the number of rows of each first intermediate matrix, and d represents the number of columns;
multiplying the first intermediate matrix Q by the transpose of the first intermediate matrix K to obtain a second intermediate matrix QK^T;
normalizing the second intermediate matrix and multiplying it by the first intermediate matrix V to obtain a target matrix;
and inputting the target matrix into a preset convolution layer to perform feature extraction to obtain the first feature.
Further, the step of performing fusion processing on the first feature and the second feature according to a preset data fusion method to obtain a target feature includes:
performing preliminary fusion on the first feature and the second feature according to the formula F1 = W·concat(f1, f2) + b to obtain a preliminary feature; wherein F1 represents the preliminary feature, W represents a preset weight, f1 represents the first feature, f2 represents the second feature, b represents a preset offset, and concat represents concatenating the vectors along a dimension;
calculating the weights w1 and w2 of the first feature and the second feature according to the softmax function; wherein w1 represents the weight of the first feature, w2 represents the weight of the second feature, and w1 + w2 = 1;
calculating the target feature according to the formula F2 = w1·f1 + w2·f2, wherein F2 represents the target feature.
Further, before the step of inputting the target feature into a preset emotion analysis model to process, so as to obtain an emotion value output by the preset emotion analysis model, the method further includes:
acquiring a designated number of sample data and emotion values corresponding to each group of sample data;
marking the corresponding sample data by each emotion value to obtain marked target data;
Dividing the target data into a training data set and a verification data set according to a preset proportion;
inputting the data in the training data set into a preset neural network model for supervised training, so as to obtain a temporary model;
verifying the temporary model by using the verification data set to obtain a verification result, and judging whether the verification result passes the verification;
and if the verification result is that the verification is passed, marking the temporary model as an emotion analysis model.
Further, the step of adjusting the brightness and the color of the LED atmosphere lamp according to the emotion value includes:
acquiring the required light brightness and color based on the emotion value;
and setting parameters of the LED atmosphere lamp according to the required light brightness and color, so that the LED atmosphere lamp displays corresponding brightness and color.
The invention also provides a system for controlling the LED atmosphere lamp with self-adaptive brightness and color, which comprises:
the acquisition module is used for continuously acquiring sensor data of N personnel through a preset sensor; n is an integer greater than or equal to 2;
the extraction module is used for extracting, from the acquired sensor data, the target sensor data at the T time points closest to the current time point;
the arrangement module is used for arranging the target sensor data of the N persons in time order to obtain first data X; wherein the first data X = [X_tn] is a T×N matrix, and X_tn represents the target sensor data of the n-th person at time order t;
the transformation module is used for carrying out dimension transformation on the first data through a preset weight matrix to obtain second data;
the encoding module is used for encoding the second data through a preset time encoder to obtain a first characteristic, and encoding the second data through a preset personnel encoder to obtain a second characteristic; wherein the time encoder is used for extracting the characteristics of the longitudinal data of the second data, and the personnel encoder is used for extracting the characteristics of the transverse data of the second data;
the fusion module is used for carrying out fusion processing on the first characteristic and the second characteristic according to a preset data fusion method to obtain a target characteristic;
the processing module is used for inputting the target characteristics into a preset emotion analysis model for processing, so as to obtain emotion values output by the preset emotion analysis model;
and the adjusting module is used for adjusting the brightness and the color of the LED atmosphere lamp according to the emotion numerical value.
Further, the encoding module includes:
the conversion sub-module is used for performing linear transformation on the second data through three different linear matrices to obtain three first intermediate matrices Q ∈ R^(A×d), K ∈ R^(A×d), V ∈ R^(A×d); wherein Q, K and V respectively represent the different first intermediate matrices, R denotes the real numbers, A represents the number of rows of each first intermediate matrix, and d represents the number of columns;
the first computing sub-module is used for multiplying the first intermediate matrix Q by the transpose of the first intermediate matrix K to obtain a second intermediate matrix QK^T;
the second calculation sub-module is used for normalizing the second intermediate matrix and multiplying it by the first intermediate matrix V to obtain a target matrix;
and the feature extraction submodule is used for inputting the target matrix into a preset convolution layer to perform feature extraction to obtain the first feature.
Further, the fusion module includes:
the preliminary fusion sub-module is used for performing preliminary fusion on the first feature and the second feature according to the formula F1 = W·concat(f1, f2) + b to obtain a preliminary feature; wherein F1 represents the preliminary feature, W represents a preset weight, f1 represents the first feature, f2 represents the second feature, b represents a preset offset, and concat represents concatenating the vectors along a dimension;
the third calculation sub-module is used for calculating the weights w1 and w2 of the first feature and the second feature according to the softmax function; wherein w1 represents the weight of the first feature, w2 represents the weight of the second feature, and w1 + w2 = 1;
the fourth calculation sub-module is used for calculating the target feature according to the formula F2 = w1·f1 + w2·f2, wherein F2 represents the target feature.
Further, the LED atmosphere lamp control system with adaptive brightness and color further comprises:
the emotion value acquisition module is used for acquiring the appointed number of sample data and emotion values corresponding to each group of sample data;
the sample data marking module is used for marking the corresponding sample data by each emotion value to obtain marked target data;
the dividing module is used for dividing the target data into a training data set and a verification data set according to a preset proportion;
the training data input module is used for inputting the data in the training data set into a preset neural network model for supervised training so as to obtain a temporary model;
the verification module is used for verifying the temporary model by utilizing the verification data set to obtain a verification result and judging whether the verification result passes the verification;
And the marking module is used for marking the temporary model as an emotion analysis model if the verification result is that the verification is passed.
Further, the adjustment module includes:
the parameter acquisition sub-module is used for acquiring the required light brightness and color based on the emotion value;
and the parameter setting module is used for setting the parameters of the LED atmosphere lamp according to the required light brightness and color, so that the LED atmosphere lamp displays the corresponding brightness and color.
The invention has the beneficial effects that: sensor data of a plurality of persons are acquired and processed through dimension transformation, encoding and fusion to obtain emotion scores, and the brightness and color of the LED atmosphere lamp are set according to the emotion scores. Because the emotion scores of the plurality of persons are analysed from their sensor data over a period of time, the calculated scores are closer to the actual values and the user experience is improved.
Drawings
FIG. 1 is a flow chart of a method for controlling an LED ambient light with adaptive brightness and color according to an embodiment of the present invention;
FIG. 2 is a block diagram of a control system for an LED ambient light with adaptive brightness and color in accordance with one embodiment of the present invention;
Fig. 3 is a schematic block diagram of a computer device according to an embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that, in the embodiments of the present application, all directional indicators (such as up, down, left, right, front, and back) are merely used to explain the relative positional relationship, movement conditions, and the like between the components in a specific posture (as shown in the drawings), if the specific posture is changed, the directional indicators correspondingly change, and the connection may be a direct connection or an indirect connection.
The term "and/or" herein merely describes an association relationship between associated objects and means that three relationships may exist; for example, "A and/or B" may represent: A exists alone, A and B exist together, or B exists alone.
Furthermore, descriptions such as those referred to as "first," "second," and the like, are provided for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implying an order of magnitude of the indicated technical features in the present disclosure. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, but it is necessary to base that the technical solutions can be realized by those skilled in the art, and when the technical solutions are contradictory or cannot be realized, the combination of the technical solutions should be considered to be absent and not within the scope of protection claimed in the present invention.
Referring to fig. 1, the invention provides a method for controlling an LED atmosphere lamp with self-adaptive brightness and color, comprising:
S1: continuously acquiring sensor data of N persons through a preset sensor; N is an integer greater than or equal to 2;
S2: extracting, from the acquired sensor data, the target sensor data at the T time points closest to the current time point;
S3: arranging the target sensor data of the N persons in time order to obtain first data X; wherein the first data X = [X_tn] is a T×N matrix, and X_tn represents the target sensor data of the n-th person at time order t;
S4: performing dimension transformation on the first data through a preset weight matrix to obtain second data;
S5: encoding the second data through a preset time encoder to obtain a first feature, and encoding the second data through a preset personnel encoder to obtain a second feature; wherein the time encoder is used for extracting features of the longitudinal data of the second data, and the personnel encoder is used for extracting features of the transverse data of the second data;
S6: performing fusion processing on the first feature and the second feature according to a preset data fusion method to obtain a target feature;
S7: inputting the target feature into a preset emotion analysis model for processing, so as to obtain an emotion value output by the preset emotion analysis model;
S8: adjusting the brightness and color of the LED atmosphere lamp according to the emotion value.
As described in step S1 above, the preset sensor may be any of various sensor devices. The sensor devices may be worn on the person's body to obtain the corresponding sensor data, or may work without being worn; the sensors may be in a one-to-one or one-to-many relationship with the persons, as long as the sensor data corresponding to each person can be obtained. The sensor devices may be physiological sensors, which use different technologies and sensing elements, such as electrical, optical, pressure-sensing, and temperature-sensing techniques, to detect and measure physiological indicators. Common physiological sensor devices include heart rate monitors, blood pressure meters, thermometers, respiration sensors, and the like, used to collect data about the individual. "Continuously acquiring" means that data acquisition continues for a period of time (continuously, intermittently, or at predetermined time intervals) through sensors that measure physiological indicators in contact with or without contact with the body. For example, a heart rate monitor may measure heart rate by sensing the pulse, and a blood pressure monitor may measure blood pressure through the pressure on an arm. This allows long-term data to be collected from each individual for more comprehensive analysis and study. "N persons" means that this data collection method can be applied to multiple persons simultaneously.
As described in step S2 above, the target sensor data at the T time points closest to the current time point are extracted from the acquired sensor data. Specifically, the sensor data are first loaded into the program or system; the data may come from a file, a database, or a real-time stream. The current time point is then determined, and its timestamp or system time is obtained for subsequent computation. The current time point is compared with the timestamp of each sensor data point, and the time difference (as an absolute value) is calculated. The T sensor data points with the smallest time differences are selected. In this way, a time series of sensor data can be obtained, giving a better view of how the sensor data change and trend over time.
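The extraction in step S2 can be sketched as follows. The parallel-array data layout and the NumPy implementation are illustrative assumptions, not part of the patent:

```python
import numpy as np

def extract_latest_t(timestamps, readings, t_now, T):
    """Pick the T readings whose timestamps are closest to t_now (step S2)."""
    timestamps = np.asarray(timestamps, dtype=float)
    # Absolute time difference between each data point and the current time.
    diff = np.abs(timestamps - t_now)
    # Indices of the T smallest differences, restored to time order.
    idx = np.sort(np.argsort(diff)[:T])
    return [readings[i] for i in idx]
```

Restoring time order after selection keeps the result usable as a time series for the later steps.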
As described in step S3 above, the target sensor data of the N persons are arranged in time order to obtain the first data. Specifically, the target sensor data of each person are collected, ensuring that each data point contains timestamp information. The sensor data of all persons are then combined into one data set and sorted by timestamp, yielding the corresponding matrix, i.e., the first data.
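A minimal sketch of step S3, assuming each person's readings arrive as (timestamp, value) pairs in a dict keyed by person index (this layout is an assumption for illustration):

```python
import numpy as np

def build_first_data(records, T, N):
    """Arrange per-person readings into a T x N matrix X with
    X[t, n] = reading of person n at time order t (step S3)."""
    X = np.zeros((T, N))
    for n in range(N):
        # Sort each person's readings by timestamp; t is the time order.
        ordered = sorted(records[n], key=lambda tv: tv[0])[:T]
        for t, (_, value) in enumerate(ordered):
            X[t, n] = value
    return X
```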
As described in step S4 above, dimension transformation is performed on the first data through a preset weight matrix to obtain the second data. The weight matrix used for the dimension transformation is preset according to the dimensions of the first data; since the sensors are determined in advance, the dimensions of the first data are known in advance even though its specific values are not. The size of the weight matrix must be adapted to the dimensions of the first data so that the first data can be multiplied by the preset weight matrix, which may be implemented using a matrix multiplication operation. Ensuring that the dimensions of the weight matrix are compatible with those of the first data yields the second data, i.e., the first data after dimension transformation by the preset weight matrix.
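The dimension transformation of step S4 is a single matrix multiplication. The concrete shapes below (T=4 time steps, N=3 persons, projected to d=8 columns) are illustrative assumptions:

```python
import numpy as np

T, N, d = 4, 3, 8
rng = np.random.default_rng(0)
first_data = rng.normal(size=(T, N))   # T x N matrix from step S3
weight = rng.normal(size=(N, d))       # preset weight matrix, N x d
second_data = first_data @ weight      # T x d second data
```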
As described in step S5 above, the second data is encoded by a preset time encoder to obtain the first feature, and by a preset personnel encoder to obtain the second feature; the time encoder is used for extracting features of the longitudinal data of the second data, and the personnel encoder is used for extracting features of the transverse data. The time encoder is intended to capture the time-dependent characteristics of the sensor data, and any feasible encoder may be used; to take into account the influence of all time steps on the current time step, a detailed encoding rule for the time encoder is provided later. The personnel encoder extracts features of the transverse data of the second data, i.e., the influence of other persons on the current person, to obtain the second feature. Extracting these two different kinds of features facilitates judging the emotion scores of multiple persons so that the color and brightness of the LED lamp can be adjusted.
As described in step S6 above, the first feature and the second feature are fused according to a preset data fusion method to obtain the target feature. Specifically, a suitable data fusion method may be selected according to the characteristics of the first and second features; the fusion may be performed in different ways, such as weighted summation, concatenation, or element-wise multiplication, which the present application does not limit. The target feature obtained by fusion combines the relationship between time and the emotions of multiple persons, so an emotion score that better matches the emotions of the group can be obtained from it. In a specific embodiment, a data fusion method is provided later.
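The three generic fusion alternatives named above can be sketched in a few lines; the concrete vectors are illustrative assumptions, and the patent's own fusion method is the one described later:

```python
import numpy as np

f1 = np.array([1.0, 2.0])   # first feature (illustrative)
f2 = np.array([3.0, 4.0])   # second feature (illustrative)

weighted_sum = 0.5 * f1 + 0.5 * f2        # weighted summation
concatenated = np.concatenate([f1, f2])   # concatenation ("stitching")
elementwise = f1 * f2                     # element-by-element multiplication
```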
As described in step S7 above, the target feature is input into a preset emotion analysis model for processing to obtain the emotion value output by the model. The emotion analysis model may be a pre-trained neural network model, a machine learning model, or another model suited to emotion analysis tasks. Inputting the target feature into the model yields a corresponding emotion value; each emotion value corresponds to an emotional tone related to the emotions of the group, such as tension or pleasure.
As described in step S8 above, the brightness and color of the LED atmosphere lamp are adjusted according to the emotion value. Specifically, the emotion value is mapped to brightness and color ranges according to the range of the emotion values and the design requirements. For example, it may be defined that when the emotion value is below a threshold, the light is low in brightness and warm in color, and when the emotion value is above the threshold, the light is high in brightness and cool in color. The brightness of the LED lamp is adjusted according to the mapped emotion value; depending on the light control system, a dimming function may be used to control the brightness. The color of the LED lamp is likewise adjusted; depending on the light control system, a color control function may be used to change the color temperature, hue, or saturation. In a specific embodiment, a table of correspondence between emotion values and the brightness and color of the LED atmosphere lamp may be preset, and the lamp then set according to the obtained emotion value. In this way, the emotion scores of multiple persons are analysed from their sensor data over a period of time, the calculated scores are closer to the actual values, and the user experience is improved.
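The threshold mapping described above can be sketched as follows. The threshold, brightness ranges, and RGB colors are illustrative assumptions; the patent only requires a preset correspondence between emotion values and lamp parameters:

```python
def emotion_to_light(score, threshold=0.5):
    """Map an emotion score in [0, 1] to (brightness %, RGB color) per step S8."""
    if score < threshold:
        # Below threshold: low brightness, warm color.
        brightness = 30 + 40 * (score / threshold)                       # 30-70 %
        color = (255, 180, 100)                                          # warm white
    else:
        # At or above threshold: high brightness, cool color.
        brightness = 70 + 30 * ((score - threshold) / (1 - threshold))   # 70-100 %
        color = (150, 200, 255)                                          # cool white
    return round(brightness), color
```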
In one embodiment, the step S5 of encoding the second data by a preset time encoder to obtain the first feature includes:
S501: performing linear transformation on the second data through three different linear matrices to obtain three first intermediate matrices Q ∈ R^(A×d), K ∈ R^(A×d), V ∈ R^(A×d); wherein Q, K and V respectively represent the different first intermediate matrices, R denotes the real numbers, A represents the number of rows of each first intermediate matrix, and d represents the number of columns;
S502: multiplying the first intermediate matrix Q by the transpose of the first intermediate matrix K to obtain a second intermediate matrix QK^T;
S503: normalizing the second intermediate matrix and multiplying it by the first intermediate matrix V to obtain a target matrix;
S504: inputting the target matrix into a preset convolution layer for feature extraction to obtain the first feature.
As described in step S501 above, the second data is linearly transformed by three different linear matrices to obtain three first intermediate matrices. Note that the dimensions of the three linear matrices must be identical; after the linear transformations, the three first intermediate matrices are obtained, where the number of rows A equals the number of time steps T.
As described in steps S502 to S503 above, the first intermediate matrix Q is multiplied by the transpose of the first intermediate matrix K to obtain the second intermediate matrix. In this way, matrix operations can capture the correlations in the data and extract useful temporal information. The second intermediate matrix is then normalized and multiplied by the first intermediate matrix V to obtain the target matrix. The normalization keeps the data within a certain range and eliminates scale differences between dimensions, so a more accurate representation of the temporal information is obtained.
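Steps S501-S503 amount to a scaled-dot-product attention step. The sqrt(d) scaling below is the common Transformer convention and an assumption on my part, since the patent only states that the second intermediate matrix is normalized:

```python
import numpy as np

def time_encoder_attention(second_data, Wq, Wk, Wv):
    """Attention core of the time encoder (steps S501-S503):
    Q = X Wq, K = X Wk, V = X Wv; out = softmax(Q K^T / sqrt(d)) V."""
    Q, K, V = second_data @ Wq, second_data @ Wk, second_data @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # second intermediate matrix, scaled
    scores = np.exp(scores - scores.max(axis=-1, keepdims=True))
    scores /= scores.sum(axis=-1, keepdims=True)   # row-wise softmax normalization
    return scores @ V                              # target matrix fed to the conv layer
```

The returned target matrix would then pass through the preset convolution layer (step S504) to produce the first feature.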
In an embodiment, the step of encoding the second data by a preset personnel encoder to obtain the second feature is performed similarly to the extraction of the first feature. The difference is that the first feature is extracted from the longitudinal (time-wise) data of the second data, while the second feature is extracted from the transverse (person-wise) data, so the corresponding linear matrices must likewise have matching dimensions in order to extract features across persons, thereby obtaining the second feature.
In one embodiment, the step S6 of performing fusion processing on the first feature and the second feature according to a preset data fusion method to obtain a target feature includes:
S601: performing preliminary fusion on the first feature and the second feature according to the formula F1 = W·concat(f1, f2) + b to obtain a preliminary feature; wherein F1 represents the preliminary feature, W represents a preset weight, f1 represents the first feature, f2 represents the second feature, b represents a preset offset, and concat represents concatenating the vectors along a dimension;
S602: calculating the weights w1 and w2 of the first feature and the second feature according to the softmax function; wherein w1 represents the weight of the first feature, w2 represents the weight of the second feature, and w1 + w2 = 1;
S603: calculating the target feature according to the formula F2 = w1·f1 + w2·f2, wherein F2 represents the target feature.
As described in step S601 above, the first feature and the second feature are concatenated to form a new feature, i.e., the preliminary feature. This preserves the independence of each feature while taking the information of both into account, providing a more comprehensive and accurate feature representation.
As described in step S602 above, the weights w1 and w2 of the first feature and the second feature are calculated according to the softmax function. Specifically, the first feature and the second feature are each weighted and summed, and the summed values are then normalized, i.e., input into the softmax function, yielding the weight values corresponding to the two features.
As described in step S603 above, the target feature is calculated according to the formula F2 = w1·f1 + w2·f2, wherein F2 represents the target feature. Fusing the two features through this gating structure takes the correlation between the first feature and the second feature into account, which benefits the subsequent analysis and improves prediction accuracy.
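Steps S601-S603 can be sketched end to end. The patent does not specify exactly what the softmax is applied to, so using the dot product of the preliminary feature F1 with each input feature is one plausible reading, labeled here as an assumption:

```python
import numpy as np

def fuse_features(f1, f2, W, b):
    """Gated fusion of steps S601-S603 (sketch; softmax scores assumed
    to be dot products of F1 with f1 and f2)."""
    # S601: preliminary fusion F1 = W . concat(f1, f2) + b
    F1 = W @ np.concatenate([f1, f2]) + b
    # S602: softmax weights w1, w2 with w1 + w2 = 1
    s = np.array([F1 @ f1, F1 @ f2])
    e = np.exp(s - s.max())
    w1, w2 = e / e.sum()
    # S603: gated combination F2 = w1*f1 + w2*f2
    return w1 * f1 + w2 * f2
```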
In one embodiment, before the step S7 of inputting the target feature into a preset emotion analysis model to obtain an emotion value output by the preset emotion analysis model, the method further includes:
S611: acquiring a specified number of sample data and the emotion value corresponding to each group of sample data;
S612: marking the corresponding sample data with each emotion value to obtain marked target data;
S613: dividing the target data into a training data set and a verification data set according to a preset proportion;
S614: inputting the data in the training data set into a preset neural network model for supervised training, so as to obtain a temporary model;
S615: verifying the temporary model by using the verification data set to obtain a verification result, and judging whether the verification result passes the verification;
S616: if the verification result is that the verification is passed, marking the temporary model as the emotion analysis model.
As described in steps S611-S612 above, a specified number of sample data and the emotion value corresponding to each group of sample data are acquired, and each emotion value is used to mark its corresponding sample data, yielding marked target data. The sample data may be collected manually, and the corresponding emotion values may be assessed by relevant personnel from the sample data; the sample data are then marked accordingly, which facilitates the subsequent supervised training.
As described in steps S614-S616 above, by inputting the data into the neural network model for training, a model for the emotion analysis task can be constructed by learning the patterns and features of the data. The model is then evaluated on the verification data set to gauge its performance on new data. If the verification result is satisfactory, the temporary model can be regarded as the emotion analysis model and used for subsequent prediction or analysis tasks.
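The training flow of steps S611-S616 can be sketched as follows. An ordinary least-squares fit stands in for the preset neural network model, and a mean-squared-error threshold stands in for the verification criterion; both are illustrative assumptions, not the patent's concrete model.

```python
import numpy as np

def split_dataset(X, y, train_ratio=0.8, seed=0):
    # S613: divide the marked target data into training and verification sets
    # according to a preset proportion.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(len(X) * train_ratio)
    return (X[idx[:cut]], y[idx[:cut]]), (X[idx[cut:]], y[idx[cut:]])

def train_and_verify(X, y, threshold=1.0):
    # S614: supervised training of a temporary model (least squares stand-in).
    (Xtr, ytr), (Xva, yva) = split_dataset(X, y)
    w, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
    # S615: verify on the held-out set; the MSE threshold is an assumed criterion.
    mse = float(np.mean((Xva @ w - yva) ** 2))
    # S616: accept the temporary model as the emotion analysis model only on a pass.
    return w, mse, mse <= threshold
```

A real implementation would replace the least-squares fit with the preset neural network model and choose the verification criterion to suit the emotion-value regression task.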
In one embodiment, the step S8 of adjusting the brightness and color of the LED atmosphere lamp according to the emotion value includes:
S801: acquiring the required light brightness and color based on the emotion value;
S802: setting the parameters of the LED atmosphere lamp according to the required light brightness and color, so that the LED atmosphere lamp displays the corresponding brightness and color.
As described in steps S801-S802 above, the brightness and color of the light can be adjusted by setting the parameters of the LED atmosphere lamp according to the emotion value, so that the lighting suits the detected emotion. For example, the brightness of the LED lamp can be adjusted to convey the level of arousal in different emotional states, or the color temperature or hue can be adjusted to convey the warmth or color character of an emotional state. This approach combines emotional experience with ambient lighting, providing a more personalized and immersive atmosphere for the user.
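A minimal sketch of steps S801-S802 might map the emotion value to a brightness level and an RGB color as below. The value range, the warm-to-cool mapping, the brightness floor of 64, and the fixed green channel are all illustrative assumptions; the patent only states that brightness and color are derived from the emotion value.

```python
def emotion_to_light(score, lo=0.0, hi=1.0):
    # Normalize the emotion value to [0, 1], clamping out-of-range inputs.
    t = min(max((score - lo) / (hi - lo), 0.0), 1.0)
    # Brightness rises with the score; the floor of 64 keeps the lamp visible.
    brightness = int(round(64 + t * (255 - 64)))
    # Warm (red) light for low scores, cool (blue) light for high scores.
    r = int(round(255 * (1 - t)))
    b = int(round(255 * t))
    return brightness, (r, 128, b)
```

The returned brightness and RGB triple would then be written to the LED driver as the lamp's parameters.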
Referring to fig. 2, the present invention also provides an LED atmosphere lamp control system with adaptive brightness and color, comprising:
the acquisition module 10 is used for continuously acquiring sensor data of N personnel through a preset sensor; n is an integer greater than or equal to 2;
An extracting module 20, configured to extract T time point target sensor data closest to the current time point from the acquired sensor data;
An arrangement module 30, configured to arrange the target sensor data of the N persons in time order to obtain first data X; wherein the first data X is a T×N matrix whose element X_tn represents the target sensor data of the nth person at time order t;
the transformation module 40 is configured to perform dimension transformation on the first data through a preset weight matrix to obtain second data;
the encoding module 50 is configured to encode the second data by using a preset time encoder to obtain a first feature, and encode the second data by using a preset personnel encoder to obtain a second feature; wherein the time encoder is used for extracting the characteristics of the longitudinal data of the second data, and the personnel encoder is used for extracting the characteristics of the transverse data of the second data;
the fusion module 60 is configured to perform fusion processing on the first feature and the second feature according to a preset data fusion method, so as to obtain a target feature;
the processing module 70 is configured to input the target feature into a preset emotion analysis model for processing, so as to obtain an emotion value output by the preset emotion analysis model;
And the adjusting module 80 is used for adjusting the brightness and the color of the LED atmosphere lamp according to the emotion numerical value.
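The front of this pipeline, the arrangement module 30 producing the T×N first data and the transformation module 40 applying the preset weight matrix, can be sketched as follows. The matrix shapes follow the description; the concrete weight values are assumptions.

```python
import numpy as np

def arrange_and_transform(samples, W):
    # Arrangement module 30: stack N persons' readings over T time points as
    # the first data X, with X[t, n] = target sensor data of person n at time t.
    X = np.asarray(samples, dtype=float)
    # Transformation module 40: dimension transform through the preset weight
    # matrix W (shape N x D), producing second data of shape T x D.
    return X @ W
```

The resulting T×D second data is what the time encoder (column direction) and personnel encoder (row direction) subsequently consume.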
In one embodiment, the encoding module 50 includes:
The transformation submodule is used for performing linear transformation on the second data through three different linear matrices to obtain three first intermediate matrices Q ∈ R^(A×d), K ∈ R^(A×d), and V ∈ R^(A×d), respectively; wherein Q, K, V respectively represent different first intermediate matrices, R denotes the set of real numbers, A represents the number of rows of each first intermediate matrix, and d represents the number of columns of each first intermediate matrix;
A first calculation sub-module for multiplying the first intermediate matrix Q by the transpose of the first intermediate matrix K to obtain a second intermediate matrix QK^T;
The second calculation sub-module is used for normalizing the second intermediate matrix and multiplying it by the first intermediate matrix V to obtain a target matrix;
and the feature extraction submodule is used for inputting the target matrix into a preset convolution layer to perform feature extraction to obtain the first feature.
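The four sub-modules of the encoding module 50 follow the familiar scaled dot-product attention pattern; a sketch under that reading is shown below. The division by sqrt(d) is assumed as the "normalization processing" of the second calculation sub-module (the patent does not specify it), and the final convolution layer of the feature extraction sub-module is omitted.

```python
import numpy as np

def row_softmax(M):
    # Row-wise numerically stable softmax.
    e = np.exp(M - M.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def encode(X2, Wq, Wk, Wv):
    # Transformation sub-module: three linear transforms of the second data.
    Q, K, V = X2 @ Wq, X2 @ Wk, X2 @ Wv          # each in R^(A x d)
    # First calculation sub-module: second intermediate matrix Q @ K^T.
    scores = Q @ K.T
    d = Q.shape[-1]
    # Second calculation sub-module: normalize (softmax with assumed 1/sqrt(d)
    # scaling) and multiply by V to obtain the target matrix.
    return row_softmax(scores / np.sqrt(d)) @ V
```

In the full pipeline the returned target matrix would be passed through the preset convolution layer to extract the first feature.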
In one embodiment, the fusion module 60 includes:
A preliminary fusion sub-module, configured to perform preliminary fusion of the first feature and the second feature according to the formula F1 = W·Concat(f1, f2) + b to obtain a preliminary feature; wherein F1 represents the preliminary feature, W represents a preset weight, f1 represents the first feature, f2 represents the second feature, b represents a preset offset, and Concat represents the operation of connecting the vectors along a dimension;
A third calculation sub-module, configured to calculate the weights w1 and w2 of the first feature and the second feature according to the softmax function; wherein w1 represents the weight of the first feature and w2 represents the weight of the second feature; and
A fourth calculation sub-module, configured to calculate the target feature according to the formula F2 = w1·f1 + w2·f2, wherein F2 represents the target feature.
In one embodiment, the LED atmosphere lamp control system with adaptive brightness and color further comprises:
The emotion value acquisition module is used for acquiring a specified number of sample data and the emotion value corresponding to each group of sample data;
the sample data marking module is used for marking the corresponding sample data by each emotion value to obtain marked target data;
the dividing module is used for dividing the target data into a training data set and a verification data set according to a preset proportion;
the training data input module is used for inputting the data in the training data set into a preset neural network model for supervised training so as to obtain a temporary model;
The verification module is used for verifying the temporary model by utilizing the verification data set to obtain a verification result and judging whether the verification result passes the verification;
and the marking module is used for marking the temporary model as an emotion analysis model if the verification result is that the verification is passed.
In one embodiment, the adjustment module 80 includes:
the parameter acquisition sub-module is used for acquiring the required light brightness and color based on the emotion value;
and the parameter setting module is used for setting the parameters of the LED atmosphere lamp according to the required light brightness and color, so that the LED atmosphere lamp displays the corresponding brightness and color.
The application has the beneficial effects that: sensor data of a plurality of persons are acquired and processed through dimension transformation, encoding, fusion, and the like to obtain an emotion score, and the brightness and color of the LED atmosphere lamp are set according to the emotion score. Because the emotion score is analyzed from the sensor data of a plurality of people over a period of time, the calculated emotion score is closer to the actual value, improving the user experience.
Referring to fig. 3, in an embodiment of the present application, there is further provided a computer device, which may be a server, whose internal structure may be as shown in fig. 3. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus, wherein the processor is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and the computer programs in the non-volatile storage medium. The database of the computer device is used to store various sensor data and the like. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements the LED atmosphere lamp control method with adaptive brightness and color according to any of the embodiments described above.
It will be appreciated by those skilled in the art that the architecture shown in fig. 3 is merely a block diagram of a portion of the architecture in connection with the present inventive arrangements and is not intended to limit the computer devices to which the present inventive arrangements are applicable.
The embodiment of the application also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, can implement the method for controlling the LED atmosphere lamp with adaptive brightness and color according to any one of the above embodiments.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium provided by the present application and used in the embodiments may include non-volatile and/or volatile memory. The non-volatile memory can include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. The volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration, and not limitation, RAM is available in many forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM), among others.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, system, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, system, article, or method. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, system, article, or method that comprises the element.
The embodiment of the application can acquire and process the related data based on artificial intelligence technology. Artificial Intelligence (AI) is the theory, method, technique, and application system that uses a digital computer or a digital-computer-controlled machine to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.

Claims (10)

1. The LED atmosphere lamp control method with the self-adaptive brightness and color is characterized by comprising the following steps of:
continuously acquiring sensor data of N personnel through a preset sensor; n is an integer greater than or equal to 2;
extracting T time point target sensor data closest to the current time point from the acquired sensor data;
arranging the target sensor data of the N persons in time order to obtain first data X; wherein the first data X is a T×N matrix whose element X_tn represents the target sensor data of the nth person at time order t;
performing dimension transformation on the first data through a preset weight matrix to obtain second data;
encoding the second data through a preset time encoder to obtain a first characteristic, and encoding the second data through a preset personnel encoder to obtain a second characteristic; wherein the time encoder is used for extracting the characteristics of the longitudinal data of the second data, and the personnel encoder is used for extracting the characteristics of the transverse data of the second data;
According to a preset data fusion method, carrying out fusion processing on the first feature and the second feature to obtain a target feature;
inputting the target characteristics into a preset emotion analysis model for processing, so as to obtain an emotion value output by the preset emotion analysis model;
and adjusting the brightness and the color of the LED atmosphere lamp according to the emotion value.
2. The method for controlling an LED atmosphere lamp with adaptive brightness and color according to claim 1, wherein the step of encoding the second data by a preset time encoder to obtain the first characteristic comprises:
performing linear transformation on the second data through three different linear matrices to obtain three first intermediate matrices Q ∈ R^(A×d), K ∈ R^(A×d), and V ∈ R^(A×d), respectively; wherein Q, K, V respectively represent different first intermediate matrices, R denotes the set of real numbers, A represents the number of rows of each first intermediate matrix, and d represents the number of columns of each first intermediate matrix;
multiplying the first intermediate matrix Q by the transpose of the first intermediate matrix K to obtain a second intermediate matrix QK^T;
Normalizing the second intermediate matrix and multiplying the second intermediate matrix with the first intermediate matrix V to obtain a target matrix;
and inputting the target matrix into a preset convolution layer to perform feature extraction to obtain the first feature.
3. The method for controlling an LED atmosphere lamp with adaptive brightness and color according to claim 1, wherein the step of fusing the first feature and the second feature according to a preset data fusion method to obtain a target feature comprises:
performing preliminary fusion of the first feature and the second feature according to the formula F1 = W·Concat(f1, f2) + b to obtain a preliminary feature; wherein F1 represents the preliminary feature, W represents a preset weight, f1 represents the first feature, f2 represents the second feature, b represents a preset offset, and Concat represents the operation of connecting the vectors along a dimension;
calculating the weights w1 and w2 of the first feature and the second feature according to the softmax function; wherein w1 represents the weight of the first feature and w2 represents the weight of the second feature; and
calculating the target feature according to the formula F2 = w1·f1 + w2·f2, wherein F2 represents the target feature.
4. The method for controlling an LED atmosphere lamp with adaptive brightness and color according to claim 1, wherein before the step of inputting the target feature into a preset emotion analysis model for processing, thereby obtaining an emotion value output by the preset emotion analysis model, the method further comprises:
Acquiring a designated number of sample data and emotion values corresponding to each group of sample data;
marking the corresponding sample data by each emotion value to obtain marked target data;
dividing the target data into a training data set and a verification data set according to a preset proportion;
inputting the data in the training data set into a preset neural network model for supervised training, so as to obtain a temporary model;
verifying the temporary model by using the verification data set to obtain a verification result, and judging whether the verification result passes the verification;
and if the verification result is that the verification is passed, marking the temporary model as an emotion analysis model.
5. The method for controlling an LED atmosphere lamp with adaptive brightness and color according to claim 1, wherein the step of adjusting the brightness and color of the LED atmosphere lamp according to the emotion value comprises:
acquiring the required light brightness and color based on the emotion value;
and setting parameters of the LED atmosphere lamp according to the required light brightness and color, so that the LED atmosphere lamp displays corresponding brightness and color.
6. An LED ambient light control system with adaptive brightness and color, comprising:
The acquisition module is used for continuously acquiring sensor data of N personnel through a preset sensor; n is an integer greater than or equal to 2;
the extraction module is used for extracting T time point target sensor data closest to the current time point from the acquired sensor data;
the arrangement module is used for arranging the target sensor data of the N persons in time order to obtain first data X; wherein the first data X is a T×N matrix whose element X_tn represents the target sensor data of the nth person at time order t;
the transformation module is used for carrying out dimension transformation on the first data through a preset weight matrix to obtain second data;
the encoding module is used for encoding the second data through a preset time encoder to obtain a first characteristic, and encoding the second data through a preset personnel encoder to obtain a second characteristic; wherein the time encoder is used for extracting the characteristics of the longitudinal data of the second data, and the personnel encoder is used for extracting the characteristics of the transverse data of the second data;
the fusion module is used for carrying out fusion processing on the first characteristic and the second characteristic according to a preset data fusion method to obtain a target characteristic;
The processing module is used for inputting the target characteristics into a preset emotion analysis model for processing, so as to obtain emotion values output by the preset emotion analysis model;
and the adjusting module is used for adjusting the brightness and the color of the LED atmosphere lamp according to the emotion numerical value.
7. The LED ambient light control system with adaptive brightness and color of claim 6, wherein the encoding module comprises:
the transformation submodule is used for performing linear transformation on the second data through three different linear matrices to obtain three first intermediate matrices Q ∈ R^(A×d), K ∈ R^(A×d), and V ∈ R^(A×d), respectively; wherein Q, K, V respectively represent different first intermediate matrices, R denotes the set of real numbers, A represents the number of rows of each first intermediate matrix, and d represents the number of columns of each first intermediate matrix;
a first calculation sub-module for multiplying the first intermediate matrix Q by the transpose of the first intermediate matrix K to obtain a second intermediate matrix QK^T;
The second calculation sub-module is used for carrying out normalization processing on the second intermediate matrix and multiplying the second intermediate matrix with the first intermediate matrix V to obtain a target matrix;
and the feature extraction submodule is used for inputting the target matrix into a preset convolution layer to perform feature extraction to obtain the first feature.
8. The LED ambient light control system with adaptive brightness and color of claim 6, wherein the fusion module comprises:
a preliminary fusion sub-module for performing preliminary fusion of the first feature and the second feature according to the formula F1 = W·Concat(f1, f2) + b to obtain a preliminary feature; wherein F1 represents the preliminary feature, W represents a preset weight, f1 represents the first feature, f2 represents the second feature, b represents a preset offset, and Concat represents the operation of connecting the vectors along a dimension;
a third calculation sub-module for calculating the weights w1 and w2 of the first feature and the second feature according to the softmax function; wherein w1 represents the weight of the first feature and w2 represents the weight of the second feature; and
a fourth calculation sub-module for calculating the target feature according to the formula F2 = w1·f1 + w2·f2, wherein F2 represents the target feature.
9. The LED ambient light control system with adaptive brightness and color of claim 6, further comprising:
the emotion value acquisition module is used for acquiring a specified number of sample data and the emotion value corresponding to each group of sample data;
The sample data marking module is used for marking the corresponding sample data by each emotion value to obtain marked target data;
the dividing module is used for dividing the target data into a training data set and a verification data set according to a preset proportion;
the training data input module is used for inputting the data in the training data set into a preset neural network model for supervised training so as to obtain a temporary model;
the verification module is used for verifying the temporary model by utilizing the verification data set to obtain a verification result and judging whether the verification result passes the verification;
and the marking module is used for marking the temporary model as an emotion analysis model if the verification result is that the verification is passed.
10. The LED ambient light control system with adaptive brightness and color of claim 6, wherein the adjustment module comprises:
the parameter acquisition sub-module is used for acquiring the required light brightness and color based on the emotion value;
and the parameter setting module is used for setting the parameters of the LED atmosphere lamp according to the required light brightness and color, so that the LED atmosphere lamp displays the corresponding brightness and color.
CN202311076681.5A 2023-08-25 2023-08-25 LED atmosphere lamp control method and system with self-adaptive brightness and color Active CN116782451B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311076681.5A CN116782451B (en) 2023-08-25 2023-08-25 LED atmosphere lamp control method and system with self-adaptive brightness and color

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311076681.5A CN116782451B (en) 2023-08-25 2023-08-25 LED atmosphere lamp control method and system with self-adaptive brightness and color

Publications (2)

Publication Number Publication Date
CN116782451A CN116782451A (en) 2023-09-19
CN116782451B true CN116782451B (en) 2023-11-14

Family

ID=88008498

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311076681.5A Active CN116782451B (en) 2023-08-25 2023-08-25 LED atmosphere lamp control method and system with self-adaptive brightness and color

Country Status (1)

Country Link
CN (1) CN116782451B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102325155A (en) * 2011-07-14 2012-01-18 福建冰原网络科技有限公司 Vital sign monitoring method and system based on wireless sensor network
CN115175404A (en) * 2022-09-07 2022-10-11 杭州雅观科技有限公司 One-stop automatic dimming method based on LED lamp
CN115294639A (en) * 2022-07-11 2022-11-04 惠州市慧昊光电有限公司 Color temperature adjustable lamp strip and control method thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10980096B2 (en) * 2019-01-11 2021-04-13 Lexi Devices, Inc. Learning a lighting preference based on a reaction type


Also Published As

Publication number Publication date
CN116782451A (en) 2023-09-19

Similar Documents

Publication Publication Date Title
Benitez et al. On the use of the Emotiv EPOC neuroheadset as a low cost alternative for EEG signal acquisition
WO2021047190A1 (en) Alarm method based on residual network, and apparatus, computer device and storage medium
CN110543823B (en) Pedestrian re-identification method and device based on residual error network and computer equipment
US11185990B2 (en) Method for learning and embodying human facial expression by robot
CN109726662A (en) Multi-class human posture recognition method based on convolution sum circulation combination neural net
JP2011039934A (en) Emotion estimation system and learning system using the same
KR102030131B1 (en) Continuous skin condition estimating method using infrared image
CN116782451B (en) LED atmosphere lamp control method and system with self-adaptive brightness and color
CN114782775A (en) Method and device for constructing classification model, computer equipment and storage medium
CN115083337A (en) LED display driving system and method
Zatarain-Cabada et al. Building a corpus and a local binary pattern recognizer for learning-centered emotions
Kumbhar et al. Anytime prediction as a model of human reaction time
CN117557941A (en) Video intelligent analysis system and method based on multi-mode data fusion
CN116645721B (en) Sitting posture identification method and system based on deep learning
CN113288144A (en) Emotion state display terminal and method based on emotion guidance
CN109711306A (en) A kind of method and apparatus obtaining facial characteristics based on depth convolutional neural networks
CN110135357A (en) A kind of happiness real-time detection method based on long-range remote sensing
Jaber et al. Elicitation hybrid spatial features from HD-sEMG signals for robust classification of gestures in real-time
CN106773703B (en) Online prediction technique is loaded based on fuzzy reasoning and the forging press of Taylor expansion
Granato et al. Goal-directed top-down control of perceptual representations: A computational model of the Wisconsin Card Sorting Test
CN116070861B (en) Course customization method and device based on dynamic learning target
Bharti et al. Detection and classification of plant diseases
Nath et al. Survey On Various Techniques Of Attendance Marking And Attention Detection
CN116528438B (en) Intelligent dimming method and device for lamp
Roscow et al. Discrimination-based perception for robot touch

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant