CN115793490B - Intelligent household energy-saving control method based on big data - Google Patents


Publication number
CN115793490B
CN115793490B (application CN202310065645.2A)
Authority
CN
China
Prior art keywords
pixel value
value
neural network
feature map
convolution kernel
Prior art date
Legal status
Active
Application number
CN202310065645.2A
Other languages
Chinese (zh)
Other versions
CN115793490A (en)
Inventor
芦峰
裴涛
Current Assignee
Nantong Yijiang Intelligent Technology Co ltd
Original Assignee
Nantong Yijiang Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Nantong Yijiang Intelligent Technology Co ltd filed Critical Nantong Yijiang Intelligent Technology Co ltd
Priority to CN202310065645.2A priority Critical patent/CN115793490B/en
Publication of CN115793490A publication Critical patent/CN115793490A/en
Application granted granted Critical
Publication of CN115793490B publication Critical patent/CN115793490B/en

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]


Abstract

The invention relates to the technical field of intelligent home control, and in particular to an intelligent home energy-saving control method based on big data. The method performs person detection on video data to obtain several equal-length video sequences in which a person is continuously present, and uses the labeled video sequences as samples to form a data set for a neural network. Each sample is input into the neural network to obtain a feature map data set; the pixel values at the same position across the feature maps of each data set form a pixel value sequence, from which a class interval is calculated. An information gain value for each pixel value sequence is calculated from the loss value of each sample, and the class interval and information gain value are combined into a confidence for each convolution kernel parameter. The confidences are used to adjust the update gradients of the convolution kernel parameters, and the trained neural network is then used to obtain the turn-off time interval of the lamp. The invention improves the training precision and efficiency of the neural network and makes lamp control more convenient.

Description

Intelligent household energy-saving control method based on big data
Technical Field
The invention relates to the technical field of intelligent home control, in particular to an intelligent home energy-saving control method based on big data.
Background
At present, household products closely tied to people's daily lives are becoming increasingly intelligent. For example, a lighting system turns a lamp on and off by detecting whether anyone is in the room. In real life, however, people often leave a room only briefly, which causes the lamp to be switched on and off frequently. Frequent switching not only shortens the lamp's service life but also consumes more power per switching operation than simply leaving the lamp on for a short time, so it is inconvenient to control the lamp solely by whether anyone is in the room.
In the prior art, to prevent frequent lamp switching, a conventional neural network analyzes the state of a person before they leave the room to judge whether they will return shortly, and the lamp is then controlled according to the judgment. During training, however, a conventional neural network treats all features in the video image with the same importance, even though only some features matter for predicting whether the person returns soon, so the network takes a roundabout path to extract the important features from the data. Moreover, a conventional neural network typically updates its neuron parameters by gradient descent, which updates the parameters inefficiently and imprecisely, particularly for the feature extraction layers at the front of the network, so the resulting lamp-switch control is not ideal.
Disclosure of Invention
In order to solve the technical problems, the invention aims to provide an intelligent household energy-saving control method based on big data, and the adopted technical scheme is as follows:
acquiring indoor video data, and performing person detection on each frame of image in the video data to obtain several equal-length video sequences in which a person is continuously present; labeling each video sequence, where the label categories comprise two labels; taking the labeled video sequences as samples of a neural network to form a data set;
respectively acquiring the feature map data set corresponding to each sample input into the neural network; forming a pixel value sequence from the z-th pixel value of the kth feature map in each feature map data set, where k and z are positive integers, and dividing the pixel value sequence into two subsequences according to the label category corresponding to each feature map; calculating the class interval of the z-th pixel value of the kth feature map from the absolute differences of pixel value pairs in the subsequences and in the pixel value sequence;
setting a plurality of loss levels, obtaining the loss value of each sample, and calculating the information entropy according to the loss level of each loss value; obtaining a plurality of feature levels from the pixel value sequence corresponding to the z-th pixel value of the kth feature map, and calculating the conditional entropy corresponding to that pixel value according to the occurrence probability of the different loss levels within each feature level; subtracting the conditional entropy from the information entropy to obtain the information gain value corresponding to the z-th pixel value of the kth feature map;
obtaining the product of the class interval and the information gain value corresponding to the z-th pixel value of the kth feature map, taking the sum of these products over all pixel values in the kth feature map as the confidence of the kth feature map, and taking the confidence of the kth feature map as the confidence of the kth convolution kernel parameter in the neural network;
obtaining the confidences of all the convolution kernel parameters in the neural network, and adjusting the update gradient of each convolution kernel parameter using its confidence to complete the training of the neural network; using the trained neural network to acquire a possibility index that the person returns to the room within a short time, and obtaining the turn-off time interval of the lamp from the possibility index.
Further, the method for labeling the label category of each video sequence includes:
and acquiring the time length of the person leaving in each video sequence, marking the corresponding video sequence as a person short-time no return label when the time length is greater than a time length threshold value, and marking the corresponding video sequence as a person short-time return label when the time length is less than or equal to the time length threshold value.
Further, the method for acquiring the category interval includes:
respectively calculating the absolute difference of every two pixel values in the current subsequence to obtain the first difference absolute value mean of the current subsequence; acquiring the first difference absolute value mean of each subsequence and summing them;
respectively calculating the absolute difference of every two pixel values in the whole pixel value sequence to obtain the second difference absolute value mean of the pixel value sequence; and taking the ratio of the second difference absolute value mean to the sum of the first difference absolute value means as the class interval of the z-th pixel value of the kth feature map.
Further, the method for obtaining a plurality of feature levels according to a pixel value sequence corresponding to a z-th pixel value of a k-th feature map includes:
acquiring the maximum and minimum pixel values in the pixel value sequence to obtain the pixel value range; based on the set number of feature levels, taking the ratio of the pixel value range to the number of feature levels as the level interval; and dividing the range between the minimum and maximum pixel values by the level interval into a number of feature levels equal to the set number.
Further, the method for adjusting the update gradient of each convolution kernel parameter by using the confidence level includes:
calculating the average confidence from the confidences of all convolution kernel parameters; obtaining the confidence difference between the confidence of the current convolution kernel parameter and the average confidence; obtaining the adjustment coefficient of the current convolution kernel parameter from this difference; and taking the product of the adjustment coefficient and the update gradient of the current convolution kernel parameter as the adjusted update gradient.
Further, the method for obtaining the turn-off time interval of the lamp according to the probability index includes:
and setting the maximum closing time interval of the lamp, and taking the product of the possibility index and the maximum closing time interval as the closing time interval.
The embodiments of the invention have at least the following beneficial effects: a neural network is trained with labeled samples, and during training a class interval is obtained from the pixel values of the feature map corresponding to each convolution kernel parameter, directly showing how well the corresponding feature distinguishes the label categories. The information entropy calculated from the sample loss values and the conditional entropy of the feature corresponding to each convolution kernel parameter yield an information gain value for the feature, reflecting its influence on the loss function. Combining the class interval and the information gain value gives the confidence of each convolution kernel parameter, which accurately expresses that parameter's feature extraction ability. The update gradient of each convolution kernel parameter is then adjusted according to its confidence, so that parameters with high accuracy seek optimal values nearby while parameters with low accuracy search a wider range. Training the neural network with the adjusted gradients improves training precision and efficiency and makes control of the lamp switch more convenient.
Drawings
To more clearly illustrate the embodiments of the invention and the technical solutions of the prior art, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating steps of a smart home energy saving control method based on big data according to an embodiment of the present invention.
Detailed Description
To further explain the technical means adopted by the invention and their effects, the following detailed description of the intelligent home energy-saving control method based on big data, its structure, features, and effects is given with reference to the accompanying drawings and preferred embodiments. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The specific scenario addressed by the invention is as follows: normally, an intelligently controlled lamp analyzes whether anyone is in the room to control its on/off state. In real life, however, people often leave briefly, for example to fetch an item, and quickly return. The lamp then turns off when the person leaves and on again when they return, and this frequent switching causes unnecessary power consumption and lamp damage. The invention therefore trains a neural network on video image data captured before a person leaves the room to judge whether they will return shortly; if so, the lamp need not be turned off. A conventional neural network, however, cannot analyze during training which data features matter most for the prediction, cannot be guided toward the more informative features, and therefore cannot guarantee the accuracy and efficiency of training.
The following describes a specific scheme of the intelligent home energy-saving control method based on big data in detail with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of steps of a smart home energy saving control method based on big data according to an embodiment of the present invention is shown, where the method includes the following steps:
s001, acquiring indoor video data, and performing person detection on each frame of image in the video data to obtain a plurality of sections of equilong video sequences with persons continuously; labeling label types of each video sequence, wherein the label types comprise two labels; and taking the video sequence with the label as a sample of the neural network to form a data set.
Specifically, a camera is installed indoors and used to collect indoor video data. Person detection is performed on each frame of image in the video data using a person detection network. The person detection network is a DNN with an Encoder-FC structure; its training data set is a labeled video image data set, and its loss function is the cross-entropy loss. Each frame of the video data is labeled as follows: whether a person is present in each frame is judged manually; an image containing a person is labeled as the person class, denoted [1, 0] in this embodiment, and an image containing no person is labeled as the unmanned class, denoted [0, 1] in this embodiment.
Person detection is performed with the trained DNN to obtain the frames of the video data in which a person is present and those in which no person is present. The video sequence covering the 5 min before the person leaves, i.e., an image set in which the person is continuously present, is then intercepted, giving several video sequences of continuous presence. Each video sequence is labeled as follows:
and acquiring the time length of the person leaving in each video sequence according to the initial time of the person leaving and the time of the person appearing again in the video sequences, marking the corresponding video sequence as a person short-time no return label when the time length is greater than a time length threshold value, and marking the corresponding video sequence as a person short-time return label when the time length is less than or equal to the time length threshold value.
Preferably, in the scheme, the time length threshold value is an empirical value of 300 seconds.
And taking the video sequence with the label as a sample of the neural network, and taking one video sequence as one sample to form a data set of the neural network.
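The labeling rule above can be sketched as follows. The one-hot encoding chosen for the two return labels is an assumption that mirrors the person/unmanned encoding used for frame labels; the text does not specify it:

```python
def label_sequence(leave_duration_s: float, threshold_s: float = 300.0) -> list:
    """Assign the label category of a video sequence from how long the person
    stays away; 300 s is the empirical threshold given in the text."""
    if leave_duration_s > threshold_s:
        return [1, 0]  # person does not return in a short time
    return [0, 1]      # person returns in a short time
```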
Step S002, respectively obtaining the feature map data set corresponding to each sample input into the neural network; forming a pixel value sequence from the z-th pixel value of the kth feature map in each feature map data set, where k and z are positive integers; dividing the pixel value sequence into two subsequences according to the label category corresponding to each feature map; and calculating the class interval of the z-th pixel value of the kth feature map from the absolute differences of pixel value pairs in the subsequences and in the pixel value sequence.
Specifically, the neural network used in this scheme comprises an input layer, convolution layers, a fully connected layer, and an output layer, and its training process is as follows: samples in the data set are input one by one at the input layer; each input sample passes through the convolution layers for convolution and pooling to obtain the intermediate feature map data set, with the convolution performed as grouped convolution. The feature map data set output by the convolution layers is fed into the fully connected layer for weighted calculation, the output layer produces a two-dimensional feature vector, a loss value is calculated from the output and the label of each sample, and each parameter of the neural network is continuously updated with the loss value by gradient descent.
It should be noted that the neural network is trained for two rounds according to the above method. Two rounds means: the samples in the data set are input one by one to train the network, and after that round finishes, the data set is input into the network again for a second round of training.
Because grouped convolution isolates features, the feature map corresponding to each sample can be obtained by processing the sample with a single convolution kernel parameter. The same pixel position across the feature map data sets of all samples describes the same feature, and because each sample presents that feature differently, the pixel values differ across samples. Samples of the same label category share similar features, so the spread of pixel values within one category is small while the spread across categories is large; the reliability of the convolution kernel parameter corresponding to each feature is therefore determined by calculating the class interval of the feature. Taking the z-th pixel value of the kth feature map in the feature map data set as an example, the class interval is acquired as follows:
obtaining samples in a data set during neural network training
Figure SMS_5
Characteristic map data set obtained by inputting into neural network>
Figure SMS_9
Wherein is present>
Figure SMS_13
Is a sample->
Figure SMS_3
The 1 st feature map, i.e. the set of feature map data +, which is input into the neural network>
Figure SMS_7
Characteristic map [ 1 ], [>
Figure SMS_11
Is a sample>
Figure SMS_15
The 2 nd characteristic diagram, i.e. the characteristic diagram data set->
Figure SMS_2
The 2 nd feature map of (4), based on the comparison result>
Figure SMS_6
Is a sample>
Figure SMS_10
The kth feature map, i.e. the set of feature map data->
Figure SMS_14
The kth feature map of (4), is selected>
Figure SMS_4
Is a sample>
Figure SMS_8
The N1 th feature map, i.e. the set of feature map data +, which is input into the neural network>
Figure SMS_12
The N1 th feature map in (1).
The feature map data set corresponding to each sample input into the neural network is acquired respectively, one feature map set per sample. Denote the z-th pixel value of the kth feature map of set F_i as p_i(k,z). The z-th pixel values of the kth feature maps of all the feature map data sets corresponding to all samples then form the pixel value sequence P(k,z) = {p_1(k,z), p_2(k,z), …, p_I(k,z)}, where p_1(k,z) is the z-th pixel value of the kth feature map of set F_1, p_2(k,z) is that of set F_2, p_I(k,z) is that of set F_I, and I denotes the number of samples.
The pixel value sequence is divided into two subsequences according to the class label of the sample corresponding to each feature map data set: one subsequence corresponds to the "person does not return shortly" label and the other to the "person returns shortly" label. The absolute difference of every two pixel values in the current subsequence is calculated to obtain the first difference absolute value mean of that subsequence; the first difference absolute value mean of each subsequence is acquired and the two are summed. The absolute difference of every two pixel values in the whole pixel value sequence is calculated to obtain the second difference absolute value mean of the pixel value sequence. The ratio of the second difference absolute value mean to the sum of the first difference absolute value means is taken as the class interval of the z-th pixel value of the kth feature map.
As an example, the class interval D(k,z) of the z-th pixel value of the kth feature map is calculated as:

D(k,z) = G / (G1 + G2)

where G1 is the first difference absolute value mean of one subsequence, G2 is the first difference absolute value mean of the other subsequence, and G is the second difference absolute value mean of the pixel value sequence.
The class interval formula reflects how the same feature takes values across the two label categories: the larger the class interval, the easier the two label categories are to distinguish. When the feature distinguishes the two categories well, samples of the same label category take similar values while samples of different categories differ greatly; that is, the smaller the first difference absolute value means, the larger the class interval, and the larger the second difference absolute value mean, the larger the class interval.
And acquiring the class interval of each pixel value of each feature map in the feature map data set based on the acquisition process of the class interval of the z-th pixel value of the k-th feature map in the feature map data set.
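The class-interval computation for one pixel position can be sketched as follows. The orientation of the ratio (overall spread divided by the summed within-class spreads) is inferred from the monotonicity discussion in the text rather than stated as an explicit formula, and the small epsilon guarding against division by zero is an added safeguard:

```python
from itertools import combinations

def mean_abs_diff(values):
    """Mean absolute difference over all unordered pairs of pixel values."""
    pairs = list(combinations(values, 2))
    if not pairs:
        return 0.0
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

def class_interval(returns_soon, no_return):
    """Class interval of one pixel position: the second difference absolute
    value mean (overall spread) divided by the sum of the first difference
    absolute value means (within-class spreads), so that tightly clustered
    classes that sit far apart score high."""
    within = mean_abs_diff(returns_soon) + mean_abs_diff(no_return)
    overall = mean_abs_diff(list(returns_soon) + list(no_return))
    return overall / (within + 1e-12)
```

A well-separated pixel position, e.g. `class_interval([1.0, 1.1], [5.0, 5.1])`, yields a far larger interval than an overlapping one such as `class_interval([1.0, 5.0], [1.1, 5.1])`.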
S003, setting a plurality of loss levels, acquiring the loss value of each sample, and calculating the information entropy according to the loss level of each loss value; obtaining a plurality of feature levels from the pixel value sequence corresponding to the z-th pixel value of the kth feature map, and calculating the conditional entropy corresponding to that pixel value according to the occurrence probability of the different loss levels within each feature level; and subtracting the conditional entropy from the information entropy to obtain the information gain value corresponding to the z-th pixel value of the kth feature map.
Specifically, because the class intervals of the features differ, a large class interval alone does not establish that a feature predicts and classifies accurately for the neural network; the contribution of each feature to accurate classification must therefore be analyzed to determine its contribution weight.
A plurality of loss levels are set with a loss level interval of 0.1: loss values in (0, 0.1] are assigned loss level 1, values in (0.1, 0.2] loss level 2, …, and values in (0.9, 1] loss level 10. The loss value produced when each sample is input into the neural network is acquired to form the loss value sequence S = {s_1, s_2, …, s_i, …, s_I}, where s_1 is the loss value of the 1st sample, s_2 that of the 2nd sample, s_i that of the ith sample, and s_I that of the I-th sample. According to the loss level of each loss value in the sequence, the occurrence probability of each loss level is counted and the information entropy E of the loss value sequence is calculated; the larger the information entropy, the lower the purity of the loss values. The calculation of information entropy is a known technique and is not repeated here.
Similarly, taking the z-th pixel value of the kth feature map in the feature map data set as an example, the maximum pixel value p_max and the minimum pixel value p_min in the pixel value sequence are acquired to obtain the pixel value range. Based on the set number of feature levels, 10 in this scheme, the ratio d = (p_max - p_min)/10 is taken as the level interval. The pixel value range between the minimum and maximum pixel values is then divided by the level interval into a number of feature levels equal to the set number: values in [p_min, p_min + d) are feature level 1, values in [p_min + d, p_min + 2d) feature level 2, …, and values in [p_min + 9d, p_max] feature level 10. The probability P(i|j) that the ith loss level occurs within the jth feature level of the pixel value sequence of the z-th pixel value of the kth feature map is counted, and the conditional entropy E' corresponding to the z-th pixel value of the kth feature map is calculated; the conditional entropy represents the uncertainty of the loss value given the z-th pixel value of the kth feature map. The calculation of conditional entropy is a known technique and is not repeated here.
The information entropy E minus the conditional entropy E' is taken as the information gain value R(k,z) corresponding to the z-th pixel value of the kth feature map. The information gain value represents the influence of the corresponding feature on the overall loss function: the larger the information gain value, the more the value of the z-th pixel value of the kth feature map affects the loss value of the neural network, and thus the more important the feature is to the network.
The information gain value of each pixel value of each feature map in the feature map data set is acquired by the above method.
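The entropy bookkeeping of step S003 can be sketched as follows. Equal-width binning between the observed minimum and maximum is used for both loss values and pixel values here, which approximates the fixed 0.1-wide loss bins of the text:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (base 2) of a discrete level sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def to_levels(values, num_levels=10):
    """Bin values into num_levels equal-width levels numbered 1..num_levels."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / num_levels or 1.0  # guard against a constant sequence
    return [min(int((v - lo) / width), num_levels - 1) + 1 for v in values]

def information_gain(pixel_values, loss_values, num_levels=10):
    """Information entropy of the loss levels minus the conditional entropy
    of the loss levels given the pixel's feature level (step S003)."""
    loss_levels = to_levels(loss_values, num_levels)
    feat_levels = to_levels(pixel_values, num_levels)
    n = len(loss_levels)
    cond = 0.0
    for level in set(feat_levels):
        subset = [l for l, f in zip(loss_levels, feat_levels) if f == level]
        cond += (len(subset) / n) * entropy(subset)
    return entropy(loss_levels) - cond
```

When the pixel value fully determines the loss level, the conditional entropy vanishes and the gain equals the information entropy of the loss levels.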
Step S004, obtaining the product of the class interval and the information gain value corresponding to the z-th pixel value of the kth feature map; taking the sum of these products over all pixel values in the kth feature map as the confidence of the kth feature map; and taking the confidence of the kth feature map as the confidence of the kth convolution kernel parameter in the neural network.
Specifically, according to step S002 and step S003, the class interval D(k,z) and the information gain value R(k,z) corresponding to the z-th pixel value of the kth feature map in the feature map data set are obtained, and the product D(k,z) · R(k,z) is acquired.
It should be noted that the larger the class interval is, the better the label class can be distinguished by the corresponding features; the larger the information gain value, the greater the degree of influence of the corresponding feature on the loss function.
The product corresponding to each pixel value in the kth feature map is acquired, and the sum of all the products is taken as the confidence of the kth feature map in the feature map data set. Because each feature map is obtained by processing the input data with the corresponding convolution kernel parameter, the greater the confidence of the feature map, the greater the confidence of the corresponding convolution kernel parameter; the confidence of the kth feature map is therefore taken as the confidence of the kth convolution kernel parameter in the neural network.
Similarly, the confidence of every feature map in the feature map data set is obtained, and thereby the confidence of every convolution kernel parameter in the neural network.
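Under the same assumed notation, the confidence of a feature map (and hence of its convolution kernel parameter) is simply the sum over pixel positions of class interval times information gain; a minimal sketch with an illustrative function name:

```python
def feature_map_confidence(class_intervals, info_gains):
    """Confidence of the kth feature map: sum over its pixel positions z of
    class_interval[z] * info_gain[z]. Per the method, this value is reused
    as the confidence of the kth convolution kernel parameter."""
    return sum(d * g for d, g in zip(class_intervals, info_gains))
```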
Step S005, obtaining the confidences of all the convolution kernel parameters in the neural network, and adjusting the update gradient of each convolution kernel parameter using these confidences to complete the training of the neural network; acquiring, with the trained neural network, a likelihood index that the person returns to the room within a short time, and obtaining the turn-off time interval of the lamp according to the likelihood index.
Specifically, the confidences of all the convolution kernel parameters obtained in step S004 are summarized, and training of the neural network is guided by them: when the confidence of a convolution kernel parameter is high, the features it extracts are good and better parameters are likely to exist in its neighborhood; when its confidence is low, better nearby parameters are unlikely. The confidence of each convolution kernel parameter is therefore used to adjust its update gradient, and the adjusted update gradients guide the training of the neural network.
The update gradient is adjusted as follows: the average confidence is calculated from the confidences of all the convolution kernel parameters; the difference between the confidence of the current convolution kernel parameter and the average confidence is obtained; an adjustment coefficient for the current convolution kernel parameter is derived from this difference; and the product of the adjustment coefficient and the current update gradient is taken as the adjusted update gradient.
As an example, the adjustment formula for the update gradient is:

G'_k = a_k · G_k, with B̄ = (B_1 + B_2 + … + B_m) / m

wherein G'_k is the adjusted update gradient of the kth convolution kernel parameter; B_k is the confidence of the kth convolution kernel parameter; G_k is the update gradient of the kth convolution kernel parameter; m is the total number of convolution kernel parameters in the neural network; B̄ is the average confidence; and a_k is the adjustment coefficient of the kth convolution kernel parameter, obtained from the difference between B_k and B̄.
The greater the confidence of a convolution kernel parameter, the greater its update gradient should be; a greater confidence therefore yields a greater adjustment coefficient, and hence a greater adjusted update gradient for the corresponding convolution kernel parameter.
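The gradient adjustment can be sketched as follows. The patent supplies the exact formula only as an image, so the additive coefficient form `1 + (B_k - B_mean)` below is an assumption consistent with the stated behavior (above-average confidence enlarges the gradient, below-average confidence shrinks it):

```python
def adjust_gradients(confidences, gradients):
    """Scale each convolution kernel's update gradient by an adjustment
    coefficient derived from the difference between its confidence and the
    average confidence. Coefficient form 1 + (B_k - B_mean) is assumed."""
    b_mean = sum(confidences) / len(confidences)
    return [(1 + (b - b_mean)) * g for b, g in zip(confidences, gradients)]
```

With this form a kernel at exactly average confidence keeps its gradient unchanged, matching the interpretation that only deviations from the average rescale the update.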
The update gradients of all the convolution kernel parameters in the neural network are adjusted with the above formula. At the end of each training round, whether the loss function of the neural network meets the convergence condition is judged: if it does, training stops; if not, the update gradient of each convolution kernel parameter is adjusted and the next round of training is performed with the adjusted gradients, until the loss function meets the convergence condition and training of the neural network is complete.
A video sequence covering the 5 min before a person leaves the room is collected in real time and input into the trained neural network, which outputs a two-dimensional feature vector. One value of the vector represents the likelihood index that the person returns to the room within a short time, and the other represents the likelihood index that the person does not; the likelihood index that the person returns within a short time is obtained from this two-dimensional feature vector.
The maximum turn-off time interval of the lamp is set, and the product of the likelihood index and the maximum turn-off time interval is taken as the turn-off time interval, i.e., the actual turn-off time interval of the lamp in the real-time acquisition scene. If the person does not return within the turn-off time interval, the lamp is turned off once the interval elapses; if the person returns within the interval, the lamp need not be turned off.
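The turn-off logic then reduces to one multiplication; a sketch using the patent's preferred 5 min (300 s) maximum, with an illustrative function name:

```python
def off_time_interval(p_return, max_interval_s=300.0):
    """Actual lamp turn-off delay in seconds: the likelihood index that the
    person returns shortly, times the maximum turn-off interval (5 min)."""
    return p_return * max_interval_s
```

A person judged very likely to return (index near 1) keeps the lamp on for nearly the full 5 min; one judged unlikely to return (index near 0) has it switched off almost immediately.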
Preferably, the maximum turn-off time interval of the lamp is set to 5 min in this scheme.
In summary, the embodiment of the present invention provides an intelligent home energy-saving control method based on big data. The method performs person detection on video data to obtain multiple equal-length video sequences containing continuous persons, and uses the labeled video sequences as samples to form a data set for a neural network. From the feature map data set obtained by inputting each sample into the neural network, pixel values at the same position of each feature map are formed into pixel value sequences, from which the class intervals are calculated; the information gain value of each pixel value sequence is calculated from the loss value of each sample; the confidence of each convolution kernel parameter is obtained by combining the class interval and the information gain value; and the turn-off time interval of the lamp is obtained with the trained neural network, whose convolution kernel update gradients are adjusted by the confidences. The invention ensures the training precision and efficiency of the neural network and makes lamp control more convenient.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that are within the spirit of the present invention are intended to be included therein.

Claims (6)

1. The intelligent household energy-saving control method based on big data is characterized by comprising the following steps:
acquiring indoor video data, and carrying out person detection on each frame of image in the video data to obtain a plurality of equal-length video sequences containing continuous persons; labeling each video sequence with a label category, the label categories comprising two labels; and taking the labeled video sequences as samples of a neural network to form a data set;
respectively obtaining a feature map data set corresponding to each sample in the data set input into the neural network; forming a pixel value sequence from the z-th pixel value of the k-th feature map of each feature map data set, wherein k and z are positive integers; dividing the pixel value sequence into two subsequences according to the label category corresponding to the feature map; and calculating the class interval of the z-th pixel value of the k-th feature map according to the absolute differences of any two pixel values in the subsequences and in the pixel value sequence;
setting a plurality of loss levels, obtaining the loss value of each sample, and calculating the information entropy according to the loss level of each loss value; obtaining a plurality of feature levels according to the pixel value sequence corresponding to the z-th pixel value of the k-th feature map, and calculating the conditional entropy corresponding to the z-th pixel value of the k-th feature map according to the probability of occurrence of different loss levels under each feature level; and subtracting the conditional entropy from the information entropy to obtain the information gain value corresponding to the z-th pixel value of the k-th feature map;
obtaining the product of the class interval corresponding to the z-th pixel value of the kth feature map and the information gain value, taking the sum of the products corresponding to all the pixel values in the kth feature map as the confidence of the kth feature map, and taking the confidence of the kth feature map as the confidence of the kth convolution kernel parameter in the neural network;
obtaining the confidences of all the convolution kernel parameters in the neural network, and adjusting the update gradient of each convolution kernel parameter using the confidences to complete the training of the neural network; and acquiring, with the trained neural network, a likelihood index that the person returns to the room within a short time, and obtaining the turn-off time interval of the lamp according to the likelihood index.
2. The intelligent household energy-saving control method based on big data according to claim 1, wherein the method for labeling each video sequence with label category comprises:
acquiring the duration for which the person is absent in each video sequence; when the duration is greater than a duration threshold, marking the corresponding video sequence with the label indicating the person will not return shortly; and when the duration is less than or equal to the duration threshold, marking the corresponding video sequence with the label indicating the person will return shortly.
3. The intelligent household energy-saving control method based on big data according to claim 1, wherein the method for acquiring the category interval comprises:
respectively calculating the absolute difference of any two pixel values in the current subsequence to obtain the first mean absolute difference of the current subsequence, and acquiring the first mean absolute difference of each subsequence to obtain the sum of the first mean absolute differences;
respectively calculating the absolute difference of any two pixel values in the pixel value sequence to obtain the second mean absolute difference of the pixel value sequence; and taking the ratio of the sum of the first mean absolute differences to the second mean absolute difference as the class interval of the z-th pixel value of the kth feature map.
4. The intelligent household energy-saving control method based on big data according to claim 1, wherein the method for obtaining a plurality of feature levels according to the pixel value sequence corresponding to the z-th pixel value of the k-th feature map comprises:
acquiring the maximum pixel value and the minimum pixel value in the pixel value sequence to obtain the pixel value range; taking the ratio of the pixel value range to the set number of feature levels as the level interval; and dividing the pixel value range between the maximum pixel value and the minimum pixel value by the level interval into a number of feature levels equal to the set number.
5. The intelligent household energy-saving control method based on big data according to claim 1, wherein the method for adjusting the update gradient of each convolution kernel parameter by using the confidence level comprises the following steps:
calculating the average confidence according to the confidence of each convolution kernel parameter; obtaining the difference between the confidence of the current convolution kernel parameter and the average confidence; obtaining the adjustment coefficient of the current convolution kernel parameter according to the difference; and taking the product of the adjustment coefficient and the update gradient of the current convolution kernel parameter as the adjusted update gradient.
6. The intelligent household energy-saving control method based on big data according to claim 1, wherein the method for obtaining the turn-off time interval of the lamp according to the likelihood index comprises:
setting the maximum turn-off time interval of the lamp, and taking the product of the likelihood index and the maximum turn-off time interval as the turn-off time interval.
CN202310065645.2A 2023-02-06 2023-02-06 Intelligent household energy-saving control method based on big data Active CN115793490B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310065645.2A CN115793490B (en) 2023-02-06 2023-02-06 Intelligent household energy-saving control method based on big data


Publications (2)

Publication Number Publication Date
CN115793490A CN115793490A (en) 2023-03-14
CN115793490B true CN115793490B (en) 2023-04-11

Family

ID=85429959


Country Status (1)

Country Link
CN (1) CN115793490B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110363086A (en) * 2019-06-11 2019-10-22 中国科学院自动化研究所南京人工智能芯片创新研究院 Diagram data recognition methods, device, computer equipment and storage medium
CN110610143A (en) * 2019-08-27 2019-12-24 汇纳科技股份有限公司 Crowd counting network method, system, medium and terminal for multi-task joint training
CN111382191A (en) * 2020-03-26 2020-07-07 吕梁学院 Machine learning identification method based on deep learning
CN111814661A (en) * 2020-07-07 2020-10-23 西安电子科技大学 Human behavior identification method based on residual error-recurrent neural network
WO2021087985A1 (en) * 2019-11-08 2021-05-14 深圳市欢太科技有限公司 Model training method and apparatus, storage medium, and electronic device
CN114037838A (en) * 2021-10-20 2022-02-11 北京旷视科技有限公司 Neural network training method, electronic device and computer program product
CN115049814A (en) * 2022-08-15 2022-09-13 聊城市飓风工业设计有限公司 Intelligent eye protection lamp adjusting method adopting neural network model
CN115068940A (en) * 2021-03-10 2022-09-20 腾讯科技(深圳)有限公司 Control method of virtual object in virtual scene, computer device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10546242B2 (en) * 2017-03-03 2020-01-28 General Electric Company Image analysis neural network systems


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Ding Lin. Design of a home indoor environment monitoring system based on a neural network algorithm. CNKI. 2019, full text. *
Lu Yi. Research and development of face detection and recognition algorithms based on lightweight convolutional neural networks. CNKI. 2018, full text. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant