CN115357645A - Pyramid weighting-based online sampling algorithm for time sequence data of energy management system - Google Patents
- Publication number
- CN115357645A (application CN202211079816.9A)
- Authority
- CN
- China
- Prior art keywords
- data
- pyramid
- weighting
- sequence
- variance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS; G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/2474—Sequence data queries, e.g. querying versioned data
- G06F16/24564—Applying rules; Deductive queries
- G06F16/248—Presentation of query results
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
According to the pyramid weighting-based online sampling algorithm for energy management system time series data, a short segment of time series data is sampled online through a sliding window, the weights of the acquired data are then adjusted according to a pyramid model architecture based on variance fluctuation, and the data with large variance fluctuation are sampled. This sliding-window online sampling mode effectively reduces the data storage requirement. In addition, weighting the acquired data according to the variance-fluctuation pyramid model architecture greatly reduces the amount of variance computation between data during sampling while effectively preserving sampling accuracy.
Description
Technical Field
The invention belongs to the technical field of data communication, and in particular relates to a pyramid weighting-based online sampling algorithm for energy management system time series data.
Background
In embedded development, data must be communicated between single-chip microcomputer chips, which involves transmitting large amounts of time series data. When displaying on an upper computer, overly fine-grained data are unnecessary; only the data characteristics need to be represented. A sampling method is therefore needed to reduce the data transmission pressure.
In the prior art, data are generally sampled as follows. Patent application CN202110322088 discloses a time series data processing method, a time series data processing device, and a computer-readable storage medium. The disclosed processing method comprises: obtaining the number of data points of first time series data to be rendered within a set duration; in response to the number of data points exceeding the maximum number renderable within the set duration, down-sampling the first time series data with a down-sampling method matched to the number of data points to obtain second time series data; and drawing a time series diagram from the second time series data.
It can be seen that, in the prior art, data are mostly sampled at fixed intervals. However, because signals do not change regularly, fixed-interval sampling wastes storage during relatively flat periods and misses critical signals during periods of rapid change, so this sampling strategy is not optimal. Moreover, in the prior art, down-sampling is performed only when the number of data points in a time period exceeds the maximum renderable number for the set duration, and not otherwise. Sampling each time period in isolation may be accurate for that period, but the relations among different time periods are not considered for the data as a whole, so the overall sampling effect is not accurate enough.
Solutions to the above problems are urgently needed.
Disclosure of Invention
The present invention provides a pyramid weighting-based online sampling algorithm for time series data that overcomes the above disadvantages of the prior art.
The technical scheme adopted by the invention to solve the technical problem is as follows: an online pyramid weighting-based time series data sampling algorithm, comprising:
S1, storing the data signals acquired online based on a sliding window into an original binary data set, wherein each element comprises a data value and its corresponding weight;
S2, traversing the original binary data set and generating a first sub-data sequence for the i-th data based on a pyramid model architecture;
S3, generating a first variance based on the first sub-data sequence;
S4, updating the data weights at points i and i+2^k−1 in the first sub-data sequence based on the first variance;
S5, integrating the updated first sub-data sequences to generate an updated binary data group;
S6, comparing the weight value of the first element in the updated binary data group with a preset weight threshold;
S7, updating the weight threshold to the weight value corresponding to the first data value, deleting the first element, and generating a pruned binary data group;
S8, generating a second sub-data sequence and a second variance based on the newly acquired data signal and the pruned binary data group;
S9, updating the weight values of the second sub-data sequence based on the second variance to generate an updated second sub-data sequence;
S10, integrating the pruned binary data group and the updated second sub-data sequence to generate an updated binary data group;
S11, returning to step S6 until no new data signal is generated.
Further, step S2 includes: S21, according to the pyramid model architecture, starting from the i-th data, taking 2^k consecutive data to generate a first sub-data sequence, where k is a positive integer.
Further, step S3 includes: S31, calculating a first variance V_{i,k} based on the data in the first sub-data sequence, where i ∈ [1, n], k ∈ N*.
Further, step S4 includes: S41, increasing the data weights at points i and i+2^k−1 in the first sub-data sequence by an increment derived from the first variance.
Further, step S5 includes: S51, integrating the plurality of updated first sub-data sequences, wherein for the same data value the element with the larger weight is taken as the element of the updated binary data group.
Further, step S6 includes: S61, when the weight value of the first element is not smaller than the preset weight threshold, outputting the data in that element as a sampling point; and S62, when the weight value of the first element is smaller than the preset weight threshold, outputting no sampling point.
Further, step S7 includes: S71, deleting the first element of the updated binary data group, shifting the remaining elements forward by one position, and freeing the last buffer position.
Further, step S8 includes: S81, storing the newly acquired signal data in the last buffer position of the pruned binary data group; S82, taking 2^k consecutive data backwards from the last data to generate a second sub-data sequence; S83, calculating the variance of the data contained in the second sub-data sequence to generate a second variance V'.
The present invention also provides a computer-readable storage medium, having one or more instructions stored therein, for causing the computer to execute the pyramid weighting-based time-series data online sampling algorithm described above.
The present invention also provides an electronic device, comprising: a memory and a processor; at least one program instruction is stored in the memory; the processor loads and executes the at least one program instruction to realize the pyramid weighting-based time series data online sampling algorithm.
The beneficial effects of the invention are as follows: according to the pyramid weighting-based time series data online sampling algorithm, a short segment of time series data is sampled online through a sliding window, the weights of the acquired data are then adjusted according to a pyramid model architecture based on variance fluctuation, and the data with large variance fluctuation are sampled, so that the sliding-window online sampling mode effectively reduces the data storage requirement. Weighting the acquired data according to the variance-fluctuation pyramid model architecture greatly reduces the amount of variance computation between data during sampling while effectively preserving sampling accuracy.
Drawings
The invention is further illustrated by the following examples in conjunction with the drawings.
FIG. 1 is a schematic diagram illustrating steps of an energy management system time series data online sampling algorithm based on pyramid weighting according to an embodiment of the present invention;
fig. 2 is a partial block diagram of an electronic device provided by an embodiment of the invention.
Detailed Description
Before discussing exemplary embodiments in greater detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but could have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The present invention will now be described in detail with reference to the accompanying drawings. This figure is a simplified schematic diagram, and merely illustrates the basic structure of the present invention in a schematic manner, and therefore it shows only the constitution related to the present invention.
Example 1
Referring to fig. 1, a schematic step diagram of the pyramid weighting-based online sampling algorithm for energy management system time series data according to the present invention is shown. The algorithm is applied to an energy management system; the time series data are sampled online based on variance-fluctuation pyramid weighting, which reduces both the data storage amount and the data computation amount.
The pyramid weighting-based online sampling algorithm for the time series data of the energy management system comprises the following steps of:
step S1: and storing the data signal acquired on line based on the sliding window as an original binary data set, wherein each element comprises data and a weight corresponding to the data.
As an example, the amount of time series data to be acquired, i.e. the number n of acquired data signals, may be determined by presetting the size of the sliding window. The acquired data signals are stored in a binary data group comprising the acquired data values and their corresponding weights, with the initial weight value set to 0.
Specifically, n data signals are continuously acquired through the sliding window, forming tuples (d_i, p_i) that are stored in a buffer, where d_i is the i-th data value and p_i its corresponding weight, initialized to 0. Online sampling can thus be performed on a short segment of time series data through the sliding window; there is no need to store all data before sampling, which reduces storage pressure.
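For illustration only (not part of the patent text), the sliding-window acquisition of step S1 can be sketched in Python; the function name and the shape of the input stream are assumptions:

```python
from collections import deque

def fill_window(signal_stream, n):
    """Collect the first n samples from an iterable of data values,
    storing each as a (data, weight) tuple with the weight set to 0."""
    buffer = deque(maxlen=n)
    for d_i in signal_stream:
        buffer.append((d_i, 0.0))  # p_i initialized to 0
        if len(buffer) == n:
            break
    return list(buffer)

# A window of n = 20 data signals, as in the worked example below.
window = fill_window(iter(range(1, 21)), 20)
```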
Step S2: and traversing the original binary data group, and generating a first subdata sequence for the ith data based on a pyramid model architecture.
As an example, step S2 includes: S21, according to the pyramid model architecture, starting from the i-th data, taking 2^k consecutive data to generate a first sub-data sequence, i.e. from i to i+2^k−1 without exceeding the buffer, i.e. i+2^k−1 ≤ n, where k is a positive integer.
Specifically, when n is 20, i = 1, 2, 3, …
As shown in Table 1 below, when i=1, k=1, taking 2^1 consecutive data starting from the first data generates the first sub-data sequence {(1, 0), (2, 0)}; when i=1, k=2, taking 2^2 consecutive data starting from the first data generates {(1, 0), (2, 0), (3, 0), (4, 0)}; by analogy, when i=16, k=1, taking 2^1 consecutive data starting from the 16th data generates {(16, 0), (17, 0)};
when i=16, k=2, taking 2^2 consecutive data starting from the 16th data generates {(16, 0), (17, 0), (18, 0), (19, 0)}; when i=19, k=1, taking 2^1 consecutive data generates {(19, 0), (20, 0)}; when i=19, k=2, since i+2^k−1=22 exceeds 20, only k=1 is taken for i=19. Similarly, when i=20, k=1, since i+2^k−1=21 exceeds 20, no data is fetched for i=20.
TABLE 1
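The enumeration of sub-data sequences just described can be sketched as follows (illustrative Python; `pyramid_spans` is a hypothetical helper name, not from the patent):

```python
# For each 1-based start index i and each positive integer k with
# i + 2**k - 1 <= n, the first sub-data sequence covers positions
# i .. i + 2**k - 1, matching the worked example for n = 20.
def pyramid_spans(n):
    spans = []
    for i in range(1, n + 1):
        k = 1
        while i + 2**k - 1 <= n:
            spans.append((i, k, i + 2**k - 1))
            k += 1
    return spans

spans = pyramid_spans(20)
# e.g. i=19 admits only k=1 (positions 19..20); i=20 admits no k.
```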
Step S3: generating a first variance based on the first sub-data sequence.
As an example, step S3 includes: S31, calculating a first variance V_{i,k} based on the data in the first sub-data sequence, where i ∈ [1, n], k ∈ N*.
Specifically, based on Table 1 above, for the first sub-data sequence {(1, 0), (2, 0)}, the first variance is denoted V_{1,1}, where the variance formula is:

V_{i,k} = [(x_1 − m)^2 + (x_2 − m)^2 + … + (x_n − m)^2] / n

where x_1, …, x_n denote the data values (i.e., data values 1 and 2 for {(1, 0), (2, 0)} above) and m is the average of x_1, …, x_n.
Similarly, the first variance corresponding to the first sub-data sequence {(1, 0), (2, 0), (3, 0), (4, 0)} in Table 1 above is V_{1,2}; the first variance corresponding to {(19, 0), (20, 0)} is V_{19,1}. The first variances of the remaining first sub-data sequences follow analogously and are not repeated here. By combining the local variance (when k=1) and the global variance (when k=4), this method improves the accuracy of data sampling: if only the local variance were considered and the global variance ignored, local fluctuations too small to matter individually could be overlooked even though they affect the whole. For example, the difference between data 10000 and data 10001 is small enough to ignore, but if each datum in a long run exceeds the previous one by such a negligible amount, the first and last data can still differ greatly. Considering local and global variance fluctuation together effectively avoids this phenomenon and improves the accuracy of the sampled data.
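The variance formula above is the ordinary population variance and can be computed directly (illustrative code, not part of the patent):

```python
# Population variance, matching V = ((x1-m)^2 + ... + (xn-m)^2) / n above.
def variance(values):
    m = sum(values) / len(values)          # m is the mean of the values
    return sum((x - m) ** 2 for x in values) / len(values)

v_11 = variance([1, 2])        # V_{1,1} for the sub-sequence {(1,0),(2,0)}
v_12 = variance([1, 2, 3, 4])  # V_{1,2} for {(1,0),(2,0),(3,0),(4,0)}
```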
Step S4: updating the data weights at points i and i+2^k−1 in the first sub-data sequence based on the first variance.
As an example, step S4 includes: S41, increasing the data weights at points i and i+2^k−1 in the first sub-data sequence by an increment derived from the first variance.
Specifically, taking Table 1 above as an example, when the first sub-data sequence is {(1, 0), (2, 0)}, the corresponding first variance is V_{1,1}, and the weights of the data at the i-th and (i+2^k−1)-th points are increased accordingly; i.e., the weight values at points 1 and 2 are each increased by V_{1,1}×e^{−1}, so the first sub-data sequence {(1, 0), (2, 0)} is updated to {(1, V_{1,1}×e^{−1}), (2, V_{1,1}×e^{−1})}.
Similarly, when the first sub-data sequence is {(1, 0), (2, 0), (3, 0), (4, 0)}, the updated first sub-data sequence is {(1, V_{1,1}×e^{−1}+V_{1,2}×e^{−4}), (2, V_{1,1}×e^{−1}), (3, 0), (4, V_{1,2}×e^{−4})}. That is, each acquired data value takes part in a corresponding pyramid model architecture; following this architecture avoids repeated calculation over the same data and effectively reduces the amount of computation. Weighting each data value within the pyramid model architecture based on variance also largely ensures the accuracy of the sampled data, i.e. that the sampled data are the most representative. Compared with prior art that must compute the variance of every pair of acquired data to ensure the sampled data have the largest variance fluctuation, the present method only needs to compute variance fluctuations between a few data.
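A sketch of the step-S4 weight update. The patent's general increment formula did not survive extraction; from the worked values V_{1,1}×e^{−1} (k=1) and V_{1,2}×e^{−4} (k=2), the factor exp(−k²) used below is an assumption that merely matches those two examples:

```python
import math

def update_weights(buffer, i, k, v_ik):
    """Increase the weights of points i and i + 2**k - 1 (1-based)
    by v_ik * exp(-k**2) -- the exponent is an assumed reconstruction."""
    delta = v_ik * math.exp(-k**2)
    for pos in (i - 1, i + 2**k - 2):   # convert to 0-based indices
        d, p = buffer[pos]
        buffer[pos] = (d, p + delta)
    return buffer

buf = [(1, 0.0), (2, 0.0), (3, 0.0), (4, 0.0)]
update_weights(buf, 1, 1, 0.25)   # k=1 touches points 1 and 2 (V_{1,1}=0.25)
update_weights(buf, 1, 2, 1.25)   # k=2 touches points 1 and 4 (V_{1,2}=1.25)
```

After both calls, point 1 carries V_{1,1}·e^{−1}+V_{1,2}·e^{−4}, point 2 carries V_{1,1}·e^{−1}, point 3 is untouched, and point 4 carries V_{1,2}·e^{−4}, matching the updated sequence in the text.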
Step S5: and integrating the updated first sub data sequence to generate an updated binary data group.
As an example, step S5 includes: S51, integrating the plurality of updated first sub-data sequences, wherein for the same data value the element with the larger weight is taken as the element of the updated binary data group.
Specifically, as shown in Table 1, the same data value may appear in different first sub-data sequences with different weight values. For example, data value 1 has weight V_{1,1}×e^{−1} in the first sub-data sequence {(1, 0), (2, 0)} and weight V_{1,1}×e^{−1}+V_{1,2}×e^{−4} in {(1, 0), (2, 0), (3, 0), (4, 0)}; the weight kept for data value 1 in the updated binary data group is therefore the larger value, V_{1,1}×e^{−1}+V_{1,2}×e^{−4}. In this way, each element of the updated binary data group carries the largest weight associated with its data value. Weighting the data weights according to the variance-fluctuation pyramid in this way effectively improves the precision of the sampled data.
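The step-S5 merge, keeping the larger weight for a data value that appears in several sub-sequences, can be sketched as follows (illustrative; the dict-keyed representation and function name are assumptions):

```python
# Each sub-sequence maps a buffer position to its (data, weight) element;
# for positions present in several sub-sequences, keep the larger weight.
def merge_subsequences(subsequences):
    merged = {}
    for seq in subsequences:
        for pos, (d, p) in seq.items():
            if pos not in merged or p > merged[pos][1]:
                merged[pos] = (d, p)
    return [merged[pos] for pos in sorted(merged)]

a = {1: (1, 0.09), 2: (2, 0.09)}                          # from (i=1, k=1)
b = {1: (1, 0.11), 2: (2, 0.09), 3: (3, 0.0), 4: (4, 0.02)}  # from (i=1, k=2)
merged = merge_subsequences([a, b])
```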
Step S6: and comparing the weight value in the first element in the updated binary data set with a preset weight threshold value.
As an example, step S6 includes: S61, when the weight value of the first element is not smaller than the preset weight threshold, outputting the data in that element as a sampling point; and S62, when the weight value of the first element is smaller than the preset weight threshold, outputting no sampling point.
Specifically, the initial value of the preset weight threshold is 0, and when the weight value of the first element in the updated binary data set is greater than or equal to the preset weight threshold, the data is sampled, otherwise, the data is not sampled.
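One possible reading of steps S6 and S7 in code (illustrative; `pop_first` is a hypothetical name): the first element is emitted as a sample only if its weight is not below the current threshold, and the threshold then becomes that element's weight before the element is deleted:

```python
def pop_first(buffer, threshold):
    """Return (sample_or_None, new_threshold, remaining_buffer)."""
    (d, p), rest = buffer[0], buffer[1:]
    sample = d if p >= threshold else None   # S6: compare with threshold
    return sample, p, rest                   # S7: threshold := popped weight

# With the initial threshold 0, the first element is always emitted.
sample, thr, rest = pop_first([(1, 0.5), (2, 0.1)], 0.0)
```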
Step S7: and updating the weight threshold value to be the weight value corresponding to the first data and deleting the first element to generate a deleted binary data group.
Step S8: and generating a second sub data sequence and a second variance based on the obtained new data signal and the pruned binary data group.
And S9, updating the weight value of the second sub-data sequence based on the second variance to generate an updated second sub-data sequence.
As an example, step S7 includes: S71, deleting the first element of the updated binary data group, shifting the remaining elements forward by one position, and freeing the last buffer position.
As an example, step S8 includes: S81, storing the newly acquired signal data in the last buffer position of the pruned binary data group; S82, taking 2^k consecutive data backwards from the last data to generate a second sub-data sequence; S83, calculating the variance of the data contained in the second sub-data sequence to generate a second variance V'.
As an example, step S9 includes: S91, increasing the weights corresponding to the data in the second sub-data sequence by increments derived from the second variance.
Specifically, the new signal is obtained and stored in the n-th buffer position with its weight initialized to 0, and 2^k consecutive data are taken backwards from position n, i.e. from n−2^k+1 to n, without exceeding the buffer, i.e. n−2^k+1 ≥ 1. For every positive integer k satisfying this limit, the variance V' of the consecutive data is calculated, and the data weights at points n−2^k+1 and n are increased accordingly. In this way, new data are acquired online and their weights updated incrementally: acquiring data online avoids excessive data storage, and re-weighting only the newly acquired data together with the previously computed data according to the variance-fluctuation pyramid avoids repeating the full computation each time new data arrive, i.e. it avoids the large computation burden of the prior art. It is easy to see that where the prior art, having sampled 10 data from 1000, must sample 20 data from the full 2000 once 1000 more data arrive, the present method, having sampled 10 data from 1000, only needs to process the equivalent of 1010 data (the 10 retained samples plus the 1000 new data). Clearly, the larger the data volume, the more pronounced the reduction in computation.
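The incremental absorption of a new datum (steps S8 and S9) can be sketched as follows; as before, the exp(−k²) increment is an assumption reconstructed from the worked examples, not the patent's stated formula:

```python
import math

def absorb_new_datum(buffer, d_new):
    """Append the new datum, then for each k with n - 2**k + 1 >= 1 compute
    the variance V' of the trailing 2**k values and increase the weights at
    the two end points by V' * exp(-k**2) (assumed increment)."""
    buffer = buffer + [(d_new, 0.0)]        # store new signal in the last slot
    n = len(buffer)
    k = 1
    while n - 2**k + 1 >= 1:                # stay inside the buffer
        tail = [d for d, _ in buffer[n - 2**k:]]
        m = sum(tail) / len(tail)
        v = sum((x - m) ** 2 for x in tail) / len(tail)   # second variance V'
        delta = v * math.exp(-k**2)
        for pos in (n - 2**k, n - 1):       # 0-based end points of the span
            d, p = buffer[pos]
            buffer[pos] = (d, p + delta)
        k += 1
    return buffer

buf = absorb_new_datum([(2, 0.1), (3, 0.0), (4, 0.2)], 5)
```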
Step S10: and integrating the deleted binary data group and the updated second subdata sequence to generate an updated binary data group.
As an example, step S10 includes: comparing the pruned binary data group with the updated second sub-data sequence, supplementing the data missing from the second sub-data sequence, and generating an updated binary data group.
Step S11: go to step S6 until no new data signal is generated.
Example 2
An embodiment of the present invention further provides a storage medium storing a program for the pyramid weighting-based time series data online sampling algorithm; when the program is executed by a processor, the steps of the pyramid weighting-based time series data online sampling algorithm described above are implemented. Since the storage medium adopts all the technical solutions of the above embodiments, it achieves at least all of their beneficial effects, which are not repeated here.
Example 3
Referring to fig. 2, an embodiment of the present invention further provides an electronic device, including: a memory and a processor; at least one program instruction is stored in the memory; the processor is used for realizing the pyramid weighting-based energy management system time series data online sampling algorithm provided by the embodiment 1 by loading and executing the at least one program instruction.
The memory 202 and the processor 201 are coupled via a bus, which may comprise any number of interconnected buses and bridges linking one or more circuits of the processor 201 and the memory 202 together. The bus may also connect various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or a plurality of elements, such as a plurality of receivers and transmitters, providing a unit for communicating with various other apparatus over a transmission medium. Data processed by the processor 201 are transmitted over a wireless medium through an antenna, which also receives data and passes them to the processor 201.
The processor 201 is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. While the memory 202 may be used to store data used by the processor 201 in performing operations.
In light of the foregoing description of preferred embodiments in accordance with the invention, it is to be understood that numerous changes and modifications may be made by those skilled in the art without departing from the scope of the invention. The technical scope of the present invention is not limited to the contents of the specification, and must be determined according to the scope of the claims.
Claims (10)
1. An energy management system time series data online sampling algorithm based on pyramid weighting is characterized by comprising the following steps:
s1, storing a data signal acquired on line based on a sliding window into an original binary data set, wherein each element comprises data and a weight corresponding to the data;
s2, traversing the original binary data group, and generating a first subdata sequence for ith data based on a pyramid model architecture;
s3, generating a first variance based on the first sub-data sequence;
S4, updating the data weights at points i and i+2^k−1 in the first sub-data sequence based on the first variance;
s5, integrating the updated first subdata sequence to generate an updated binary data group;
s6, comparing the weight value in the first element in the updated binary data set with a preset weight threshold value;
s7, updating the weight threshold value to a weight value corresponding to the first data and deleting the first element to generate a deleted binary data group;
s8, generating a second sub data sequence and a second variance based on the obtained new data signal and the pruned binary data group;
s9, updating the weight value of the second sub-data sequence based on the second variance to generate an updated second sub-data sequence;
s10, integrating the deleted binary data group and the updated second subdata sequence to generate an updated binary data group;
S11, returning to step S6 until no new data signal is generated.
2. The pyramid weighting-based online sampling algorithm for energy management system time series data as claimed in claim 1, wherein step S2 comprises:
S21, according to the pyramid model architecture, starting from the i-th data, taking 2^k consecutive data to generate a first sub-data sequence, wherein k is a positive integer.
3. The pyramid weighting-based energy management system time series data online sampling algorithm of claim 1, wherein the step S3 comprises:
S31, calculating a first variance V_{i,k} based on the data in the first sub-data sequence, where i ∈ [1, n], k ∈ N*.
5. The pyramid weighting-based energy management system time series data online sampling algorithm of claim 1, wherein the step S5 comprises:
S51, integrating the plurality of updated first sub-data sequences, wherein for the same data value the element with the larger weight is taken as the element of the updated binary data group.
6. The pyramid weighting-based online sampling algorithm for energy management system time series data according to claim 1, wherein the step S6 comprises:
s61, when the weight value of the first element is not smaller than a preset weight threshold value, outputting data in the element as a sampling point;
and S62, when the weight value of the first element is smaller than the preset weight threshold, outputting no sampling point.
7. The pyramid weighting-based energy management system time series data online sampling algorithm of claim 1, wherein the step S7 comprises:
S71, deleting the first element of the updated binary data group, shifting the remaining elements forward by one position, and freeing the last buffer position.
8. The pyramid weighting-based energy management system time series data online sampling algorithm of claim 1, wherein the step S8 comprises:
s81, storing the obtained new signal data in the last cache position in the deleted binary data group;
S82, taking 2^k consecutive data backwards from the last data to generate a second sub-data sequence;
s83, calculating the variance of the data contained in the second subdata to generate a second variance V'.
9. A computer readable storage medium having one or more instructions stored therein for causing the computer to perform the pyramid weighting based energy management system time series data online sampling algorithm of any one of claims 1-8.
10. An electronic device, comprising: a memory and a processor; at least one program instruction is stored in the memory; the processor implements the pyramid weighting-based energy management system time series data online sampling algorithm of any one of claims 1-8 by loading and executing the at least one program instruction.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211079816.9A CN115357645B (en) | 2022-09-05 | 2022-09-05 | Pyramid weighting-based energy management system time sequence data online sampling method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115357645A true CN115357645A (en) | 2022-11-18 |
CN115357645B CN115357645B (en) | 2023-09-01 |
Family
ID=84005818
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211079816.9A Active CN115357645B (en) | 2022-09-05 | 2022-09-05 | Pyramid weighting-based energy management system time sequence data online sampling method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115357645B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111798263A (en) * | 2020-05-22 | 2020-10-20 | 北京国电通网络技术有限公司 | Transaction trend prediction method and device |
CN112528966A (en) * | 2021-02-05 | 2021-03-19 | 华东交通大学 | Intelligent monitoring and identifying method, device and medium for peripheral environment of payee |
CN112614158A (en) * | 2020-12-18 | 2021-04-06 | 北京理工大学 | Sampling frame self-adaptive multi-feature fusion online target tracking method |
CN113516732A (en) * | 2021-05-25 | 2021-10-19 | 山东大学 | Pyramid-based scatter diagram sampling method and system |
CN113870422A (en) * | 2021-11-30 | 2021-12-31 | 华中科技大学 | Pyramid Transformer-based point cloud reconstruction method, device, equipment and medium |
CN114663540A (en) * | 2022-03-22 | 2022-06-24 | 聚时领臻科技(浙江)有限公司 | Golden template graph generation method based on CUDA and readable storage medium |
- 2022-09-05: application CN202211079816.9A filed; patent CN115357645B granted, status Active
Non-Patent Citations (1)
Title |
---|
齐雅静 (Qi Yajing): "Research and Application of the Pyramid Algorithm for Bivariate B-spline Basis Functions", China Master's Theses Full-text Database, Basic Sciences, pages 1-4 * |
Also Published As
Publication number | Publication date |
---|---|
CN115357645B (en) | 2023-09-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106127302A (en) | Process the circuit of data, image processing system, the method and apparatus of process data | |
CN109389212B (en) | Reconfigurable activation quantization pooling system for low-bit-width convolutional neural network | |
CN111507993A (en) | Image segmentation method and device based on generation countermeasure network and storage medium | |
CN111831355B (en) | Weight precision configuration method, device, equipment and storage medium | |
CN112153139B (en) | Control system and method based on sensor network and in-memory computing neural network | |
CN111831358A (en) | Weight precision configuration method, device, equipment and storage medium | |
US11640534B2 (en) | Threshold triggered back propagation of an artificial neural network | |
US20240095522A1 (en) | Neural network generation device, neural network computing device, edge device, neural network control method, and software generation program | |
CN115796338A (en) | Photovoltaic power generation power prediction model construction and photovoltaic power generation power prediction method | |
CN117217274A (en) | Vector processor, neural network accelerator, chip and electronic equipment | |
CN111831354A (en) | Data precision configuration method, device, chip array, equipment and medium | |
CN112446460A (en) | Method, apparatus and related product for processing data | |
CN115357645A (en) | Pyramid weighting-based online sampling algorithm for time sequence data of energy management system | |
CN111027435B (en) | Identification system, device and method based on gradient lifting decision tree | |
US20210044303A1 (en) | Neural network acceleration device and method | |
CN106875010B (en) | Neuron weight information processing method and system | |
CN108388943B (en) | Pooling device and method suitable for neural network | |
CN116032457A (en) | Chaotic stream encryption method, device, equipment and medium based on Tent mapping | |
CN116342420A (en) | Method and system for enhancing mixed degraded image | |
CN116992932A (en) | Parameterized LSTM acceleration system for data off-chip block transmission and design method thereof | |
CN112101537B (en) | CNN accelerator and electronic device | |
DE112022000723T5 (en) | BRANCHING PROCESS FOR A CIRCUIT OF A NEURONAL PROCESSOR | |
CN111783979B (en) | Image similarity detection hardware accelerator VLSI structure based on SSIM algorithm | |
US6842744B2 (en) | Coding and memorizing method for fuzzy logic membership functions and corresponding method and circuit architecture for calculating the membership degree | |
CN114548355A (en) | CNN training method, electronic device, and computer-readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||