CN113194430B - Switch cabinet sensor network data compression method based on periodic transmission model - Google Patents
Switch cabinet sensor network data compression method based on periodic transmission model
- Publication number
- CN113194430B (application CN202110469096.6A)
- Authority
- CN
- China
- Prior art keywords
- vector
- reading
- elements
- vectors
- sub
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/30—Services specially adapted for particular environments, situations or purposes
- H04W4/38—Services specially adapted for particular environments, situations or purposes for collecting sensor information
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/04—Protocols for data compression, e.g. ROHC
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W28/00—Network traffic management; Network resource management
- H04W28/02—Traffic management, e.g. flow control or congestion control
- H04W28/06—Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
The invention discloses a switch cabinet sensor network data compression method based on a periodic transmission model, which comprises the following steps: S1, a sensor node collects the readings in the current period and constructs a reading vector R; S2, a vector R_i is selected from the set of reading vectors to be processed in the order of addition and, according to the number of elements in R_i, handled in one of two ways; S3, a reading element in the two sub-vectors of the first case of step S2 is regarded as a candidate outlier, and it is judged whether it is an outlier: if so, the reading vector is recalculated and updated; otherwise, the original value is kept; S4, when the set of reading vectors to be processed is empty, the numbers of identical and distinct elements in the reading vector are counted and a dictionary is compiled; S5, the dictionary obtained in S4 and R are transmitted to the next sensor node; S6, the next period begins, and steps S1 to S5 are repeated. The method can greatly compress data and save energy consumption and storage space.
Description
Technical Field
The invention relates to the technical field of sensors, in particular to a switch cabinet sensor network data compression method based on a periodic transmission model.
Background
A switch cabinet is one of the key devices of an electric power system, and its operating state has an important influence on the reliability of the whole system. In recent years, power system accidents caused by switch cabinet faults have occurred frequently, and collecting and monitoring switch cabinet data through a wireless sensor network is an effective way to avoid such accidents. A wireless sensor network, however, has limited resources: energy, storage space, communication bandwidth, and processing speed. How to conserve these limited resources is one of the active research directions in wireless sensor networks. Because the energy cost of on-node processing is far lower than that of communication, and because sensed data contain a large amount of redundancy, compressing data before transmission is an effective way to save sensor energy. Data compression also conserves the sensor's storage space. Compared with traditional data compression methods, a wireless sensor network compression algorithm must have low complexity and a small footprint in order to conserve storage. Most existing methods continuously collect, compress, and transmit data, which causes a large amount of energy loss and waste of communication resources.
Disclosure of Invention
In order to overcome these shortcomings, the invention provides a switch cabinet sensor network data compression method based on a periodic transmission model. To handle data deviations caused by interference and similar factors, the invention substitutes replacement values for outliers, introduces the Pearson correlation coefficient so that the data keep their time order while being compressed, and finally compiles the processed data into a dictionary to reduce the number of bits and compress the data further.
The technical scheme adopted by the invention for overcoming the technical problems is as follows:
A switch cabinet sensor network data compression method based on a periodic transmission model comprises the following steps:
S1, a sensor node collects the readings in the current period, and all collected readings construct a reading vector R in time order;
S2, a vector R_i is selected from the set of reading vectors to be processed in the order of addition and, according to the number of elements in R_i, handled in one of two ways: in the first case, R_i is divided into two sub-vectors with equal numbers of elements, and the Pearson correlation coefficient of the two sub-vectors and the absolute value of the difference between their element means are calculated; in the second case, the absolute value of the difference between the two elements of the vector is calculated directly;
step S3, a reading element in the two sub-vectors of the first case of step S2 is regarded as a candidate outlier, and it is judged whether the candidate is an outlier: if so, the reading vector is recalculated and updated; otherwise, the original value is kept;
S4, when the set of reading vectors to be processed is empty, the numbers of identical and distinct elements in the reading vector are counted and a dictionary is compiled;
S5, the dictionary obtained in step S4 and the reading vector R are transmitted to the next sensor node;
and S6, the next period begins, and steps S1 to S5 are repeated.
Further, step S1 specifically comprises the following:
the sensor node collects readings in the current period; each time a reading is collected, the reading count is incremented by 1, and when τ readings have been obtained, the reading vector is constructed in time order:
R = [r_1, r_2, ..., r_τ]
where τ denotes the total number of readings in the current period, τ = 2^N, N ∈ Z, and Z is the set of integers. R is then added to the set of reading vectors to be processed.
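As an illustrative sketch (not part of the patent), the collection loop of step S1 can be written as follows; `read_sensor` and `N` are hypothetical stand-ins for the node's acquisition routine and period-length parameter, with τ = 2^N as stated above.

```python
from collections import deque

def collect_reading_vector(read_sensor, N):
    """Collect tau = 2**N readings in time order and build the reading vector R."""
    tau = 2 ** N
    R = []
    count = 0
    while count < tau:
        R.append(read_sensor())  # one reading per sampling instant
        count += 1               # the reading count is incremented by 1 each time
    return R

# The finished vector is appended to the set (here: a FIFO queue) of
# reading vectors to be processed.
pending = deque()
samples = iter([20.1, 20.2, 20.0, 19.9, 20.3, 20.1, 20.2, 20.0])
R = collect_reading_vector(lambda: next(samples), N=3)
pending.append(R)
```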
Further, step S2 specifically comprises the following:
step S2.1, a vector R_i is selected from the set of reading vectors to be processed in the order of addition, and it is judged whether the number of elements 2n in R_i equals 2: if 2n ≠ 2, step S2.2 is executed; if 2n = 2, step S2.3 is executed;
step S2.2, the vector R_i is divided into two sub-vectors v_1 and v_2 with equal numbers of elements, and the Pearson correlation coefficient ρ(v_1, v_2) of the two sub-vectors and the absolute value |μ_1 − μ_2| of the difference between their element means are calculated:
ρ(v_1, v_2) = Σ_k (v_1[k] − μ_1)(v_2[k] − μ_2) / ( √(Σ_k (v_1[k] − μ_1)²) · √(Σ_k (v_2[k] − μ_2)²) )
where ρ(v_1, v_2) represents the correlation of the data;
(1) if ρ(v_1, v_2) ≥ t_p and |μ_1 − μ_2| ≤ t_m, then v_1 and v_2 are considered highly positively correlated with similar element means; here μ_1 and μ_2 denote the means of the two sub-vector elements, t_p is the high-correlation threshold, t_p ∈ [−1, 1], and t_m is the mean-closeness threshold, t_m ≥ 0; the higher t_p, the more accurate the data and the lower the compression ratio; the lower t_m, the more accurate the data and the higher the compression ratio;
a vector v̄ is then formed whose element values are the averages of the corresponding element values of the two sub-vectors;
v̄ is computed and the values at the corresponding positions of R_i in the reading vector R are updated; the vector R_i is removed from the set of vectors to be processed and v̄ is added to the set;
(2) otherwise, when a sub-vector has n = 2 elements and the absolute value of the difference between its two elements is greater than t_m, step S3 is executed; otherwise v_1 and v_2 are added, in order, to the set of reading vectors to be processed;
step S2.3, the absolute value |r_1 − r_2| of the difference between the two elements of the vector is calculated and compared with t_m: if |r_1 − r_2| ≤ t_m, the two element values of R_i are considered close, so both are replaced by their average, and the values at the corresponding positions of R_i in the reading vector R are updated.
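The recursion of step S2 can be sketched as follows. This is a sketch under stated assumptions rather than the patent's exact implementation: the inequality directions against t_p and t_m follow the prose above, merging averages corresponding elements of the two halves, the bookkeeping that writes merged values back into the positions of R is omitted, and a two-element vector whose elements differ by more than t_m is simply passed through as a step-S3 candidate.

```python
import math
from collections import deque

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx > 0 and sy > 0 else 1.0

def process_reading_vectors(R, t_p, t_m):
    """Recursively halve R, merging highly correlated halves by averaging.

    Returns the resolved 2-element vectors; writing results back into the
    corresponding positions of R is omitted for brevity.
    """
    pending = deque([list(R)])
    resolved = []
    while pending:                            # step S4 begins once this is empty
        Ri = pending.popleft()
        if len(Ri) != 2:                      # first case (2n != 2): split
            half = len(Ri) // 2
            v1, v2 = Ri[:half], Ri[half:]
            m1, m2 = sum(v1) / half, sum(v2) / half
            if pearson(v1, v2) >= t_p and abs(m1 - m2) <= t_m:
                # highly positively correlated, similar means: merge by averaging
                pending.append([(a + b) / 2 for a, b in zip(v1, v2)])
            else:
                pending.extend([v1, v2])      # keep time order; recurse on halves
        else:                                 # second case (2n == 2)
            r1, r2 = Ri
            if abs(r1 - r2) <= t_m:
                avg = (r1 + r2) / 2           # close values: replace both by mean
                resolved.append([avg, avg])
            else:
                resolved.append(Ri)           # candidate outlier -> step S3 (not sketched)
    return resolved

out = process_reading_vectors([1, 2, 3, 4, 1, 2, 3, 4], t_p=0.9, t_m=1.5)
```

Using a FIFO queue for the "set of reading vectors to be processed" preserves the order of addition that steps S2.1 and S2.2 rely on.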
Further, step S3 specifically comprises the following:
the auxiliary vector R_i′ is obtained as follows: the position in the reading vector R of the first element of R_i is divided by 8 and the remainder is taken; if the remainder is 1, the 4 elements following the last element position are taken to form the vector R_i′; if the remainder is 5, the 4 elements preceding the first element position are taken to form the vector R_i′; j = 4, 5, r_l ∈ R_i′;
step S3.3, the Pearson correlation coefficients of the two candidate vectors with R_i′ are each compared with t_p, and the absolute values of the differences between the corresponding element means are compared with t_m:
(1) if the correlation coefficient of the first candidate vector with R_i′ is at least t_p and the corresponding mean difference is at most t_m, that candidate and R_i′ are considered highly correlated with close element means, and the reading element r_i is an outlier; likewise for the second candidate vector;
② if both candidate vectors satisfy the above conditions, the candidate that satisfies the conditions best is taken: the corresponding replacement value is calculated, the vector is updated, and the values at the corresponding positions of R_i and R_i′ in the reading vector R are updated;
(2) if each candidate vector fails at least one of the conditions, the reading element r_i is not considered an outlier, and the original value is retained.
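Because the formula images of steps S3.1 to S3.3 are not reproduced in this text, only a rough, hypothetical sketch of the outlier substitution is possible. Assume, for illustration, that each element of the candidate 2-element sub-vector is compared against the mean of the 4-element neighborhood R_i′ and that a lone deviating element is replaced by that mean; the patent's actual criterion additionally involves Pearson correlations against R_i′, as described above.

```python
def replace_outlier(pair, neighborhood, t_m):
    """Hypothetical sketch: decide which element of a 2-element sub-vector is
    the outlier by comparing each against the mean of 4 neighboring readings,
    and substitute that mean for it. The patent's exact test also uses Pearson
    correlations against the neighborhood (formulas not reproduced here)."""
    mu = sum(neighborhood) / len(neighborhood)
    r1, r2 = pair
    d1, d2 = abs(r1 - mu), abs(r2 - mu)
    if d1 > t_m and d2 <= t_m:
        return [mu, r2]        # r1 judged an outlier: substitute the mean
    if d2 > t_m and d1 <= t_m:
        return [r1, mu]        # r2 judged an outlier: substitute the mean
    return [r1, r2]            # neither (or both) flagged: keep the originals

fixed = replace_outlier([20.1, 35.0], [20.0, 20.2, 19.9, 20.1], t_m=1.0)
```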
Further, step S4 specifically comprises the following:
when the set of reading vectors to be processed is empty, all data processing is finished; the numbers of identical and distinct elements in the reading vector are counted, the elements are sorted by their occurrence counts from largest to smallest, binary indexes are assigned according to the number of distinct elements, the following dictionary is compiled, and the element readings in the reading vector R are finally replaced by their binary indexes:
Number of distinct elements n_i | Binary index s_i | Corresponding element readings r_i
---|---|---
1, 2 | 0, 1 | r_1, r_2
3, 4 | 00, 01, 10, 11 | r_1, r_2, r_3, r_4
5, 6, 7, 8 | 000, 001, ..., 111 | r_1, r_2, ..., r_8
… | … | …
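The dictionary of step S4 can be sketched as follows, assuming (consistently with the table above) fixed-width binary indexes of ⌈log₂ k⌉ bits for k distinct readings, with the most frequent readings listed first to match the "from largest to smallest" ordering described above.

```python
from collections import Counter

def compile_dictionary(readings):
    """Build the step-S4 dictionary: distinct readings sorted by descending
    occurrence count, each assigned a fixed-width binary index."""
    counts = Counter(readings)
    ordered = [r for r, _ in counts.most_common()]  # most frequent first
    k = len(ordered)
    width = max(1, (k - 1).bit_length())  # ceil(log2(k)) bits, at least 1
    return {r: format(i, "0{}b".format(width)) for i, r in enumerate(ordered)}

def encode(readings, dictionary):
    """Replace each element reading by its binary index."""
    return [dictionary[r] for r in readings]

d = compile_dictionary([20.0, 20.0, 20.5, 20.0, 21.0, 20.5])
bits = encode([20.0, 20.5, 21.0], d)  # 3 distinct readings -> 2-bit indexes
```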
The invention has the beneficial effects that:
the invention provides a novel data compression method for a switch cabinet sensor network based on a periodic transmission model, introduces the concept of Pearson correlation coefficient, replaces outliers generated by factors such as interference, and finally is compiled into a dictionary, so that data are compressed more greatly, and the energy consumption and the storage space of a wireless sensor network are saved under the condition of keeping a data time sequence.
Drawings
Fig. 1 is a flowchart of a data compression method for a switch cabinet sensor network based on a periodic transmission model according to an embodiment of the present invention.
Detailed Description
In order to facilitate a better understanding of the invention for those skilled in the art, the invention will be described in further detail with reference to the accompanying drawings and specific examples, which are given by way of illustration only and do not limit the scope of the invention.
The method for compressing data of the switch cabinet sensor network based on the periodic transmission model disclosed by this embodiment, as shown in fig. 1, includes the following steps:
S1, the sensor node collects the readings in the current period, and all collected readings construct a reading vector R in time order.
Specifically, each time a reading is collected, the reading count is incremented by 1, and when τ readings have been obtained, the reading vector is constructed in time order:
R = [r_1, r_2, ..., r_τ]
where τ denotes the total number of readings in the current period, τ = 2^N, N ∈ Z, and Z is the set of integers.
R is then added to the set of reading vectors to be processed.
S2, a vector R_i is selected from the set of reading vectors to be processed in the order of addition and, according to the number of elements in R_i, handled in one of two ways: in the first case, R_i is divided into two sub-vectors with equal numbers of elements, and the Pearson correlation coefficient of the two sub-vectors and the absolute value of the difference between their element means are calculated; in the second case, the absolute value of the difference between the two elements of the vector is calculated directly.
Specifically, step S2 comprises the following steps:
step S2.1, a vector R_i is selected from the set of reading vectors to be processed in the order of addition, and it is judged whether the number of elements 2n in R_i equals 2: if 2n ≠ 2, step S2.2 is executed; if 2n = 2, step S2.3 is executed;
step S2.2, the vector R_i is divided into two sub-vectors v_1 and v_2 with equal numbers of elements, and the Pearson correlation coefficient ρ(v_1, v_2) of the two sub-vectors and the absolute value |μ_1 − μ_2| of the difference between their element means are calculated, where ρ(v_1, v_2) represents the correlation of the data;
(1) if ρ(v_1, v_2) ≥ t_p and |μ_1 − μ_2| ≤ t_m, then v_1 and v_2 are considered highly positively correlated with similar element means; here μ_1 and μ_2 denote the means of the two sub-vector elements, t_p is the high-correlation threshold, t_p ∈ [−1, 1], and t_m is the mean-closeness threshold, t_m ≥ 0; the higher t_p, the more accurate the data and the lower the compression ratio; t_m behaves oppositely: the lower its value, the more accurate the data and the higher the compression ratio; the values of t_p and t_m are adjusted reasonably according to the specific situation;
a vector v̄ is then formed whose element values are the averages of the corresponding element values of the two sub-vectors;
v̄ is computed and the values at the corresponding positions of R_i in the reading vector R are updated; the vector R_i is removed from the set of vectors to be processed and v̄ is added to the set;
(2) if ρ(v_1, v_2) < t_p or |μ_1 − μ_2| > t_m, then v_1 and v_2 are not considered highly positively correlated with similar element means;
when a sub-vector has n = 2 elements and the absolute value of the difference between its two elements is greater than t_m, step S3 is executed; otherwise v_1 and v_2 are added, in order, to the set of reading vectors to be processed;
step S2.3, the absolute value |r_1 − r_2| of the difference between the two elements of the vector is calculated and compared with t_m:
(1) if |r_1 − r_2| ≤ t_m, the two element values of R_i are considered close, so both are replaced by their average, and the values at the corresponding positions of R_i in the reading vector R are updated.
Step S3, a reading element in the two sub-vectors of the first case of step S2 is regarded as a candidate outlier, and it is judged whether the candidate is an outlier: if so, the reading vector is recalculated and updated; otherwise, the original value is kept.
In this embodiment, step S3 specifically comprises the following:
the auxiliary vector R_i′ is obtained as follows: the position in the reading vector R of the first element of R_i is divided by 8 and the remainder is taken; if the remainder is 1, the 4 elements following the last element position are taken to form the vector R_i′; if the remainder is 5, the 4 elements preceding the first element position are taken to form the vector R_i′; j = 4, 5, r_l ∈ R_i′;
step S3.3, the Pearson correlation coefficients of the two candidate vectors with R_i′ are each compared with t_p, and the absolute values of the differences between the corresponding element means are compared with t_m:
(1) if the correlation coefficient of the first candidate vector with R_i′ is at least t_p and the corresponding mean difference is at most t_m, that candidate and R_i′ are considered highly correlated with close element means, and the reading element r_i is an outlier; likewise for the second candidate vector;
② if both candidate vectors satisfy the above conditions, that is, both correlation coefficients are at least t_p and both corresponding mean differences are at most t_m, the candidate that satisfies the conditions best is taken: the corresponding replacement value is calculated, the vector is updated, and the values at the corresponding positions of R_i and R_i′ in the reading vector R are updated;
(2) if each candidate vector fails at least one of the conditions, the reading element r_i is not considered an outlier, and the original values are kept.
S4, when the set of reading vectors to be processed is empty, the numbers of identical and distinct elements in the reading vector are counted and a dictionary is compiled to reduce the number of bits of the data and compress it further.
In this embodiment, step S4 specifically comprises the following:
when the set of reading vectors to be processed is empty, all data processing is finished; the numbers of identical and distinct elements in the reading vector are counted, the elements are sorted by their occurrence counts from largest to smallest, binary indexes are assigned according to the number of distinct elements, the following dictionary is compiled, and the element readings in the reading vector R are finally replaced by their binary indexes:
TABLE 1 Compiled dictionary
Number of distinct elements n_i | Binary index s_i | Corresponding element readings r_i
---|---|---
1, 2 | 0, 1 | r_1, r_2
3, 4 | 00, 01, 10, 11 | r_1, r_2, r_3, r_4
5, 6, 7, 8 | 000, 001, ..., 111 | r_1, r_2, ..., r_8
… | … | …
And S5, the dictionary obtained in step S4 and the reading vector R are transmitted to the next sensor node.
And S6, the next period begins, and steps S1 to S5 are repeated.
This data compression method achieves a good effect when processing periodic switch cabinet sensing data with disturbances: it can compress switch cabinet sensor network data to 10%–30% of the original size while keeping the data distortion rate within 0.5%–5%, and the larger the total number of readings in a single period, the higher the compression rate. Because the method preserves the time order, the trend of the switch cabinet sensor data over time is also preserved to a certain extent. Moreover, by reasonably adjusting the two thresholds t_p and t_m to the specific situation, the compression method can achieve different effects and therefore has a certain flexibility.
The foregoing merely illustrates the principles and preferred embodiments of the invention and many variations and modifications may be made by those skilled in the art in light of the foregoing description, which are within the scope of the invention.
Claims (4)
1. A switch cabinet sensor network data compression method based on a periodic transmission model, characterized by comprising the following steps:
S1, a sensor node collects the readings in the current period, and all collected readings construct a reading vector R in time order;
S2, a vector R_i is selected from the set of reading vectors to be processed in the order of addition and, according to the number of elements in R_i, handled in one of two ways: in the first case, R_i is divided into two sub-vectors with equal numbers of elements, and the Pearson correlation coefficient of the two sub-vectors and the absolute value of the difference between their element means are calculated; in the second case, the absolute value of the difference between the two elements of the vector is calculated directly;
the step S2 specifically comprises the following steps:
step S2.1, a vector R_i is selected from the set of reading vectors to be processed in the order of addition, and it is judged whether the number of elements 2n in R_i equals 2: if 2n ≠ 2, step S2.2 is executed; if 2n = 2, step S2.3 is executed;
step S2.2, the vector R_i is divided into two sub-vectors v_1 and v_2 with equal numbers of elements, and the Pearson correlation coefficient ρ(v_1, v_2) of the two sub-vectors and the absolute value |μ_1 − μ_2| of the difference between their element means are calculated;
(1) if ρ(v_1, v_2) ≥ t_p and |μ_1 − μ_2| ≤ t_m, v_1 and v_2 are considered highly positively correlated with similar element means; here μ_1 and μ_2 denote the means of the two sub-vector elements, t_p is the high-correlation threshold, t_p ∈ [−1, 1], and t_m is the mean-closeness threshold, t_m ≥ 0; the higher t_p, the more accurate the data and the lower the compression ratio; the lower t_m, the more accurate the data and the higher the compression ratio;
a vector v̄ is formed whose element values are the averages of the corresponding element values of the two sub-vectors;
v̄ is computed and the values at the corresponding positions of R_i in the reading vector R are updated; the vector R_i is removed from the set of reading vectors to be processed and v̄ is added to the set;
(2) if ρ(v_1, v_2) < t_p or |μ_1 − μ_2| > t_m, v_1 and v_2 are not considered highly positively correlated with similar element means;
when a sub-vector has n = 2 elements and the absolute value of the difference between its two elements is greater than t_m, step S3 is executed; otherwise v_1 and v_2 are added, in order, to the set of reading vectors to be processed;
step S2.3, the absolute value |r_1 − r_2| of the difference between the two elements of the vector is calculated and compared with t_m: if |r_1 − r_2| ≤ t_m, the two element values of R_i are considered close, so both are replaced by their average, and the values at the corresponding positions of R_i in the reading vector R are updated;
step S3, a reading element in the two sub-vectors of the first case of step S2 is regarded as a candidate outlier, and it is judged whether the candidate is an outlier: if so, the reading vector is recalculated and updated; otherwise, the original value is kept;
S4, when the set of reading vectors to be processed is empty, the numbers of identical and distinct elements in the reading vector are counted and a dictionary is compiled;
S5, the dictionary obtained in step S4 and the reading vector R are transmitted to the next sensor node;
and step S6, the next period begins, and steps S1 to S5 are repeated.
2. The method according to claim 1, wherein step S1 specifically comprises the following:
the sensor node collects readings in the current period; each time a reading is collected, the reading count is incremented by 1, and when τ readings have been obtained, the reading vector is constructed in time order:
R = [r_1, r_2, ..., r_τ]
where τ denotes the total number of readings in the current period, τ = 2^N, N ∈ Z, and Z is the set of integers;
and R is added to the set of reading vectors to be processed.
3. The method according to claim 2, wherein step S3 specifically comprises the following:
the auxiliary vector R_i′ is obtained as follows: the position in the reading vector R of the first element of R_i is divided by 8 and the remainder is taken; if the remainder is 1, the 4 elements following the last element position are taken to form the vector R_i′; if the remainder is 5, the 4 elements preceding the first element position are taken to form the vector R_i′; j = 4, 5, r_l ∈ R_i′;
step S3.3, the Pearson correlation coefficients of the two candidate vectors with R_i′ are each compared with t_p, and the absolute values of the differences between the corresponding element means are compared with t_m:
(1) if the correlation coefficient of the first candidate vector with R_i′ is at least t_p and the corresponding mean difference is at most t_m, that candidate and R_i′ are considered highly correlated with close element means, and the reading element r_i is an outlier; likewise for the second candidate vector;
② if both candidate vectors satisfy the above conditions, the candidate that satisfies the conditions best is taken: the corresponding replacement value is calculated, the vector is updated, and the values at the corresponding positions of R_i and R_i′ in the reading vector R are updated.
4. The method according to claim 3, wherein step S4 specifically comprises the following:
when the set of reading vectors to be processed is empty, all data processing is finished; the numbers of identical and distinct elements in the reading vector are counted, the elements are sorted by their occurrence counts from largest to smallest, binary indexes are assigned according to the number of distinct elements and compiled into a dictionary, and the element readings in the reading vector R are finally replaced by their binary indexes.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110469096.6A CN113194430B (en) | 2021-04-28 | 2021-04-28 | Switch cabinet sensor network data compression method based on periodic transmission model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110469096.6A CN113194430B (en) | 2021-04-28 | 2021-04-28 | Switch cabinet sensor network data compression method based on periodic transmission model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113194430A CN113194430A (en) | 2021-07-30 |
CN113194430B true CN113194430B (en) | 2022-11-01 |
Family
ID=76980099
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110469096.6A Active CN113194430B (en) | 2021-04-28 | 2021-04-28 | Switch cabinet sensor network data compression method based on periodic transmission model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113194430B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108494408A (en) * | 2018-03-14 | 2018-09-04 | 电子科技大学 | While-drilling density logger underground high speed real-time compression method based on Hash dictionary |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011067769A1 (en) * | 2009-12-03 | 2011-06-09 | Infogin Ltd. | Shared dictionary compression over http proxy |
US9864846B2 (en) * | 2012-01-31 | 2018-01-09 | Life Technologies Corporation | Methods and computer program products for compression of sequencing data |
EP3622520A1 (en) * | 2017-10-16 | 2020-03-18 | Illumina, Inc. | Deep learning-based techniques for training deep convolutional neural networks |
CN108990108B (en) * | 2018-07-10 | 2021-07-02 | 西华大学 | Self-adaptive real-time spectrum data compression method and system |
- 2021-04-28: application CN202110469096.6A filed; granted as patent CN113194430B (status: active)
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108494408A (en) * | 2018-03-14 | 2018-09-04 | 电子科技大学 | While-drilling density logger underground high speed real-time compression method based on Hash dictionary |
Also Published As
Publication number | Publication date |
---|---|
CN113194430A (en) | 2021-07-30 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||