CN113194430A - Switch cabinet sensor network data compression method based on periodic transmission model - Google Patents

Switch cabinet sensor network data compression method based on periodic transmission model

Info

Publication number
CN113194430A
Authority
CN
China
Prior art keywords
vector
reading
elements
sub-vectors
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110469096.6A
Other languages
Chinese (zh)
Other versions
CN113194430B (en)
Inventor
任新卓
王丽群
潘黄萍
钟恒强
许琴
诸葛嘉锵
黄娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Hangzhou Power Equipment Manufacturing Co Ltd
Original Assignee
Hangzhou Dianzi University
Hangzhou Power Equipment Manufacturing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University, Hangzhou Power Equipment Manufacturing Co Ltd
Priority to CN202110469096.6A
Publication of CN113194430A
Application granted
Publication of CN113194430B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30 Services specially adapted for particular environments, situations or purposes
    • H04W 4/38 Services specially adapted for particular environments, situations or purposes for collecting sensor information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/04 Protocols for data compression, e.g. ROHC
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 Network traffic management; Network resource management
    • H04W 28/02 Traffic management, e.g. flow control or congestion control
    • H04W 28/06 Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Abstract

The invention discloses a switch cabinet sensor network data compression method based on a periodic transmission model, which comprises the following steps: S1, the sensor node collects the readings of the current period and constructs a reading vector R; S2, a vector R_i is selected from the set of reading vectors to be executed in the order of addition and, according to the number of its elements, treated as one of two types; S3, a reading element in the two first-type sub-vectors of step S2 is regarded as a candidate outlier and it is judged whether it is an outlier: if yes, the reading vector is recalculated and updated, otherwise the original value is kept; S4, when the set of reading vectors to be executed is empty, the numbers of identical and distinct elements in the reading vector are counted and a dictionary is compiled; S5, the dictionary obtained in S4 and R are transmitted to the next sensor node; S6, the next period is entered and the cycle continues with steps S1-S5. The method can greatly compress the data and save energy consumption and storage space.

Description

Switch cabinet sensor network data compression method based on periodic transmission model
Technical Field
The invention relates to the technical field of sensors, in particular to a switch cabinet sensor network data compression method based on a periodic transmission model.
Background
The switch cabinet is one of the key primary devices of a power system, and its operation state has an important influence on the reliability of the whole power system. In recent years, power system accidents caused by switch cabinet faults have occurred frequently, and collecting and monitoring switch cabinet data through a wireless sensor network is an effective way to avoid such accidents. A wireless sensor network, however, has limited resources in terms of energy, storage space, communication bandwidth and processing speed, and how to save these limited resources is one of the active research directions for wireless sensor networks. Because the energy consumed by sensor processing is far lower than that consumed by sensor communication, and because the sensed information contains a large amount of redundancy, compressing the data before transmitting it is an effective way to save sensor energy; data compression also saves the sensor's storage space. Compared with traditional data compression methods, a data compression algorithm for a wireless sensor network must have low complexity and a small footprint in order to save storage space. In wireless sensor networks, most existing methods continuously collect, compress and transmit data, which causes a large amount of wasted energy and communication resources.
Disclosure of Invention
In order to overcome the above shortcomings, the invention provides a switch cabinet sensor network data compression method based on a periodic transmission model. To handle data deviations caused by interference and similar factors, the invention substitutes outliers when processing the data, introduces the Pearson correlation coefficient so that the data retain their time order while being compressed, and finally compiles the processed data into a dictionary to reduce the number of bits and further compress the data.
The technical scheme adopted by the invention for overcoming the technical problems is as follows:
a switch cabinet sensor network data compression method based on a periodic transmission model comprises the following steps:
step S1, the sensor node collects the readings of the current period and constructs a reading vector R according to the time sequence of all the collected readings;
step S2, selecting a vector R_i from the set of reading vectors to be executed in the order of addition and, according to the number of elements in R_i, treating it as one of two types: for the first type, the vector R_i is divided into two sub-vectors with equal numbers of elements, and the Pearson correlation coefficient of the two sub-vectors and the absolute value of the difference between their element means are calculated; for the second type, the absolute value of the difference between the two elements of the vector is calculated directly;
step S3, regarding one of the reading elements in the two first-type subvectors in step S2 as a candidate outlier, and determining whether the candidate outlier is an outlier: if yes, calculating and updating a reading vector; otherwise, keeping the original numerical value;
step S4, when the reading vector set to be executed is empty, counting the number of the same elements and the number of different elements in the reading vector set, and compiling a dictionary;
step S5, transmitting the dictionary and reading vector set R obtained in the step S4 to the next sensor node;
step S6, entering the next period, and continuously circulating according to the steps S1-S5.
Further, step S1 specifically includes the following steps:
the sensor node collects readings in the current period, and when tau readings are obtained, a reading vector is constructed according to the time sequence by adding 1 to the number of the readings each time one reading is collected:
R=[r1,r2,...,rτ]
in the above formula, τ represents the total number of readings in the current cycle, and τ is 2NN belongs to Z, and Z is an integer set;
and adds R to the set of read vectors to be executed.
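By way of illustration only, the construction in step S1 can be sketched in Python as follows; the function name build_reading_vector, the to_be_executed list and the sample readings are hypothetical and not part of the claimed method:

    def build_reading_vector(readings):
        """Step S1 sketch: once tau = 2**N readings have been collected for the
        current period, arrange them in time order as the reading vector R."""
        tau = len(readings)
        if tau == 0 or tau & (tau - 1) != 0:      # tau must be a power of two (tau = 2**N)
            raise ValueError("expected tau = 2**N readings in one period")
        return list(readings)                     # R = [r_1, r_2, ..., r_tau]

    # usage for one period with tau = 8 (example values are made up)
    to_be_executed = []                           # reading vectors still to be processed
    R = build_reading_vector([20.1, 20.2, 20.1, 25.7, 20.3, 20.2, 20.4, 20.3])
    to_be_executed.append(R)                      # S1 ends by queueing R for step S2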
Further, step S2 specifically includes the following steps:
Step S2.1, a vector R_i is selected from the set of reading vectors to be executed in the order of addition, and the number of elements 2n of the vector R_i is examined: if 2n ≠ 2, step S2.2 is executed; if 2n = 2, step S2.3 is executed;
Step S2.2, the vector R_i is divided into two sub-vectors R_i^1 and R_i^2 with equal numbers of elements, and the Pearson correlation coefficient ρ(R_i^1, R_i^2) of the two sub-vectors and the absolute value |μ(R_i^1) − μ(R_i^2)| of the difference between their element means are calculated, where μ(R_i^1) and μ(R_i^2) denote the element means of the two sub-vectors and ρ(R_i^1, R_i^2) represents the correlation of the data;
then ρ(R_i^1, R_i^2) is compared with t_p, and |μ(R_i^1) − μ(R_i^2)| is compared with t_m:
(1) If ρ(R_i^1, R_i^2) ≥ t_p and |μ(R_i^1) − μ(R_i^2)| ≤ t_m, then R_i^1 and R_i^2 are considered highly positively correlated with similar element means; here t_p denotes the high-correlation threshold, t_p ∈ [−1, 1], and t_m denotes the mean-closeness threshold, t_m ≥ 0; the higher the value of t_p, the more accurate the data and the lower the compression ratio, and the lower the value of t_m, the more accurate the data and the higher the compression ratio.
A new vector is then formed whose k-th element is the average of the k-th elements of the two sub-vectors; R_i^1 and R_i^2 are both updated to this averaged vector, and the values at the corresponding positions of the reading vector R are updated; the vector R_i is removed from the set of vectors to be executed and the averaged vector is added to the set to be executed;
(2) If ρ(R_i^1, R_i^2) < t_p or |μ(R_i^1) − μ(R_i^2)| > t_m, then R_i^1 and R_i^2 are considered not highly positively correlated, or their element means are considered dissimilar.
When a sub-vector has n = 2 elements and the absolute value of the difference between its two elements is greater than t_m, step S3 is executed; otherwise R_i^1 and R_i^2 are added, in order, to the set of reading vectors to be executed;
Step S2.3, the absolute value of the difference between the two elements of the vector is calculated and compared with t_m:
(1) If the absolute value of the difference is not greater than t_m, the two elements of R_i are considered close in value, so both elements are set to their average, and the values at the corresponding positions of the reading vector R are then updated;
(2) If the absolute value of the difference is greater than t_m, the original values are maintained.
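By way of illustration only, the splitting and merging logic of step S2 can be sketched in Python as follows; the segment-based bookkeeping, the helper names pearson and step_s2, and the threshold values used at the end are simplifying assumptions rather than the literal procedure of the invention:

    from statistics import mean

    def pearson(x, y):
        # Standard Pearson correlation coefficient of two equal-length sequences.
        mx, my = mean(x), mean(y)
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
        return 1.0 if den == 0 else num / den     # constant halves are treated as fully correlated

    def step_s2(R, seg, queue, t_p, t_m):
        # Step S2 sketch: `seg` is a (start, stop) slice of the reading vector R,
        # `queue` holds the segments still to be executed.
        start, stop = seg
        if stop - start == 2:                                  # step S2.3
            a, b = R[start], R[start + 1]
            if abs(a - b) <= t_m:
                R[start] = R[start + 1] = (a + b) / 2.0        # both elements set to their average
            # (in the full method a large difference here triggers the step S3 outlier check)
            return
        mid = start + (stop - start) // 2                      # step S2.2: split into two halves
        R1, R2 = R[start:mid], R[mid:stop]
        if pearson(R1, R2) >= t_p and abs(mean(R1) - mean(R2)) <= t_m:
            avg = [(x + y) / 2.0 for x, y in zip(R1, R2)]      # element-wise average of the halves
            R[start:mid] = avg
            R[mid:stop] = avg                                  # both halves updated in R
            queue.append((start, mid))                         # the averaged vector is processed again
        else:
            queue.append((start, mid))                         # not merged: keep splitting the halves
            queue.append((mid, stop))

    # usage: process one period's reading vector until the queue is empty (the
    # condition that triggers step S4); t_p and t_m are illustrative values only
    t_p, t_m = 0.8, 0.5
    R = [20.1, 20.2, 20.1, 20.3, 20.2, 20.3, 20.2, 20.4]
    queue = [(0, len(R))]
    while queue:
        step_s2(R, queue.pop(0), queue, t_p, t_m)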
Further, step S3 specifically includes the following steps:
Step S3.1, a reading element r_i in the sub-vectors R_i^1 and R_i^2 is regarded as a candidate outlier, and a substitute value is provisionally assigned to it;
Step S3.2, the Pearson correlation coefficients ρ(R_i^1, R_i′) and ρ(R_i^2, R_i′) between R_i^1, R_i^2 and the vector R_i′ are calculated separately;
wherein R_i′ is obtained as follows: according to the remainder of the position of R_i in the reading vector R, if the remainder is 1, the 4 elements following the last element of R_i are taken to form the vector R_i′, and if the remainder is 5, the 4 elements immediately preceding the first element of R_i are taken to form the vector R_i′; j = 4, 5, and μ(R_i′) = (1/4) Σ r_l, r_l ∈ R_i′;
Step S3.3, ρ(R_i^1, R_i′) and ρ(R_i^2, R_i′) are compared with t_p, and the absolute values |μ(R_i^1) − μ(R_i′)| and |μ(R_i^2) − μ(R_i′)| of the differences between the element means of the corresponding sub-vectors and R_i′ are compared with t_m:
(1) If ρ(R_i^1, R_i′) ≥ t_p and, correspondingly, |μ(R_i^1) − μ(R_i′)| ≤ t_m, then R_i^1 and R_i′ are considered highly correlated with close element means, and the reading element r_i is an outlier; if ρ(R_i^2, R_i′) ≥ t_p and, correspondingly, |μ(R_i^2) − μ(R_i′)| ≤ t_m, then R_i^2 and R_i′ are considered highly correlated with close element means, and the reading element r_i is an outlier;
secondly, if ρ(R_i^1, R_i′) and ρ(R_i^2, R_i′) both satisfy the above conditions, the sub-vector with the larger correlation coefficient is taken as the one satisfying the condition; the corresponding substitute value is then calculated, r_i is updated with it, and the values at the corresponding positions of R_i and R_i′ in the reading vector R are updated;
(2) If ρ(R_i^1, R_i′) < t_p or, correspondingly, |μ(R_i^1) − μ(R_i′)| > t_m, and at the same time ρ(R_i^2, R_i′) < t_p or, correspondingly, |μ(R_i^2) − μ(R_i′)| > t_m, then the reading element r_i is considered not to be an outlier and the original values are maintained.
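By way of illustration only, the outlier test of step S3 can be sketched in Python as follows; choosing the mean of the neighbouring window R_i′ as the substitute value and testing the whole 4-element vector (rather than its two sub-vectors separately) are simplifications of this sketch, not the literal rule of the invention:

    from statistics import mean, correlation      # statistics.correlation requires Python 3.10+

    def step_s3(Ri, neighbor, candidate_idx, t_p, t_m):
        # Step S3 sketch: provisionally substitute the suspect reading, then test the
        # result against the 4-element neighbour window R_i' with the same Pearson /
        # mean-difference thresholds used in step S2.
        trial = list(Ri)
        trial[candidate_idx] = mean(neighbor)              # assumed substitute value
        rho = correlation(trial, neighbor)                 # Pearson correlation with R_i'
        if rho >= t_p and abs(mean(trial) - mean(neighbor)) <= t_m:
            return trial                                   # candidate confirmed as outlier: keep substitute
        return list(Ri)                                    # not an outlier: original values maintained

    # usage: the spike 99.9 is judged an outlier and replaced by the window mean 20.25
    print(step_s3([20.2, 20.1, 99.9, 20.4], [20.2, 20.1, 20.3, 20.4], 2, t_p=0.8, t_m=0.5))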
Further, step S4 specifically includes the following steps:
When the set of reading vectors to be executed is empty, all data processing is finished; the numbers of identical and distinct elements in the reading vector are counted, the distinct element values are arranged from the largest to the smallest number of identical occurrences, binary indexes are allocated according to the number of distinct values and compiled into the following dictionary, and finally each element reading in the reading vector R is replaced by its binary index:
Number of distinct elements n_i | Binary index representation s_i | Corresponding element readings r_i
1, 2 | 0, 1 | r_1, r_2
3, 4 | 00, 01, 10, 11 | r_1, r_2, r_3, r_4
5, 6, 7, 8 | 000, 001, ..., 111 | r_1, r_2, ..., r_8
... | ... | ...
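By way of illustration only, the dictionary compilation of step S4 can be sketched in Python as follows; the helper name compile_dictionary, the fixed index widths matching the table above and the toy reading vector are assumptions of the sketch:

    from collections import Counter

    def compile_dictionary(R):
        # Step S4 sketch: count how often each distinct reading occurs, sort the distinct
        # values by frequency (most frequent first) and give each a fixed-width binary index.
        counts = Counter(R)
        values = [v for v, _ in counts.most_common()]      # descending frequency
        width = max(1, (len(values) - 1).bit_length())     # 1-2 values -> 1 bit, 3-4 -> 2 bits, 5-8 -> 3 bits
        dictionary = {v: format(idx, "0{}b".format(width)) for idx, v in enumerate(values)}
        encoded = [dictionary[v] for v in R]               # reading vector re-expressed as binary indexes
        return dictionary, encoded

    # usage on a toy, already-smoothed reading vector
    dictionary, encoded = compile_dictionary([20.2, 20.2, 20.2, 20.2, 20.3, 20.3, 20.2, 20.2])
    # dictionary -> {20.2: '0', 20.3: '1'}; encoded -> ['0', '0', '0', '0', '1', '1', '0', '0']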
The invention has the beneficial effects that:
the invention provides a novel data compression method for a switch cabinet sensor network based on a periodic transmission model, introduces the concept of Pearson correlation coefficient, replaces outliers generated by factors such as interference, and finally is compiled into a dictionary, so that data are compressed more greatly, and the energy consumption and the storage space of a wireless sensor network are saved under the condition of keeping a data time sequence.
Drawings
Fig. 1 is a flowchart of a data compression method for a switch cabinet sensor network based on a periodic transmission model according to an embodiment of the present invention.
Detailed Description
In order to facilitate a better understanding of the invention for those skilled in the art, the invention will be described in further detail with reference to the accompanying drawings and specific examples, which are given by way of illustration only and do not limit the scope of the invention.
The method for compressing data of the switch cabinet sensor network based on the periodic transmission model disclosed by the embodiment, as shown in fig. 1, includes the following steps:
step S1, the sensor node collects the readings of the current period and constructs a reading vector R by all the collected readings in time sequence.
Specifically, the sensor node collects readings in the current period, incrementing the count of readings by 1 each time a reading is collected; when τ readings have been obtained, the reading vector is constructed in time order:
R = [r_1, r_2, ..., r_τ]
In the above formula, τ represents the total number of readings in the current period, τ = 2^N, N ∈ Z, and Z is the set of integers;
and R is added to the set of reading vectors to be executed.
Step S2, selecting a vector R_i from the set of reading vectors to be executed in the order of addition and, according to the number of elements in R_i, treating it as one of two types: for the first type, the vector R_i is divided into two sub-vectors with equal numbers of elements, and the Pearson correlation coefficient of the two sub-vectors and the absolute value of the difference between their element means are calculated; for the second type, the absolute value of the difference between the two elements of the vector is calculated directly.
Specifically, step S2 includes the following steps:
Step S2.1, a vector R_i is selected from the set of reading vectors to be executed in the order of addition, and the number of elements 2n of the vector R_i is examined: if 2n ≠ 2, step S2.2 is executed; if 2n = 2, step S2.3 is executed;
Step S2.2, the vector R_i is divided into two sub-vectors R_i^1 and R_i^2 with equal numbers of elements, and the Pearson correlation coefficient ρ(R_i^1, R_i^2) of the two sub-vectors and the absolute value |μ(R_i^1) − μ(R_i^2)| of the difference between their element means are calculated; the Pearson correlation coefficient ρ(R_i^1, R_i^2) is computed in the standard way:

    ρ(R_i^1, R_i^2) = Σ_{k=1}^{n} (r_k^1 − μ(R_i^1)) (r_k^2 − μ(R_i^2)) / √( Σ_{k=1}^{n} (r_k^1 − μ(R_i^1))² · Σ_{k=1}^{n} (r_k^2 − μ(R_i^2))² )

In the above formula, r_k^1 and r_k^2 are the k-th elements of R_i^1 and R_i^2, μ(R_i^1) and μ(R_i^2) denote the element means of the two sub-vectors, and ρ(R_i^1, R_i^2) represents the correlation of the data.
Then ρ(R_i^1, R_i^2) is compared with t_p, and |μ(R_i^1) − μ(R_i^2)| is compared with t_m:
(1) If ρ(R_i^1, R_i^2) ≥ t_p and |μ(R_i^1) − μ(R_i^2)| ≤ t_m, then R_i^1 and R_i^2 are considered highly positively correlated with similar element means; here t_p denotes the high-correlation threshold, t_p ∈ [−1, 1], and t_m denotes the mean-closeness threshold, t_m ≥ 0; the higher the value of t_p, the more accurate the data and the lower the compression ratio; conversely for t_m, the lower its value, the more accurate the data and the higher the compression ratio; the values of t_p and t_m are adjusted reasonably according to the specific situation.
A new vector is then formed whose k-th element is the average of the k-th elements of the two sub-vectors; R_i^1 and R_i^2 are both updated to this averaged vector, and the values at the corresponding positions of the reading vector R are updated; the vector R_i is removed from the set of vectors to be executed and the averaged vector is added to the set to be executed;
(2) If ρ(R_i^1, R_i^2) < t_p or |μ(R_i^1) − μ(R_i^2)| > t_m, then R_i^1 and R_i^2 are considered not highly positively correlated, or their element means are considered dissimilar.
When a sub-vector has n = 2 elements and the absolute value of the difference between its two elements is greater than t_m, step S3 is executed; otherwise R_i^1 and R_i^2 are added, in order, to the set of reading vectors to be executed;
Step S2.3, the absolute value of the difference between the two elements of the vector is calculated and compared with t_m:
(1) If the absolute value of the difference is not greater than t_m, the two elements of R_i are considered close in value, so both elements are set to their average, and the values at the corresponding positions of the reading vector R are then updated;
(2) If the absolute value of the difference is greater than t_m, the original values are maintained.
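By way of illustration only, the step S2.2 test can be worked through once by hand in Python; the reading values and the thresholds t_p = 0.8 and t_m = 0.5 are arbitrary examples, not values prescribed by the invention:

    # Worked example of the step S2.2 decision for one split vector R_i.
    R1 = [20.1, 20.2, 20.1, 20.3]                      # first sub-vector
    R2 = [20.2, 20.3, 20.2, 20.4]                      # second sub-vector
    m1, m2 = sum(R1) / 4, sum(R2) / 4                  # element means: 20.175 and 20.275
    num = sum((a - m1) * (b - m2) for a, b in zip(R1, R2))
    den = (sum((a - m1) ** 2 for a in R1) * sum((b - m2) ** 2 for b in R2)) ** 0.5
    rho = num / den                                    # 1.0 (up to rounding): the halves move together
    mean_gap = abs(m1 - m2)                            # about 0.1
    # with t_p = 0.8 and t_m = 0.5 the test passes, so the halves are merged into their
    # element-wise average, which replaces both halves in the reading vector R
    merged = [(a + b) / 2 for a, b in zip(R1, R2)]     # approximately [20.15, 20.25, 20.15, 20.35]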
Step S3, regarding one of the reading elements in the two first-type subvectors in step S2 as a candidate outlier, and determining whether the candidate outlier is an outlier: if yes, calculating and updating a reading vector; otherwise, the original value is kept.
In this embodiment, step S3 specifically includes the following steps:
Step S3.1, a reading element r_i in the sub-vectors R_i^1 and R_i^2 is regarded as a candidate outlier, and a substitute value is provisionally assigned to it;
Step S3.2, the Pearson correlation coefficients ρ(R_i^1, R_i′) and ρ(R_i^2, R_i′) between R_i^1, R_i^2 and the vector R_i′ are calculated separately;
wherein R_i′ is obtained as follows: according to the remainder of the position of R_i in the reading vector R, if the remainder is 1, the 4 elements following the last element of R_i are taken to form the vector R_i′, and if the remainder is 5, the 4 elements immediately preceding the first element of R_i are taken to form the vector R_i′; j = 4, 5, and μ(R_i′) = (1/4) Σ r_l, r_l ∈ R_i′;
Step S3.3, ρ(R_i^1, R_i′) and ρ(R_i^2, R_i′) are compared with t_p, and the absolute values |μ(R_i^1) − μ(R_i′)| and |μ(R_i^2) − μ(R_i′)| of the differences between the element means of the corresponding sub-vectors and R_i′ are compared with t_m:
(1) If ρ(R_i^1, R_i′) ≥ t_p and, correspondingly, |μ(R_i^1) − μ(R_i′)| ≤ t_m, then R_i^1 and R_i′ are considered highly correlated with close element means, and the reading element r_i is an outlier; if ρ(R_i^2, R_i′) ≥ t_p and, correspondingly, |μ(R_i^2) − μ(R_i′)| ≤ t_m, then R_i^2 and R_i′ are considered highly correlated with close element means, and the reading element r_i is an outlier;
secondly, if ρ(R_i^1, R_i′) and ρ(R_i^2, R_i′) both satisfy the above conditions, the sub-vector with the larger correlation coefficient is taken as the one satisfying the condition; the corresponding substitute value is then calculated, r_i is updated with it, and the values at the corresponding positions of R_i and R_i′ in the reading vector R are updated;
(2) If ρ(R_i^1, R_i′) < t_p or, correspondingly, |μ(R_i^1) − μ(R_i′)| > t_m, and at the same time ρ(R_i^2, R_i′) < t_p or, correspondingly, |μ(R_i^2) − μ(R_i′)| > t_m, then the reading element r_i is considered not to be an outlier and the original values are maintained.
Step S4, when the set of reading vectors to be executed is empty, counting the numbers of identical and distinct elements in the reading vector and compiling a dictionary to reduce the number of bits of the data and compress the data further.
In this embodiment, step S4 specifically includes the following steps:
When the set of reading vectors to be executed is empty, all data processing is finished; the numbers of identical and distinct elements in the reading vector are counted, the distinct element values are arranged from the largest to the smallest number of identical occurrences, binary indexes are allocated according to the number of distinct values and compiled into the dictionary shown in Table 1, and finally each element reading in the reading vector R is replaced by its binary index:
Table 1. Compiled dictionary
Number of distinct elements n_i | Binary index representation s_i | Corresponding element readings r_i
1, 2 | 0, 1 | r_1, r_2
3, 4 | 00, 01, 10, 11 | r_1, r_2, r_3, r_4
5, 6, 7, 8 | 000, 001, ..., 111 | r_1, r_2, ..., r_8
... | ... | ...
Step S5, transmitting the dictionary obtained in step S4 and the reading vector R to the next sensor node.
Step S6, entering the next period, and continuously circulating according to the steps S1-S5.
The data compression method achieves good results when processing periodic switch cabinet sensing data containing disturbances: it can compress the switch cabinet sensor network data to 10%-30% of the original size while keeping the data distortion rate within 0.5%-5%, and the larger the total number of readings in a single reading period, the higher the compression rate. Because the method preserves the time order, the trend of the switch cabinet sensor data over time is also preserved to a certain extent. Moreover, by reasonably adjusting the two thresholds t_p and t_m for the specific situation, the compression method can achieve different effects, which gives it a certain flexibility.
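By way of illustration only, the 10%-30% figure can be made concrete with a rough size accounting for a small period of τ = 8 readings that reduces to 2 distinct values; the 32-bit raw reading width and the dictionary layout assumed here are not specified by the invention:

    # Rough size accounting under the stated assumptions.
    tau = 8
    raw_bits = tau * 32                            # 256 bits before compression
    dict_bits = 2 * 32 + 2 * 1                     # 2 stored reading values plus their 1-bit indexes
    payload_bits = tau * 1                         # 8 one-bit indexes for the reading vector
    compressed_bits = dict_bits + payload_bits     # 74 bits in total
    print(compressed_bits / raw_bits)              # about 0.29, i.e. within the 10%-30% range above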
The foregoing merely illustrates the principles and preferred embodiments of the invention and many variations and modifications may be made by those skilled in the art in light of the foregoing description, which are within the scope of the invention.

Claims (5)

1. A switch cabinet sensor network data compression method based on a periodic transmission model is characterized by comprising the following steps:
step S1, the sensor node collects the readings of the current period and constructs a reading vector R according to the time sequence of all the collected readings;
step S2, selecting a vector R_i from the set of reading vectors to be executed in the order of addition and, according to the number of elements in R_i, treating it as one of two types: for the first type, the vector R_i is divided into two sub-vectors with equal numbers of elements, and the Pearson correlation coefficient of the two sub-vectors and the absolute value of the difference between their element means are calculated; for the second type, the absolute value of the difference between the two elements of the vector is calculated directly;
step S3, regarding one of the reading elements in the two first-type subvectors in step S2 as a candidate outlier, and determining whether the candidate outlier is an outlier: if yes, calculating and updating a reading vector; otherwise, keeping the original numerical value;
step S4, when the reading vector set to be executed is empty, counting the number of the same elements and the number of different elements in the reading vector set, and compiling a dictionary;
step S5, transmitting the dictionary and reading vector set R obtained in the step S4 to the next sensor node;
step S6, entering the next period, and continuously circulating according to the steps S1-S5.
2. The method according to claim 1, wherein step S1 specifically comprises the following:
the sensor node collects readings in the current period, incrementing the count of readings by 1 each time a reading is collected; when τ readings have been obtained, a reading vector is constructed in time order:
R = [r_1, r_2, ..., r_τ]
In the above formula, τ represents the total number of readings in the current period, τ = 2^N, N ∈ Z, and Z is the set of integers;
and R is added to the set of reading vectors to be executed.
3. The method according to claim 2, wherein step S2 specifically includes the following:
Step S2.1, a vector R_i is selected from the set of reading vectors to be executed in the order of addition, and the number of elements 2n of the vector R_i is examined: if 2n ≠ 2, step S2.2 is executed; if 2n = 2, step S2.3 is executed;
Step S2.2, the vector R_i is divided into two sub-vectors R_i^1 and R_i^2 with equal numbers of elements, and the Pearson correlation coefficient ρ(R_i^1, R_i^2) of the two sub-vectors and the absolute value |μ(R_i^1) − μ(R_i^2)| of the difference between their element means are calculated, where μ(R_i^1) and μ(R_i^2) denote the element means of the two sub-vectors and ρ(R_i^1, R_i^2) represents the correlation of the data;
then ρ(R_i^1, R_i^2) is compared with t_p, and |μ(R_i^1) − μ(R_i^2)| is compared with t_m:
(1) If ρ(R_i^1, R_i^2) ≥ t_p and |μ(R_i^1) − μ(R_i^2)| ≤ t_m, then R_i^1 and R_i^2 are considered highly positively correlated with similar element means; here t_p denotes the high-correlation threshold, t_p ∈ [−1, 1], and t_m denotes the mean-closeness threshold, t_m ≥ 0; the higher the value of t_p, the more accurate the data and the lower the compression ratio, and the lower the value of t_m, the more accurate the data and the higher the compression ratio.
A new vector is then formed whose k-th element is the average of the k-th elements of the two sub-vectors; R_i^1 and R_i^2 are both updated to this averaged vector, and the values at the corresponding positions of the reading vector R are updated; the vector R_i is removed from the set of vectors to be executed and the averaged vector is added to the set to be executed;
(2) If ρ(R_i^1, R_i^2) < t_p or |μ(R_i^1) − μ(R_i^2)| > t_m, then R_i^1 and R_i^2 are considered not highly positively correlated, or their element means are considered dissimilar.
When a sub-vector has n = 2 elements and the absolute value of the difference between its two elements is greater than t_m, step S3 is executed; otherwise R_i^1 and R_i^2 are added, in order, to the set of reading vectors to be executed;
Step S2.3, the absolute value of the difference between the two elements of the vector is calculated and compared with t_m:
(1) If the absolute value of the difference is not greater than t_m, the two elements of R_i are considered close in value, so both elements are set to their average, and the values at the corresponding positions of the reading vector R are then updated;
(2) If the absolute value of the difference is greater than t_m, the original values are maintained.
4. The method according to claim 3, wherein step S3 specifically comprises the following:
Step S3.1, a reading element r_i in the sub-vectors R_i^1 and R_i^2 is regarded as a candidate outlier, and a substitute value is provisionally assigned to it;
Step S3.2, the Pearson correlation coefficients ρ(R_i^1, R_i′) and ρ(R_i^2, R_i′) between R_i^1, R_i^2 and the vector R_i′ are calculated separately;
wherein R_i′ is obtained as follows: according to the remainder of the position of R_i in the reading vector R, if the remainder is 1, the 4 elements following the last element of R_i are taken to form the vector R_i′, and if the remainder is 5, the 4 elements immediately preceding the first element of R_i are taken to form the vector R_i′; j = 4, 5, and μ(R_i′) = (1/4) Σ r_l, r_l ∈ R_i′;
Step S3.3, ρ(R_i^1, R_i′) and ρ(R_i^2, R_i′) are compared with t_p, and the absolute values |μ(R_i^1) − μ(R_i′)| and |μ(R_i^2) − μ(R_i′)| of the differences between the element means of the corresponding sub-vectors and R_i′ are compared with t_m:
(1) If ρ(R_i^1, R_i′) ≥ t_p and, correspondingly, |μ(R_i^1) − μ(R_i′)| ≤ t_m, then R_i^1 and R_i′ are considered highly correlated with close element means, and the reading element r_i is an outlier; if ρ(R_i^2, R_i′) ≥ t_p and, correspondingly, |μ(R_i^2) − μ(R_i′)| ≤ t_m, then R_i^2 and R_i′ are considered highly correlated with close element means, and the reading element r_i is an outlier;
secondly, if ρ(R_i^1, R_i′) and ρ(R_i^2, R_i′) both satisfy the above conditions, the sub-vector with the larger correlation coefficient is taken as the one satisfying the condition; the corresponding substitute value is then calculated, r_i is updated with it, and the values at the corresponding positions of R_i and R_i′ in the reading vector R are updated;
(2) If ρ(R_i^1, R_i′) < t_p or, correspondingly, |μ(R_i^1) − μ(R_i′)| > t_m, and at the same time ρ(R_i^2, R_i′) < t_p or, correspondingly, |μ(R_i^2) − μ(R_i′)| > t_m, then the reading element r_i is considered not to be an outlier and the original values are maintained.
5. The method according to claim 4, wherein step S4 specifically comprises the following steps:
when the set of reading vectors to be executed is empty, all data processing is finished; the numbers of identical and distinct elements in the reading vector are counted, the distinct element values are arranged from the largest to the smallest number of identical occurrences, binary indexes are allocated according to the number of distinct values and compiled into the following dictionary, and finally each element reading in the reading vector R is replaced by its binary index:
Number of distinct elements n_i | Binary index representation s_i | Corresponding element readings r_i
1, 2 | 0, 1 | r_1, r_2
3, 4 | 00, 01, 10, 11 | r_1, r_2, r_3, r_4
5, 6, 7, 8 | 000, 001, ..., 111 | r_1, r_2, ..., r_8
CN202110469096.6A 2021-04-28 2021-04-28 Switch cabinet sensor network data compression method based on periodic transmission model Active CN113194430B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110469096.6A CN113194430B (en) 2021-04-28 2021-04-28 Switch cabinet sensor network data compression method based on periodic transmission model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110469096.6A CN113194430B (en) 2021-04-28 2021-04-28 Switch cabinet sensor network data compression method based on periodic transmission model

Publications (2)

Publication Number Publication Date
CN113194430A true CN113194430A (en) 2021-07-30
CN113194430B CN113194430B (en) 2022-11-01

Family

ID=76980099

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110469096.6A Active CN113194430B (en) 2021-04-28 2021-04-28 Switch cabinet sensor network data compression method based on periodic transmission model

Country Status (1)

Country Link
CN (1) CN113194430B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011067769A1 (en) * 2009-12-03 2011-06-09 Infogin Ltd. Shared dictionary compression over http proxy
US20140093881A1 (en) * 2012-01-31 2014-04-03 Life Technologies Corporation Methods and Computer Program Products for Compression of Sequencing Data
CN108494408A (en) * 2018-03-14 2018-09-04 电子科技大学 While-drilling density logger underground high speed real-time compression method based on Hash dictionary
CN108990108A (en) * 2018-07-10 2018-12-11 西华大学 A kind of compression method and system of adaptive real time spectrum data
CN110870019A (en) * 2017-10-16 2020-03-06 因美纳有限公司 Semi-supervised learning for training deep convolutional neural network sets

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011067769A1 (en) * 2009-12-03 2011-06-09 Infogin Ltd. Shared dictionary compression over http proxy
US20140093881A1 (en) * 2012-01-31 2014-04-03 Life Technologies Corporation Methods and Computer Program Products for Compression of Sequencing Data
CN110870019A (en) * 2017-10-16 2020-03-06 因美纳有限公司 Semi-supervised learning for training deep convolutional neural network sets
CN108494408A (en) * 2018-03-14 2018-09-04 电子科技大学 While-drilling density logger underground high speed real-time compression method based on Hash dictionary
CN108990108A (en) * 2018-07-10 2018-12-11 西华大学 A kind of compression method and system of adaptive real time spectrum data

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ABDELOUAHAB KHELIFATI ET AL.: "CORAD: Correlation-Aware Compression of Massive Time Series using Sparse Dictionary Coding", 《2019 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA)》 *
刘智巍: "Research on Visualization-Oriented Volume Data Compression Technology", China Master's Theses Full-text Database, Information Science and Technology Series *
叶娅兰 et al.: "A Correlation-Based Dictionary Learning Algorithm for Compressed Sensing", Journal of University of Electronic Science and Technology of China *

Also Published As

Publication number Publication date
CN113194430B (en) 2022-11-01

Similar Documents

Publication Publication Date Title
CN111459135B (en) Intelligent home fault state tracing method based on Internet of things and central control center
CN109597757B (en) Method for measuring similarity between software networks based on multidimensional time series entropy
CN111104241A (en) Server memory anomaly detection method, system and equipment based on self-encoder
CN111062620A (en) Intelligent analysis system and method for electric power charging fairness based on hybrid charging data
CN116320042A (en) Internet of things terminal monitoring control system for edge calculation
CN109656887B (en) Distributed time series mode retrieval method for mass high-speed rail shaft temperature data
CN113194430B (en) Switch cabinet sensor network data compression method based on periodic transmission model
CN110972174A (en) Wireless network interruption detection method based on sparse self-encoder
CN116992155B (en) User long tail recommendation method and system utilizing NMF with different liveness
CN117290364A (en) Intelligent market investigation data storage method
CN104718706A (en) Format identification for fragmented image data
CN117278058A (en) Data acquisition and processing method for climate financing project
CN110851708B (en) Negative sample extraction method, device, computer equipment and storage medium
CN116841973A (en) Data intelligent compression method and system for embedded database
CN112865898A (en) Antagonistic wireless communication channel model estimation and prediction method
CN117294314B (en) Fruit and vegetable can production information data record management method
CN111782734B (en) Data compression and decompression method and device
CN111859301B (en) Data reliability evaluation method based on improved Apriori algorithm and Bayesian network reasoning
CN117041121B (en) Internet of Things anomaly monitoring method and system based on data mining
CN109409655B (en) MWO-based optimization method for reliability sampling acceptance test scheme
WO2023226831A1 (en) Method and apparatus for determining weight of prediction block of coding unit
CN117375626B (en) Intelligent heat supply abnormal data transmission method and system
CN113434436B (en) Test case generation method and device, electronic equipment and storage medium
CN117421481A (en) Crowd searching method, system, electronic device and computer readable storage medium
CN117373225A (en) Energy data acquisition method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant