CN115866264B - Equipment operation data compression storage method for intelligent factory MES system

Publication number: CN115866264B (application number CN202310148634.0A)
Original language: Chinese (zh)
Inventors: 吴小松, 樊姗琴
Assignee: Nantong Shidao Intelligent Technology Co., Ltd.
Filing and priority date: 2023-02-22; published as CN115866264A on 2023-03-28; granted as CN115866264B on 2023-06-02
Legal status: Active (granted)
Classification: Image Analysis

Abstract

The invention relates to the field of data compression and storage, and in particular to an equipment operation data compression and storage method for an intelligent factory MES system. The method comprises the following steps: obtaining all edge pixel points in an operation monitoring image; taking the edge pixel points as starting pixel points in descending order of priority, obtaining all direction chain codes of all edge pixel points, and encoding and storing all direction chain codes; iteratively decomposing the operation monitoring image by a quadtree representation, without considering the edge pixel points, until all image blocks satisfy a homogeneity criterion; and storing all the resulting image blocks of the operation monitoring image. Because the edge pixel points and the image blocks of the operation monitoring image are stored separately, the influence of the edge pixel points on the size of the image blocks is removed while the important edge information of the operation monitoring image is preserved, which guarantees the compression efficiency of the operation monitoring image.

Description

Equipment operation data compression storage method for intelligent factory MES system
Technical Field
The invention relates to the field of image compression and storage, and in particular to an equipment operation data compression storage method for an intelligent factory MES system.
Background
An MES (Manufacturing Execution System) is a production informatization management system oriented to the workshop execution layer of a manufacturing enterprise. An MES provides management modules such as data management, production process control and data integration analysis, building a solid, reliable, comprehensive and feasible collaborative manufacturing management platform for the enterprise. Production process control relies mainly on operation monitoring video: during the operation of production equipment, managers analyze the operating state of the equipment from the monitoring video, regulate the equipment accordingly, and optimize the production management mode, thereby realizing an intelligent factory MES system.
Analyzing the operating state of the production equipment and regulating the equipment through operation monitoring video requires a large amount of real-time monitoring video, so the collected operation monitoring video needs to be compressed for storage.
The operation monitoring image mainly captures the production equipment, which is large in size and uniform in color, so the image has strong local similarity and redundancy. However, edge pixel points, where the regional attributes change abruptly, cannot be grouped into regular image blocks and therefore cannot be compressed by block coding; moreover, when the image is partitioned, the edge pixel points split originally continuous regions into many small image blocks, which reduces the block size and thus the compression efficiency of the operation monitoring image.
Disclosure of Invention
In order to solve the above problems, the present invention provides an equipment operation data compression storage method for an intelligent factory MES system, the method comprising:
acquiring an operation monitoring image, and acquiring all edge pixel points on the operation monitoring image;
calculating the priority of each edge pixel point, sequentially taking the edge pixel points as initial pixel points according to the sequence of the priority from large to small, and obtaining corresponding direction chain codes according to the initial pixel points; obtaining all direction chain codes of all edge pixel points, and carrying out coding storage on all direction chain codes;
iteratively decomposing the operation monitoring image by a quadtree representation, without considering the edge pixel points in the operation monitoring image, and judging, for every image block obtained in each decomposition, whether the block satisfies a homogeneity criterion: image blocks that do not satisfy the homogeneity criterion are further decomposed by the quadtree representation until all image blocks satisfy the criterion; and storing all the resulting image blocks of the operation monitoring image.
Further, the step of calculating the priority of each edge pixel point includes:
for any edge pixel point in the operation monitoring image, taking that edge pixel point as the center pixel point, calculating the difference degree between the center pixel point and every edge pixel point in its neighborhood, and recording the number of edge pixel points in the neighborhood whose difference degree from the center pixel point is greater than a first threshold as the priority of the edge pixel point corresponding to the center pixel point.
Further, the step of calculating the difference degree between any one of the edge pixel points and the center pixel point includes:
for the i-th edge pixel point in the neighborhood, computing the difference degree D_i between that edge pixel point and the center pixel point from their gray-value difference and gradient-direction difference, where g_i denotes the gray value of the i-th edge pixel point in the neighborhood, g_0 denotes the gray value of the center pixel point, θ_i denotes the gradient direction of the i-th edge pixel point in the neighborhood, and θ_0 denotes the gradient direction of the center pixel point.
Further, the step of sequentially taking the edge pixel points as the initial pixel points according to the order of the priority from the high to the low, and obtaining the corresponding direction chain codes according to the initial pixel points includes:
the set formed by all the edge pixel points is recorded as an edge set; the edge pixel point with the highest priority is taken as the starting pixel point, and in the 16-direction neighborhood centered on the starting pixel point, among all edge pixel points whose gray-value difference from the starting pixel point is smaller than 5, the one with the smallest difference value and the smallest direction value is taken as the second pixel point of the direction chain code of the starting pixel point; among the 16 directions centered on the second pixel point of the direction chain code, an edge pixel point lying in a direction whose direction difference from the direction value of the second pixel point has an absolute value not greater than 2 is taken as the third pixel point of the direction chain code of the starting pixel point; similarly, among the 16 directions centered on the third pixel point of the direction chain code, an edge pixel point lying in a direction whose direction difference from the direction value of the third pixel point has an absolute value not greater than 2 is taken as the fourth pixel point of the direction chain code of the starting pixel point; and so on, until all edge pixel points forming one direction chain code are obtained;
and removing all edge pixel points forming the direction chain code from the edge set, and repeating the steps until the edge set is empty or no new direction chain code is generated, so as to obtain all direction chain codes of all edge pixel points.
Further, the method for calculating the direction difference comprises the following steps:
for any two directions, the direction difference of the two directions is calculated as follows: the direction value of the first direction of the two directions is recorded as a and the direction value of the second direction as b; the initial direction difference is calculated as d0 = min(|a - b| mod 16, 16 - (|a - b| mod 16)), where |·| denotes taking the absolute value, mod denotes the remainder of integer division, and d0 denotes the initial direction difference of the two directions; it is then judged whether the first direction is clockwise or counterclockwise relative to the second direction: if the first direction is clockwise of the second direction, the direction difference of the two directions is the initial direction difference; if the first direction is counterclockwise of the second direction, the direction difference is the negative of the initial direction difference.
Further, the step of determining whether the image block meets the homogeneity criterion comprises:
for any one image block, the maximum gray value and the minimum gray value of all pixel points in the block other than the edge pixel points, together with the standard deviation of their gray values, are obtained; when the image block satisfies the homogeneity criterion, i.e. the difference between the maximum and minimum gray values in the block is not greater than a second threshold and the standard deviation of the gray values is not greater than a third threshold, the block is not decomposed further; when the image block does not satisfy the homogeneity criterion, i.e. the difference between the maximum and minimum gray values is greater than the second threshold or the standard deviation of the gray values is greater than the third threshold, the block continues to be divided according to the quadtree representation.
The embodiment of the invention has at least the following beneficial effects:
1. For the edge information in the operation monitoring image, the edge pixel points are regions where the regional attributes change abruptly; they cannot be grouped into regular image blocks and therefore cannot be compressed by block coding, and when the image is partitioned they split originally continuous regions into many small image blocks, which reduces the block size and hence the compression efficiency. The invention therefore applies a separate, low-loss lossy compression to the edge pixel points; this removes the influence of the edge pixel points on the size of the image blocks without losing the important edge information of the operation monitoring image, increases the block size, and thus guarantees the compression efficiency of the operation monitoring image.
2. Considering that the edge pixel points are mostly arranged continuously along the direction perpendicular to the gradient and have identical or similar gray values, the invention converts the position information of such edge pixel points into 16-direction chain codes, so that a run of continuous edge pixel points with identical or similar gray values is represented and encoded as (starting pixel point position, gray value, first-order difference chain code). This realizes lossy compression of the edge pixel points while ensuring that the important edge information of the operation monitoring image is not lost.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of the steps of a method for compressing and storing equipment operating data for an intelligent factory MES system according to one embodiment of the present invention;
FIG. 2 is a direction distribution diagram of the direction chain code according to an embodiment of the present invention;
fig. 3 is a huffman coding table of first-order differential values according to an embodiment of the present invention;
fig. 4 is an exemplary image provided by one embodiment of the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve its intended purpose, the specific implementation, structure, features and effects of the equipment operation data compression storage method for an intelligent factory MES system according to the invention are described in detail below with reference to the accompanying drawings and the preferred embodiments. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the device operation data compression storage method for the intelligent factory MES system provided by the invention with reference to the accompanying drawings.
Referring now to FIG. 1, a flowchart illustrating steps of a method for device operation data compression storage for an intelligent factory MES system according to one embodiment of the present invention is shown, the method comprising the steps of:
s001, acquiring an operation monitoring image.
An operation monitoring video is acquired by monitoring cameras installed in the factory, and every frame of the video is taken as an operation monitoring image. Managers analyze the operating state of the production equipment from the monitoring video while the equipment is running and regulate the equipment accordingly, so the video mainly monitors the production equipment, and the operation monitoring image mainly contains large, uniformly colored production equipment. When monitoring the operating state through the image, managers usually attend to the overall information of the equipment rather than its fine details.
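As a minimal sketch of this acquisition step (assuming OpenCV and an illustrative file name, neither of which is specified in the patent), the following reads the monitoring video and yields each frame as a grayscale operation monitoring image:

```python
import cv2

def operation_monitoring_images(video_path="monitoring.mp4"):
    """Yield every frame of the operation monitoring video as a grayscale image.
    The file name and the grayscale conversion are illustrative assumptions."""
    capture = cv2.VideoCapture(video_path)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        yield cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    capture.release()
```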
S002, edge detection is carried out on the operation monitoring image, the edge pixel points are converted into direction chain codes, and all the direction chain codes are encoded, so that low-loss compression of the edge pixel points is achieved.
It should be noted that, since the operation monitoring image mainly captures the production equipment, which is large in size and uniform in color, the image has strong local similarity and redundancy. Moreover, when managers monitor the operating state of the equipment through the image, they usually attend to the overall information rather than the fine details. In view of these characteristics, the method performs lossy compression of the operation monitoring image by block coding combined with subjective fidelity: the strong local similarity and redundancy mean that the image blocks obtained after partitioning are large, which ensures compression efficiency, and the detail information can be discarded appropriately, so the block size can be increased further as long as the gray-value differences within each block remain within the range acceptable to the human eye, i.e. the subjective fidelity of the block-coded image meets the requirement. Lossy compression by block coding combined with subjective fidelity therefore achieves high compression efficiency for the operation monitoring image.
For the edge information in the operation monitoring image, the edge pixel points are places where the regional attributes change abruptly; they cannot be grouped into regular image blocks and therefore cannot be compressed by block coding, and when the image is partitioned they split originally continuous regions into many small image blocks, which reduces the block size and hence the compression efficiency. The invention therefore applies a separate, low-loss lossy compression to the edge pixel points, removing their influence on the size of the image blocks while ensuring that the important edge information of the operation monitoring image is not lost, and thus guaranteeing the compression efficiency.
Because the edge pixel points are mostly arranged continuously along the direction perpendicular to the gradient and have identical or similar gray values, the invention converts the position information of such edge pixel points into 16-direction chain codes, so that a run of continuous edge pixel points with identical or similar gray values is represented and encoded as (starting pixel point position, gray value, first-order difference chain code), realizing lossy compression of the edge pixel points.
In this embodiment, edge detection is performed on the operation monitoring image, the edge pixel points are converted into direction chain codes, and all the direction chain codes are encoded; the specific steps for realizing low-loss compression of the edge pixel points are as follows:
1. and performing edge detection on the operation monitoring image to obtain all edge pixel points of the operation monitoring image.
It should be noted that edge information is the most basic feature of the operation monitoring image: edges are where the regional attributes change abruptly, where the uncertainty in the image is greatest, and where the image information is most concentrated. The edge pixel points are therefore important information of the operation monitoring image. To ensure that this important information is not lost, and to reduce the influence of the edge pixel points on the partitioning result, the invention applies a separate, low-loss lossy compression to the edge pixel points of the operation monitoring image.
In this embodiment, edge detection is performed on the operation monitoring image by a Canny edge detection algorithm, all edge pixel points of the operation monitoring image are obtained, and a gradient direction of each edge pixel point is calculated.
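The embodiment names the Canny algorithm but does not give its parameters; the sketch below shows one way to obtain the edge pixel points and their gradient directions with OpenCV, where the Canny thresholds (100, 200) and the Sobel kernel size are assumed values, not values from the patent:

```python
import cv2
import numpy as np

def detect_edges(gray):
    """Return a boolean edge mask and the per-pixel gradient direction (radians).
    The Canny thresholds and Sobel kernel size are illustrative assumptions."""
    edge_mask = cv2.Canny(gray, 100, 200) > 0
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    grad_dir = np.arctan2(gy, gx)    # gradient direction of every pixel
    return edge_mask, grad_dir
```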
2. And calculating the priority of each edge pixel point.
It should be noted that the invention converts the position information of edge pixel points with identical or similar gray values into 16-direction chain codes, so that a run of continuous edge pixel points with identical or similar gray values is represented as (starting pixel point position, gray value, first-order difference chain code); the starting pixel point of each 16-direction chain code must therefore be determined first. For an edge pixel point, the more edge pixel points in its neighborhood whose gradient direction and gray value differ greatly from its own, the more likely it lies where several edges intersect, so it can be regarded as a starting pixel point of those edges; taking such an edge pixel point as the starting pixel point yields longer 16-direction chain codes and improves the compression efficiency.
In this embodiment, for any edge pixel, the priority of the edge pixel is calculated according to the following specific calculation formula:
All edge pixel points in the neighborhood centered on that edge pixel point are acquired, and for the i-th edge pixel point in the neighborhood the difference degree between it and the center pixel point is calculated. The difference degree D_i is computed from the gray-value difference and the gradient-direction difference between the two points, where g_i denotes the gray value of the i-th edge pixel point in the neighborhood, g_0 denotes the gray value of the center pixel point, θ_i denotes the gradient direction of the i-th edge pixel point in the neighborhood, and θ_0 denotes the gradient direction of the center pixel point.
The larger the differences between the gray value and the gradient direction of the i-th edge pixel point and those of the center pixel point, the larger the difference degree D_i between the i-th edge pixel point and the center pixel point.
And marking the number of the edge pixel points, which are in the neighborhood of the central pixel point and have the difference degree with the central pixel point larger than a first threshold value, as the priority of the edge pixel point corresponding to the central pixel point, and acquiring the priority of all the edge pixel points. In this embodiment, the first threshold is 0.1, and in other embodiments, the practitioner may set the first threshold as desired.
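The following sketch illustrates the priority computation. Because the patent's exact difference-degree formula is given only as a figure, the formula used here (a normalized sum of the gray-value gap and the gradient-direction gap) and the neighborhood radius are assumptions; the counting against the first threshold of 0.1 follows the text:

```python
import numpy as np

def edge_priorities(gray, grad_dir, edge_mask, radius=2, first_threshold=0.1):
    """Priority of an edge pixel = number of edge pixels in its neighborhood whose
    difference degree from it exceeds the first threshold. The difference degree
    below is an assumed normalized form, not the patent's exact formula."""
    h, w = gray.shape
    priority = np.zeros((h, w), dtype=np.int32)
    for y, x in zip(*np.nonzero(edge_mask)):
        count = 0
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                ny, nx = y + dy, x + dx
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w and edge_mask[ny, nx]:
                    d_gray = abs(float(gray[ny, nx]) - float(gray[y, x])) / 255.0
                    d_dir = abs(grad_dir[ny, nx] - grad_dir[y, x])
                    d_dir = min(d_dir, 2 * np.pi - d_dir) / np.pi
                    if d_gray + d_dir > first_threshold:
                        count += 1
        priority[y, x] = count
    return priority
```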
3. The direction and direction difference of the direction chain code are obtained.
In this embodiment, taking any pixel point as the center, the 16 directions of its neighborhood are marked as direction 0 to direction 15, with direction values 0 to 15 respectively, as shown in Fig. 2.
For any two directions, the direction difference of the two directions is calculated as follows: the direction value of the first direction of the two directions is recorded as a and the direction value of the second direction as b; the initial direction difference is calculated as d0 = min(|a - b| mod 16, 16 - (|a - b| mod 16)), where |·| denotes taking the absolute value, mod denotes the remainder of integer division, and d0 denotes the initial direction difference of the two directions. It is then judged whether the first direction is clockwise or counterclockwise relative to the second direction: if the first direction is clockwise of the second direction, the direction difference of the two directions is the initial direction difference; if the first direction is counterclockwise of the second direction, the direction difference is the negative of the initial direction difference.
For example, the direction difference between direction 1 and direction 0 is 1, between direction 0 and direction 1 is -1, between direction 15 and direction 0 is -1, between direction 0 and direction 15 is 1, between direction 8 and direction 14 is -6, and between direction 14 and direction 8 is 6.
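One way to implement the signed direction difference, consistent with the worked examples above (the sign convention that clockwise offsets are positive is inferred from those examples rather than stated explicitly), is:

```python
def direction_difference(a: int, b: int) -> int:
    """Signed difference of direction a relative to direction b on the
    16-direction ring (direction values 0 to 15)."""
    d = (a - b) % 16          # remainder of division, as in the text
    return d if d <= 8 else d - 16

# reproduces the examples given above:
# direction_difference(1, 0) == 1,   direction_difference(0, 1) == -1
# direction_difference(15, 0) == -1, direction_difference(0, 15) == 1
# direction_difference(8, 14) == -6, direction_difference(14, 8) == 6
```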
4. And obtaining a starting pixel point according to the priority of the edge pixel point, and obtaining corresponding direction chain codes according to the starting pixel point to obtain all direction chain codes of all edge pixel points.
(1) The set formed by all the edge pixel points is recorded as an edge set; the edge pixel point with the largest priority in the edge set is taken as the starting pixel point, and it is judged whether an edge pixel point whose gray-value difference from the starting pixel point is smaller than 5 exists in the 16 directions centered on the starting pixel point; if so, continue with step (2); if not, the edge pixel point with the next highest priority in the edge set is taken as the starting pixel point and step (1) is repeated.
(2) Among all edge pixel points whose gray-value difference from the starting pixel point is smaller than 5, the edge pixel point with the smallest difference is obtained; if several edge pixel points share the smallest difference, their direction values with respect to the starting pixel point are obtained, and the one with the smallest direction value is taken as the second pixel point of the direction chain code of the starting pixel point.
(3) All edge pixel points whose gray-value difference from the second pixel point of the direction chain code is smaller than 5 are obtained, and, among the 16 directions centered on the second pixel point, it is judged according to the direction value of the second pixel point whether any of them lies in a direction whose direction difference from that direction value has an absolute value not greater than 2: if none exists, the current direction chain code is terminated and step (1) is repeated; if exactly one exists, that edge pixel point is taken as the third pixel point of the direction chain code of the starting pixel point; if several exist, the edge pixel point whose direction has the smallest absolute direction difference from the direction value of the second pixel point is taken as the third pixel point.
(4) Step (3) is repeated, and the number of pixel points forming the direction chain code of the starting pixel point is obtained: if the number is less than 4, the direction chain code is discarded; if the number is not less than 4, the direction chain code is kept, and the edge pixel points corresponding to all pixel points forming the chain code are removed from the edge set to obtain a new edge set.
Repeating the steps (1) to (4) until the edge set is empty or no new direction chain code is generated.
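A compact sketch of this chain-code construction loop is given below. It reuses direction_difference from the earlier sketch; the 16 neighbor offsets assume that Fig. 2 numbers the 16 border cells of the 5x5 neighborhood (the patent's exact numbering is not reproduced), and the handling of discarded short chains is a simplification:

```python
import numpy as np

# assumed Fig. 2 geometry: the 16 directions are the border cells of the 5x5 neighborhood
DIR_OFFSETS = [(0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (2, -1), (2, -2), (1, -2),
               (0, -2), (-1, -2), (-2, -2), (-2, -1), (-2, 0), (-2, 1), (-2, 2), (-1, 2)]

def build_chain_codes(gray, edge_mask, priority, min_length=4):
    """Greedy construction of 16-direction chain codes following steps (1)-(4):
    start from the highest-priority remaining edge pixel, extend while the gray
    difference is below 5 and the turn between steps is at most 2 directions,
    and keep only chains with at least 4 pixels."""
    edge_set = {tuple(p) for p in np.argwhere(edge_mask)}
    chains = []
    while edge_set:
        start = max(edge_set, key=lambda p: priority[p])        # step (1)
        chain, dirs, cur, prev_dir = [start], [], start, None
        while True:
            candidates = []
            for d, (dy, dx) in enumerate(DIR_OFFSETS):
                nxt = (cur[0] + dy, cur[1] + dx)
                if nxt not in edge_set or nxt in chain:
                    continue
                if abs(int(gray[nxt]) - int(gray[cur])) >= 5:
                    continue
                if prev_dir is not None and abs(direction_difference(d, prev_dir)) > 2:
                    continue
                candidates.append((abs(int(gray[nxt]) - int(gray[cur])), d, nxt))
            if not candidates:
                break
            if prev_dir is None:
                _, d, nxt = min(candidates)                     # step (2): gray gap, then direction value
            else:
                _, d, nxt = min((abs(direction_difference(dd, prev_dir)), dd, nn)
                                for _, dd, nn in candidates)    # step (3): smallest turn
            chain.append(nxt)
            dirs.append(d)
            cur, prev_dir = nxt, d
        if len(chain) >= min_length:                            # step (4)
            chains.append((start, int(gray[start]), dirs))
            edge_set -= set(chain)
        else:
            edge_set.discard(start)   # simplification: drop dead-end starts to guarantee progress
    return chains
```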
Any direction chain code is represented by the position of its starting pixel point, the gray value of the starting pixel point and a direction value sequence, where the direction value sequence consists of the direction value of each subsequent pixel point relative to its preceding pixel point among all the pixel points forming the chain code.
For example, a direction chain code is recorded as (position of the starting pixel point, gray value of the starting pixel point, direction value sequence).
5. The direction chain code is encoded.
It should be noted that, since the edge pixel points are mostly arranged continuously along the direction perpendicular to the gradient, and the gradient directions of the edge pixel points forming the same edge are approximately the same, the direction values in the direction value sequence obtained above differ only slightly. In the first-order difference sequence of the direction value sequence, the probability of the difference value 0 is therefore the largest, and the probabilities of the values 1, -1, 2 and -2 decrease in turn; Huffman coding of the first-order difference values accordingly improves the compression efficiency of the edge pixel points.
In this embodiment, first-order differencing is applied to the direction value sequence of a direction chain code to obtain its first-order difference sequence, recorded as the first-order differential chain code, which is encoded by Huffman coding. The Huffman coding table of the first-order differential chain code is shown in Fig. 3: the difference value 0 is assigned the shortest codeword, and the codewords for the difference values 1, -1, 2 and -2 grow progressively longer.
For example, for the direction chain code above, first-order differencing of its direction value sequence gives the first-order differential chain code, which is then Huffman coded. The first element of the first-order differential chain code is the first direction value itself (10 in this example), and since the direction values range from 0 to 15 it is encoded as 4-bit binary; the subsequent first-order difference values are encoded according to the Huffman coding table.
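As an illustration of this step, the sketch below encodes a hypothetical direction value sequence. The codewords for the first-order difference values are an assumed canonical assignment consistent with the stated probability ordering, since the exact table of Fig. 3 is not reproduced in the text; direction_difference is the function from the earlier sketch:

```python
# assumed Huffman table for first-order difference values (Fig. 3 not reproduced)
HUFFMAN_TABLE = {0: "0", 1: "10", -1: "110", 2: "1110", -2: "1111"}

def encode_direction_sequence(direction_values):
    """Encode a direction value sequence: the first direction value (0 to 15) as
    4-bit binary, then the Huffman codeword of each signed first-order difference."""
    bits = format(direction_values[0], "04b")
    for prev, cur in zip(direction_values, direction_values[1:]):
        bits += HUFFMAN_TABLE[direction_difference(cur, prev)]
    return bits

# hypothetical sequence whose first direction value is 10
print(encode_direction_sequence([10, 10, 11, 9, 10]))   # prints 1010010111110
```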
S003, without considering the edge pixel points in the operation monitoring image, the operation monitoring image is iteratively decomposed by the quadtree representation until all image blocks satisfy the homogeneity criterion, all image blocks of the partitioned operation monitoring image are obtained, and all image blocks are stored.
1. The operation monitoring image is decomposed by the quadtree representation to obtain all image blocks of the partitioned operation monitoring image.
It should be noted that, for the edge information in the operation monitoring image, the edge pixel points are places where the regional attributes change abruptly; they cannot be grouped into regular image blocks and therefore cannot be compressed by block coding, and when the image is partitioned they split originally continuous regions into many small image blocks, which reduces the block size and hence the compression efficiency of the operation monitoring image. The invention therefore applies a separate, low-loss lossy compression to the edge pixel points, treats the edge pixel points of the operation monitoring image as variable pixel points, and then partitions the image; this removes the influence of the edge pixel points on the size of the image blocks while ensuring that the important edge information of the operation monitoring image is not lost, and thus guarantees the compression efficiency.
In this embodiment, the image is decomposed by the quadtree representation, specifically including the steps of:
the image is stored by adopting a pyramid type data structure in a quadtree representation method, wherein the tree root of the quadtree corresponds to the whole image, leaf nodes correspond to single pixels or square matrixes formed by pixels with the same characteristics, each non-leaf node is provided with 4 child nodes, the quadtree representation method is used for decomposing the image into multiple stages, the tree root is of the 0 th stage, and the tree root is forked for multiple stages every time.
Without considering the edge pixel points in the operation monitoring image, the image is first divided into 4 image blocks of equal size by the quadtree representation; it is then judged whether each of the 4 blocks satisfies a given homogeneity criterion. A block that satisfies the criterion is kept unchanged; otherwise it is again divided into 4 blocks and judged against the criterion, until all image blocks satisfy the given criterion. Edge pixel points inside an image block do not participate in the evaluation of the homogeneity criterion.
The homogeneity criterion of the present invention can be expressed as: g_max - g_min <= T2 and σ <= T3, where g_max and g_min denote the maximum and minimum gray values in the image block, σ denotes the standard deviation of the gray values of all pixel points in the image block, T2 denotes the second threshold and T3 denotes the third threshold. When the image block satisfies the homogeneity criterion, i.e. the difference between the maximum and minimum gray values in the block is not greater than the second threshold and the standard deviation of the gray values of all pixel points in the block is not greater than the third threshold, the block is not decomposed further; when the image block does not satisfy the homogeneity criterion, i.e. the difference between the maximum and minimum gray values is greater than the second threshold or the standard deviation of the gray values is greater than the third threshold, the block continues to be divided according to the quadtree representation, until all image blocks satisfy the homogeneity criterion or an image block reaches the minimum block size prescribed by the invention. In this embodiment the second threshold and the third threshold are fixed empirically; in other embodiments, the practitioner may set the second and third thresholds as desired.
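A minimal sketch of the homogeneity test, assuming placeholder threshold values (the patent's concrete second and third thresholds are not reproduced in the text) and reading the criterion as excluding the edge pixel points from both statistics:

```python
import numpy as np

def is_homogeneous(block_gray, block_edge_mask, t2=10.0, t3=5.0):
    """A block is homogeneous if (max gray - min gray) <= t2 and the standard
    deviation of the gray values <= t3, with edge pixels excluded.
    t2 and t3 are placeholder values, not the patent's thresholds."""
    values = block_gray[~block_edge_mask].astype(np.float64)
    if values.size == 0:             # block contains only edge pixel points
        return True
    return (values.max() - values.min() <= t2) and (values.std() <= t3)
```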
2. And compressing and storing all the image blocks.
When the operation monitoring image is decomposed by the quadtree representation, each image block is judged against the homogeneity criterion and marked 0 if it satisfies the criterion and 1 otherwise; blocks marked 0 stop decomposing while blocks marked 1 continue to be decomposed. This yields the decomposition code of the operation monitoring image, and the sequence of gray-value means of the image blocks is obtained according to the decomposition code.
For example, for the image shown in Fig. 4 decomposed by the quadtree representation, the decomposition code is 10010 and the gray-value mean sequence is 24, 59, 45, 99, 74, 76, 89.
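A sketch of the recursive decomposition coding is given below, using is_homogeneous from the previous sketch. The minimum block size and the convention for coding minimum-size blocks are assumptions, so the output for a given image may differ slightly from the Fig. 4 example:

```python
import numpy as np

def quadtree_encode(gray, edge_mask, is_homogeneous, min_size=2):
    """Pre-order quadtree coding: write 0 for a block kept as a leaf and 1 for a
    block split into 4 equal sub-blocks, and record the gray-value mean of each
    leaf with the edge pixel points excluded."""
    code, means = [], []

    def visit(y0, x0, size):
        block = gray[y0:y0 + size, x0:x0 + size]
        mask = edge_mask[y0:y0 + size, x0:x0 + size]
        if size <= min_size or is_homogeneous(block, mask):
            code.append("0")
            values = block[~mask]
            means.append(float(values.mean()) if values.size else float(block.mean()))
        else:
            code.append("1")
            half = size // 2
            for dy, dx in ((0, 0), (0, half), (half, 0), (half, half)):
                visit(y0 + dy, x0 + dx, half)

    visit(0, 0, gray.shape[0])       # assumes a square image with power-of-two side length
    return "".join(code), means
```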
In summary, the invention obtains all edge pixel points in the operation monitoring image; takes the edge pixel points as starting pixel points in descending order of priority, obtains all direction chain codes of all edge pixel points, and encodes and stores all direction chain codes; iteratively decomposes the operation monitoring image by a quadtree representation, without considering the edge pixel points, until all image blocks satisfy the homogeneity criterion; and stores all the resulting image blocks. Because the edge pixel points and the image blocks of the operation monitoring image are stored separately, the influence of the edge pixel points on the size of the image blocks is removed while the important edge information is preserved, which guarantees the compression efficiency of the operation monitoring image.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. And the foregoing description has been directed to specific embodiments of this specification. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting thereof; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the scope of the embodiments of the present application, and are intended to be included within the scope of the present application.

Claims (3)

1. A method for compressing and storing equipment operation data for an intelligent factory MES system, the method comprising:
acquiring an operation monitoring image, and acquiring all edge pixel points on the operation monitoring image;
calculating the priority of each edge pixel point, sequentially taking the edge pixel points as initial pixel points according to the sequence of the priority from large to small, and obtaining corresponding direction chain codes according to the initial pixel points; obtaining all direction chain codes of all edge pixel points, and carrying out coding storage on all direction chain codes;
iteratively decomposing the operation monitoring image by a quadtree representation, without considering the edge pixel points in the operation monitoring image, and judging, for every image block obtained in each decomposition, whether the block satisfies a homogeneity criterion: image blocks that do not satisfy the homogeneity criterion are further decomposed by the quadtree representation until all image blocks satisfy the criterion; and storing all the resulting image blocks of the operation monitoring image;
the contents of the direction chain code are as follows: starting pixel position, gray value and first-order differential chain code;
the step of not considering the edge pixel points in the operation monitoring image refers to taking the edge pixel points in the operation monitoring image as variable pixel points;
the step of calculating the priority of each edge pixel point comprises the following steps:
for any edge pixel point in the operation monitoring image, taking that edge pixel point as the center pixel point, calculating the difference degree between the center pixel point and every edge pixel point in its neighborhood, and recording the number of edge pixel points in the neighborhood whose difference degree from the center pixel point is greater than a first threshold as the priority of the edge pixel point corresponding to the center pixel point;
the step of sequentially taking the edge pixel points as the initial pixel points according to the order of the priority from large to small and obtaining the corresponding direction chain codes according to the initial pixel points comprises the following steps:
the set formed by all the edge pixel points is recorded as an edge set; the edge pixel point with the highest priority is taken as the starting pixel point, and in the 16-direction neighborhood centered on the starting pixel point, among all edge pixel points whose gray-value difference from the starting pixel point is smaller than 5, the one with the smallest difference value and the smallest direction value is taken as the second pixel point of the direction chain code of the starting pixel point; among the 16 directions centered on the second pixel point of the direction chain code, an edge pixel point lying in a direction whose direction difference from the direction value of the second pixel point has an absolute value not greater than 2 is taken as the third pixel point of the direction chain code of the starting pixel point; similarly, among the 16 directions centered on the third pixel point of the direction chain code, an edge pixel point lying in a direction whose direction difference from the direction value of the third pixel point has an absolute value not greater than 2 is taken as the fourth pixel point of the direction chain code of the starting pixel point; and so on, until all edge pixel points forming one direction chain code are obtained;
removing all edge pixel points forming the direction chain code from the edge set, repeating the steps until the edge set is empty or no new direction chain code is generated, and obtaining all direction chain codes of all edge pixel points;
the method for calculating the direction difference comprises the following steps:
for any two directions, the direction difference of the two directions is calculated as follows: the direction value of the first direction of the two directions is recorded as a and the direction value of the second direction as b; the initial direction difference is calculated as d0 = min(|a - b| mod 16, 16 - (|a - b| mod 16)), where |·| denotes taking the absolute value, mod denotes the remainder of integer division, and d0 denotes the initial direction difference of the two directions; it is then judged whether the first direction is clockwise or counterclockwise relative to the second direction: if the first direction is clockwise of the second direction, the direction difference of the two directions is the initial direction difference; if the first direction is counterclockwise of the second direction, the direction difference is the negative of the initial direction difference.
2. The method for compressing and storing equipment operation data for intelligent factory MES system according to claim 1, wherein the step of calculating a degree of difference between any one of the edge pixels and the center pixel comprises:
for the i-th edge pixel point in the neighborhood, computing the difference degree D_i between that edge pixel point and the center pixel point from their gray-value difference and gradient-direction difference, where g_i denotes the gray value of the i-th edge pixel point in the neighborhood, g_0 denotes the gray value of the center pixel point, θ_i denotes the gradient direction of the i-th edge pixel point in the neighborhood, θ_0 denotes the gradient direction of the center pixel point, and D_i denotes the difference degree between the i-th edge pixel point and the center pixel point.
3. The method for equipment operation data compression storage for intelligent factory MES system according to claim 1, wherein the step of determining whether an image block satisfies homogeneity criteria comprises:
for any one image block, the maximum gray value and the minimum gray value of all pixel points in the block other than the edge pixel points, together with the standard deviation of their gray values, are obtained; when the image block satisfies the homogeneity criterion, i.e. the difference between the maximum and minimum gray values in the block is not greater than a second threshold and the standard deviation of the gray values is not greater than a third threshold, the block is not decomposed further; when the image block does not satisfy the homogeneity criterion, i.e. the difference between the maximum and minimum gray values is greater than the second threshold or the standard deviation of the gray values is greater than the third threshold, the block continues to be divided according to the quadtree representation.