CN115866264A - Equipment operation data compression and storage method for intelligent factory MES system - Google Patents
Equipment operation data compression and storage method for intelligent factory MES system Download PDFInfo
- Publication number
- CN115866264A CN115866264A CN202310148634.0A CN202310148634A CN115866264A CN 115866264 A CN115866264 A CN 115866264A CN 202310148634 A CN202310148634 A CN 202310148634A CN 115866264 A CN115866264 A CN 115866264A
- Authority
- CN
- China
- Prior art keywords
- pixel point
- edge pixel
- edge
- difference
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Images
Landscapes
- Image Analysis (AREA)
Abstract
The invention relates to the field of data compression and storage, in particular to a method for compressing and storing equipment operation data of an intelligent factory MES system, which comprises the following steps: acquiring all edge pixel points on the operation monitoring image; according to the sequence of the priorities from small arrival, sequentially taking the edge pixel points as initial pixel points, obtaining all direction chain codes of all the edge pixel points, and coding and storing all the direction chain codes; performing iterative decomposition on the running monitoring image for multiple times by a quad-tree representation method without considering edge pixel points in the running monitoring image until all image blocks meet the homogeneity criterion; and storing all image blocks of the obtained running monitoring image. The invention respectively stores the edge pixel points and the image blocks of the running monitoring image, and removes the influence of the edge pixel points on the size of the image blocks of the running monitoring image while ensuring that the important edge information of the running monitoring image is not lost, thereby ensuring the compression efficiency of the running monitoring image.
Description
Technical Field
The invention relates to the field of image compression and storage, in particular to an equipment operation data compression and storage method for an intelligent factory MES system.
Background
The MES system is a production informatization management system facing to a workshop execution layer of a manufacturing enterprise. The MES can provide management modules including data management, production process control, data integration analysis and the like for enterprises, and creates a solid, reliable, comprehensive and feasible manufacturing cooperative management platform for the enterprises. The production process control mainly comprises the steps that a production monitoring video is adopted, a manager analyzes the running state of the production equipment through the running monitoring video in the running process of the production equipment, the production equipment is regulated, the production, manufacturing and management mode is optimized, and the purpose of creating an intelligent factory MES system is achieved.
The running state of the production equipment is analyzed through the running monitoring video, the production equipment is regulated and controlled, a large amount of real-time running monitoring videos are obtained by relying on the large amount of real-time running monitoring videos, and the collected running monitoring videos need to be compressed.
Because the operation monitoring image mainly monitors the production equipment, the volume of the production equipment is larger, and the color of the production equipment is single, so that the operation monitoring image for monitoring the production equipment has stronger local similarity and redundancy; however, edge pixels with abrupt change of region attributes cannot be divided into regular image blocks, and cannot be compressed by block coding, and meanwhile, the edge pixels divide originally continuous regions into a plurality of image blocks with smaller sizes when being blocked, so that the sizes of the image blocks of the running monitoring images are influenced, and further the compression efficiency of the running monitoring images is influenced.
Disclosure of Invention
In order to solve the problems, the invention provides a method for compressing and storing equipment operation data of a smart factory MES system, which comprises the following steps:
acquiring an operation monitoring image, and acquiring all edge pixel points on the operation monitoring image;
calculating the priority of each edge pixel point, taking the edge pixel points as initial pixel points in sequence according to the sequence of the priority from small to small, and obtaining corresponding direction chain codes according to the initial pixel points; acquiring all direction chain codes of all edge pixel points, and coding and storing all direction chain codes;
and (2) performing repeated iterative decomposition on the running monitoring image by a quad-tree representation method without considering edge pixel points in the running monitoring image, and judging whether the image blocks meet the homogeneity criterion for all image blocks obtained by decomposition each time: for the image blocks meeting the homogeneity criterion, no longer performing decomposition, and for the image blocks not meeting the homogeneity criterion, performing decomposition on the image blocks by a quad-tree representation method until all the image blocks meet the homogeneity criterion; and storing all image blocks of the obtained running monitoring image.
Further, the step of calculating the priority of each edge pixel point includes:
for any edge pixel point in the running monitoring image, calculating the difference degree of any edge pixel point and the center pixel point for all edge pixel points in the neighborhood taking the edge pixel point as the center pixel point, and recording the number of the edge pixel points with the difference degree in the neighborhood of the center pixel point larger than a first threshold as the preference degree of the edge pixel point corresponding to the center pixel point.
Further, the step of calculating the difference between any one edge pixel point and the center pixel point includes:
for the second in the neighborhoodThe calculation formula of the difference degree between the edge pixel point and the central pixel point is as follows:
in the formula (I), the compound is shown in the specification,indicating the ^ th or greatest in the neighborhood>The gray value of each edge pixel point is greater or less>Represents the gray value of the central pixel point, and>indicating a th @ina neighborhood>The gradient direction of each edge pixel point is combined>Representing the direction of the gradient representing the central pixel, < > or >>Indicates the fifth->The difference between each edge pixel point and the center pixel point.
Further, the step of sequentially taking the edge pixel points as initial pixel points according to the order of the priority from small arrival, and obtaining the corresponding direction chain code according to the initial pixel points comprises:
recording a set formed by all edge pixel points as an edge set; taking the edge pixel point with the maximum priority as an initial pixel point, and acquiring the edge pixel point with the initial pixel point as the centerIn the neighborhood, the edge pixel point with the minimum difference value and the minimum direction value in all the edge pixel points with the gray value difference value of the initial pixel point less than 5 is used as a second pixel point in the direction chain code of the initial pixel point; judging 16 directions taking a second pixel point in the direction chain code of the initial pixel point as a center, and taking an edge pixel point in the direction of which the absolute value of the direction difference between the direction value and the direction value of the second pixel point is less than or equal to 2 as a third pixel point in the direction chain code of the initial pixel point; similarly, in 16 directions centering on the third pixel point in the direction chain code of the initial pixel point, the edge pixel point in the direction of which the absolute value of the direction difference between the direction value and the direction value of the third pixel point is less than or equal to 2 is judged as the fourth pixel point in the direction chain code of the initial pixel point; by analogy, all edge pixel points forming the direction chain code are obtained;
and removing all edge pixel points forming the direction chain codes from the edge set, and repeating the steps until the edge set is empty or no new direction chain codes are generated, so as to obtain all direction chain codes of all edge pixel points.
Further, the method for calculating the direction difference includes:
for any two directions, a specific method for calculating the direction difference of the two directions is as follows: denote the direction value of the first of the two directions asThe direction value in the second direction is recorded as->Calculating an initial direction difference->In which>Represents taking absolute values, < '> or <' > based on>Represents a division, a remainder, or a combination thereof>Represents the initial direction difference of the two directions; determining whether the first direction is clockwise or counter-clockwise of the second relative to the second direction: if the first direction is clockwise of the second, the direction difference of the two directions is the initial direction difference, and if the first direction is counterclockwise of the second, the direction difference of the two directions is the negative of the initial direction difference.
Further, the step of determining whether the image block meets the homogeneity criterion includes:
for any image block, acquiring the maximum gray value and the minimum gray value of all other pixel points except the edge pixel point in the image block, and the standard deviation of the gray values of all the pixel points; when the image block meets the homogeneity criterion, namely the difference value between the maximum gray value and the minimum gray value in the image block is not larger than a second threshold value and the standard difference of the gray values in the image block is not larger than a third threshold value, the image block is not decomposed continuously; and when the image block does not meet the homogeneity criterion, namely the difference value between the maximum gray value and the minimum gray value in the image block is greater than a second threshold or the standard difference of the gray values of all pixel points in the image block is greater than a third threshold, continuously dividing the image block according to a quadtree representation method.
The embodiment of the invention at least has the following beneficial effects:
1. the invention considers that the edge pixel points are the places with the abrupt change of the area attribute for the edge information in the running monitoring image, so the edge pixel points can not be divided into regular image blocks, and can not compress the edge pixel points through block coding, and simultaneously, the edge pixel points can divide the originally continuous area into a plurality of image blocks with smaller sizes when being blocked, thereby influencing the size of the image blocks of the running monitoring image and further influencing the compression efficiency of the running monitoring image.
2. Considering that most edge pixels are continuously arranged along the vertical direction of the gradient direction and have the same or similar gray values, the invention converts the position information of the edge pixels with the same or similar gray values into 16-direction chain codes, further expresses a plurality of edge pixels with the same or similar gray values by using (the initial pixel point position, the gray values and the first-order check chain codes), and codes (the initial pixel point position, the gray values and the first-order check chain codes), thereby realizing the lossy compression of the edge pixels and ensuring that the important edge information of the running monitoring image is not lost.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions and advantages of the prior art, the drawings used in the embodiments or the description of the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart illustrating steps of a method for compressing and storing equipment operation data for a smart factory MES system according to an embodiment of the present invention;
fig. 2 is a direction distribution diagram of a direction chain code according to an embodiment of the present invention;
fig. 3 is a huffman coding table of first order difference values according to an embodiment of the present invention;
FIG. 4 is an exemplary image provided by one embodiment of the present invention.
Detailed Description
To further illustrate the technical means and effects of the present invention for achieving the predetermined objects, the following detailed description will be given to the method for compressing and storing the operation data of the equipment used in the MES system of the smart factory according to the present invention with reference to the accompanying drawings and the preferred embodiments. In the following description, different "one embodiment" or "another embodiment" refers to not necessarily the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of the equipment operation data compression and storage method for the intelligent factory MES system provided by the invention in detail with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of steps of a method for compressing and storing equipment operation data for an intelligent factory MES system according to an embodiment of the present invention is shown, the method includes the following steps:
and S001, acquiring an operation monitoring image.
And collecting operation monitoring videos through a monitoring camera arranged in a factory, and taking each frame image of the operation monitoring videos as operation monitoring images. A manager analyzes the running state of the production equipment and regulates the production equipment through a running monitoring video in the running process of the production equipment, so that the running monitoring video mainly monitors the production equipment, and the running monitoring image mainly comprises the production equipment with large volume and single color; when the operation state of the production equipment is monitored by the operation monitoring image, the manager generally monitors the whole information of the production equipment without paying special attention to the detailed information of the production equipment.
And S002, performing edge detection on the operation monitoring image, converting edge pixel points into direction chain codes, and encoding all the direction chain codes to realize lossless compression of the edge pixel points.
It should be noted that, because the operation monitoring image mainly monitors the production equipment, the volume of the production equipment is large, and the color of the production equipment is single, the operation monitoring image for monitoring the production equipment has strong local similarity and redundancy; when the operation state of the production equipment is monitored by the operation monitoring image, the manager generally monitors the whole information of the production equipment without paying special attention to the detailed information of the production equipment. By combining the characteristics, the method combines subjective fidelity and performs lossy compression on the running monitoring image through block coding, wherein the running monitoring image has stronger local similarity and redundancy, so that the size of an image block obtained after the running monitoring image is blocked can be ensured to be large, and the compression efficiency of the running monitoring image is further ensured; because the detail information of the running monitoring image can be properly discarded, the size of the image block can be increased as long as the difference of the gray values of all the pixel points in the image block is ensured to be within the range accepted by human eyes, namely the subjective fidelity of the image block after block coding meets the requirement, and the compression efficiency of the running monitoring image is further ensured; therefore, the compression efficiency of lossy compression of the running monitoring image by block coding in combination with subjective fidelity is high.
For the edge information in the running monitoring image, because the edge pixel points are the places with abrupt change of the region attributes, the edge pixel points cannot be divided into regular image blocks, and the edge pixel points cannot be compressed through block coding, meanwhile, because the edge pixel points can divide the originally continuous regions into a plurality of image blocks with smaller sizes when the regions are blocked, the sizes of the image blocks of the running monitoring image are influenced, and further the compression efficiency of the running monitoring image is influenced.
Because most edge pixel points are continuously arranged along the vertical direction of the gradient direction and have the same or similar gray values, the invention converts the position information of the edge pixel points with the same or similar gray values into 16-direction chain codes, and then expresses a plurality of edge pixel points with the same or similar gray values by using (the initial pixel point position, the gray values and the first-order check chain codes), and codes (the initial pixel point position, the gray values and the first-order check chain codes) to realize the lossy compression of the edge pixel points.
In this embodiment, carry out edge detection to operation monitoring image, convert edge pixel into direction chain code, encode all direction chain codes, realize the concrete step of the lossless compression to edge pixel does:
1. and performing edge detection on the running monitoring image to obtain all edge pixel points of the running monitoring image.
It should be noted that the edge information is the most basic feature of the running monitoring image, is a place where the regional attribute is mutated, is a place where uncertainty in the running monitoring image is the largest, and is also a place where the running monitoring image information is most concentrated, so that the edge pixel points of the running monitoring image are important information of the running monitoring image, and in order to ensure that the important information of the running monitoring image is not lost, and also in order to reduce the influence of the edge pixel points on the blocking result of the running monitoring image, the edge pixel points of the running monitoring image are subjected to lossy compression with a small loss degree individually.
In this embodiment, edge detection is performed on the running monitoring image through a Canny edge detection algorithm, all edge pixel points of the running monitoring image are obtained, and the gradient direction of each edge pixel point is calculated.
2. And calculating the priority of each edge pixel point.
It should be noted that, the present invention converts the position information of the edge pixel points with the same or similar gray value into the 16-direction chain code, and further expresses a plurality of edge pixel points with the same or similar continuous gray value by using (the initial pixel point position, the gray value, the first-order check chain code), so that the initial pixel point of the 16-direction chain code needs to be determined first; for an edge pixel point, the larger the number of edge pixel points which have larger differences between the gradient direction and the gray value in the neighborhood of the edge pixel point is, the larger the number of edge pixel points is, the edge pixel point is the intersected position of a plurality of edges, so that the edge pixel point can be considered as the initial pixel point of the plurality of edges, the obtained 16-direction chain code length is taken as the initial pixel point, and the compression efficiency is improved.
In this embodiment, for any edge pixel, the preference of the edge pixel is calculated, and the specific calculation formula is:
obtaining the edge pixel as the center pixelAll edge pixel points in the neighborhood are calculated>The fifth in the neighborhood>The difference between each edge pixel point and the center pixel point >>The calculation formula of the difference degree between each edge pixel point and the center pixel point is as follows:
in the formula (I), the compound is shown in the specification,indicating the ^ th or greatest in the neighborhood>The gray value of each edge pixel point is greater or less>Represents the gray value of the central pixel point, and>indicating a th @ina neighborhood>The gradient direction of each marginal pixel point>Representing the direction of the gradient representing the central pixel, < > or >>Indicates the fifth->The difference between each edge pixel point and the center pixel point.
For the firstThe greater the difference between the gray value and the gradient direction of each edge pixel point and the central pixel point is, the greater the difference between the edge pixel point and the central pixel point is, and the ^ greater the difference is>The greater the difference between each edge pixel point and the center pixel point.
Recording the number of edge pixels, the difference degree of which is greater than a first threshold value, of all edge pixels in the neighborhood taking the edge pixel as a center pixel as the preference degree of the edge pixel corresponding to the center pixel, and acquiring the preference degrees of all the edge pixels. In this embodiment, the first threshold is 0.1, and in other embodiments, the implementer may set the first threshold as needed.
3. The direction and the direction difference of the direction chain code are obtained.
In this embodiment, any one pixel point is taken as a center to obtain the pixel pointThe 16 directions of the neighborhood are respectively marked as 0 direction to 15 directions, and the direction values are respectively 0 to 15, as shown in FIG. 2.
For any two directions, a specific method for calculating the direction difference of the two directions is as follows: record the direction value of the first of the two directions asThe direction value in the second direction is recorded as->Calculating an initial direction difference->Wherein is present>Represents taking the absolute value, is selected>Represents a division, a remainder, or a combination thereof>Represents the initial direction difference of the two directions; determining whether the first direction is clockwise or counter-clockwise of the second relative to the second direction: if the first direction is clockwise of the second, the direction difference of the two directions is the initial direction difference, and if the first direction is counterclockwise of the second, the direction difference of the two directions is the negative of the initial direction difference.
For example, the direction difference between the 1 direction and the 0 direction is 1,0 direction and the 1 direction is-1, the direction difference between the 15 direction and the 0 direction is-1,0 direction and the 15 direction is 1,8 direction and the 14 direction is-6, and the direction difference between the 14 direction and the 8 direction is 6.
4. And acquiring initial pixel points according to the priority of the edge pixel points, acquiring corresponding direction chain codes according to the initial pixel points, and acquiring all direction chain codes of all edge pixel points.
(1) Recording a set formed by all edge pixel points as an edge set; taking the edge pixel point with the maximum priority in the edge set as an initial pixel point, judging whether an edge pixel point with the gray value difference value smaller than 5 with the initial pixel point exists in 16 directions taking the initial pixel point as the center, and if so, continuing the step (2); and (4) if the edge pixel points with the second highest priority in the edge set do not exist, taking the edge pixel points with the second highest priority in the edge set as initial pixel points, and repeating the step (1).
(2) And obtaining an edge pixel point with the minimum difference value among all edge pixel points with the gray value difference value of the initial pixel point smaller than 5, if a plurality of edge pixel points with the minimum difference value exist, obtaining the direction value of the edge pixel points with the minimum difference value by taking the initial pixel point as the center, and taking the edge pixel point with the minimum direction value in the edge pixel points with the minimum difference value as a second pixel point in the direction chain code of the initial pixel point.
(3) Obtaining all edge pixel points with the gray value difference of the second pixel point in the direction chain code being less than 5, judging whether edge pixel points exist in a plurality of directions in which the absolute value of the direction difference between the direction value and the direction value of the second pixel point is not more than 2 in 16 directions taking the second pixel point in the direction chain code as the center according to the direction value of the second pixel point in the direction chain code: if not, stopping obtaining the current direction chain code, and repeating the step (1); if one pixel exists, the edge pixel is taken as a third pixel in the direction chain code of the initial pixel; and if a plurality of pixel points exist, taking the edge pixel point with the minimum absolute value of the direction difference between the direction value and the direction value of the second pixel point as a third pixel point in the direction chain code of the initial pixel point.
(4) And (4) repeating the step (3) to obtain the number of all pixel points forming the direction chain code corresponding to the initial pixel point: if the number is less than 4, removing the direction chain code; if the number is not less than 4, the direction chain code is reserved, and edge pixel points corresponding to all pixel points forming the direction chain code are removed from the edge set to obtain a new edge set.
And (4) repeating the steps (1) to (4) until the edge set is empty or no new direction chain code is generated.
And for any direction chain code, expressing the position of an initial pixel point, the gray value of the initial pixel point and a direction value sequence, wherein the direction value sequence is formed by the direction values of a subsequent pixel point relative to a previous pixel point in all the pixel points forming the direction chain code.
For example, in a direction chain code, the position of the starting pixel point isThe gray value of the initial pixel point is->The direction value sequence is->
5. And encoding the direction chain code.
It should be noted that, most of the edge pixels are continuously arranged in the vertical direction of the gradient direction, and the gradient directions of the edge pixels forming the same edge are substantially the same, so that the difference of the direction values in the direction value sequence of the direction chain code obtained in the above steps is small, and therefore, in the first-order difference sequence of the direction value sequence, the probability that the first-order difference value is 0 is the largest, and the probabilities that the first-order difference values are 1, -1, 2, -2 are sequentially reduced, so that the huffman coding is performed on the first-order difference values in the first-order difference sequence, and the compression efficiency of the edge pixels is improved.
In this embodiment, a direction value sequence of a direction chain code is subjected to first order difference calculation to obtain a first order difference sequence of the direction value sequence, which is recorded as a first order difference chain code, and the first order difference chain code is encoded by huffman coding, where a huffman coding table of the first order difference chain code is shown in fig. 3, and specifically includes: the first order difference value is 0, corresponding to a code of(ii) a A first-order look-up value of 1, corresponding code of ^ 4>(ii) a A first order difference value of-1 and a corresponding code of->(ii) a A first order difference value of 2, corresponding to an encoding of ^ 4>(ii) a The first order difference value is-2, corresponding code is>。
For example, in the above example, the sequence of direction values of one directional chain code isIf the first-order difference chain code is greater than or equal to>Is subjected to Huffman coding and then is subjected to->The encoding in which the first 4 bits of encoding are direction values 10 in the first order differential chain code, since the direction values are 0 to 15, encoding is performed by 4-bit binary.
And S003, performing repeated iterative decomposition on the running monitoring image by a quad-tree representation method without considering edge pixel points in the running monitoring image until all the image blocks meet the homogeneity criterion, partitioning all the image blocks of the running monitoring image, and storing all the image blocks.
1. And decomposing the running monitoring image by a quad-tree representation method to obtain all image blocks of the running monitoring image for blocking.
It should be noted that, for the edge information in the running monitoring image, because the edge pixel points are the places where the region attribute is mutated, the edge pixel points cannot be divided into regular image blocks, and cannot be compressed by block coding, and meanwhile, because the edge pixel points divide the originally continuous region into a plurality of image blocks with smaller sizes when being blocked, the size of the image block of the running monitoring image is affected, and further the compression efficiency of the running monitoring image is affected.
In this embodiment, the image is decomposed by a quadtree representation, and the specific steps are as follows:
the image quad-tree representation method adopts a pyramid data structure to store an image, wherein the root of the quad-tree corresponds to the whole image, leaf nodes correspond to a single pixel or a square matrix composed of pixels with the same characteristics, each non-leaf node has 4 sub-nodes, and the image is decomposed into multiple stages by the quad-tree representation method, wherein the root is level 0, and the number of the sub-nodes is one more per minute.
The method comprises the steps of not considering edge pixel points in an operation monitoring image, dividing the operation monitoring image into 4 image blocks with equal size through a quadtree representation method, then judging whether the 4 image blocks meet a given homogeneity criterion, if the current image block meets the criterion, keeping the current image block unchanged, otherwise, continuously decomposing the current image block into 4 image blocks, judging whether the homogeneity criterion is met, and judging whether the homogeneity criterion is met until all the image blocks meet the given criterion, wherein the edge pixel points in the image blocks do not participate in the judgment process of the homogeneity criterion.
The homogeneity criterion of the present invention can be expressed as:in the formula>Respectively represent a maximum gray value and a minimum gray value in an image block->Representing the variance of the gray values of all the pixels in the image block,represents a second threshold value>Represents a third threshold; when the image block meets the homogeneity criterion, namely the difference value between the maximum gray value and the minimum gray value in the image block is not greater than a second threshold and the standard difference of the gray values of all pixel points in the image block is not greater than a third threshold, the image block is not continuously decomposed; when the image block does not meet the homogeneity criterion, namely the difference value between the maximum gray value and the minimum gray value in the image block is greater than a second threshold or the standard difference of the gray values of all pixel points in the image block is greater than a third threshold, continuously dividing the image block according to a quadtree representation method until all the image blocks meet the homogeneity criterion or the size of the image block is not greater than the minimum size, and the minimum size of the image block is determined to be ^ based on the standard size of the image block>。/>
In this embodiment, the second threshold valueA third threshold value->In other embodiments, the implementer may set the second threshold and the third threshold as desiredAnd (4) a threshold value.
2. And compressing and storing all the image blocks.
When the running monitoring image is decomposed through the quad-tree representation method, whether an image block meets the homogeneity criterion or not is judged, if yes, the image block is marked as 0, otherwise, the image block is marked as 1, the decomposition of the image block marked as 0 is finished, the image block marked as 1 is continuously decomposed, so that the decomposition code of the running monitoring image is obtained, and the gray level mean value sequence of the running monitoring image is obtained according to the decomposition code of the running monitoring image.
For example, as shown in fig. 4, the image is decomposed into 10010 by the quadtree representation method, and the gray-scale value average sequence is 24,59,45,99,74,76,89.
In summary, the present invention obtains all edge pixel points on the operation monitoring image; according to the sequence of the priority from small arrival, sequentially taking the edge pixel points as initial pixel points, obtaining all direction chain codes of all the edge pixel points, and coding and storing all the direction chain codes; performing iterative decomposition on the running monitoring image for multiple times by a quad-tree representation method without considering edge pixel points in the running monitoring image until all image blocks meet the homogeneity criterion; and storing all image blocks of the obtained running monitoring image. The invention respectively stores the edge pixel points and the image blocks of the running monitoring image, and removes the influence of the edge pixel points on the size of the image blocks of the running monitoring image while ensuring that the important edge information of the running monitoring image is not lost, thereby ensuring the compression efficiency of the running monitoring image.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; the modifications or substitutions do not make the essence of the corresponding technical solutions deviate from the technical solutions of the embodiments of the present application, and are included in the protection scope of the present application.
Claims (6)
1. The equipment operation data compression storage method for the intelligent factory MES system is characterized by comprising the following steps:
acquiring an operation monitoring image, and acquiring all edge pixel points on the operation monitoring image;
calculating the priority of each edge pixel point, sequentially taking the edge pixel points as initial pixel points according to the sequence of the priorities from small to small, and obtaining corresponding direction chain codes according to the initial pixel points; acquiring all direction chain codes of all edge pixel points, and coding and storing all direction chain codes;
and (2) performing repeated iterative decomposition on the running monitoring image by a quad-tree representation method without considering edge pixel points in the running monitoring image, and judging whether the image blocks meet the homogeneity criterion for all image blocks obtained by decomposition each time: for the image blocks meeting the homogeneity criterion, no longer performing decomposition, and for the image blocks not meeting the homogeneity criterion, performing decomposition on the image blocks by a quad-tree representation method until all the image blocks meet the homogeneity criterion; and storing all image blocks of the obtained running monitoring image.
2. The method as claimed in claim 1, wherein the step of calculating the priority of each edge pixel comprises:
for any edge pixel point in the running monitoring image, calculating the difference degree of any edge pixel point and the center pixel point for all edge pixel points in the neighborhood taking the edge pixel point as the center pixel point, and recording the number of the edge pixel points with the difference degree in the neighborhood of the center pixel point larger than a first threshold as the preference degree of the edge pixel point corresponding to the center pixel point.
3. The method as claimed in claim 2, wherein the step of calculating the difference between any edge pixel and the center pixel comprises:
for the second in the neighborhoodThe calculation formula of the difference degree between the edge pixel point and the central pixel point is as follows:
in the formula (I), the compound is shown in the specification,indicating the ^ th or greatest in the neighborhood>The gray value of each edge pixel point is greater or less>Represents the gray value of the central pixel point, and>indicating a th @ina neighborhood>The gradient direction of each marginal pixel point>Representing the direction of the gradient representing the central pixel, < > or >>Indicates the fifth->The difference between each edge pixel point and the center pixel point.
4. The method as claimed in claim 1, wherein the step of sequentially using the edge pixels as the starting pixels in the order of priority from small arrival, and obtaining the corresponding direction chain codes according to the starting pixels comprises:
recording a set formed by all edge pixel points as an edge set; taking the edge pixel point with the maximum priority as an initial pixel point, and acquiring the edge pixel point with the initial pixel point as the centerIn the neighborhood, the edge pixel point with the minimum difference value and the minimum direction value in all the edge pixel points with the gray value difference value of the initial pixel point less than 5 is used as a second pixel point in the direction chain code of the initial pixel point; judging 16 directions taking a second pixel point in the direction chain code of the initial pixel point as a center, and taking an edge pixel point in the direction of which the absolute value of the direction difference between the direction value and the direction value of the second pixel point is less than or equal to 2 as a third pixel point in the direction chain code of the initial pixel point; similarly, in 16 directions centered on the third pixel in the direction chain code of the initial pixel, the edge pixel in the direction having the direction difference between the direction value and the direction value of the third pixel being less than or equal to 2 is determined as the fourth pixel in the direction chain code of the initial pixelEach pixel point; by analogy, all edge pixel points forming the direction chain code are obtained; />
And removing all edge pixel points forming the direction chain codes from the edge set, and repeating the steps until the edge set is empty or no new direction chain codes are generated, so as to obtain all direction chain codes of all edge pixel points.
5. The method as claimed in claim 4, wherein the method for calculating the direction difference comprises:
for any two directions, a specific method for calculating the direction difference of the two directions is as follows: denote the direction value of the first of the two directions asRecording a direction value in a second direction as +>Calculating an initial direction difference->Wherein is present>Represents taking the absolute value, is selected>Represents a division, a remainder, or a combination thereof>Represents the initial direction difference of the two directions; determining whether the first direction is clockwise or counter-clockwise of the second relative to the second direction: if the first direction is clockwise of the second, the directional difference of the two directions is the initial directional difference, and if the first direction is counterclockwise of the second, the directional difference of the two directions is the negative of the initial directional difference.
6. The method as claimed in claim 1, wherein the step of determining whether the image blocks satisfy the homogeneity criterion comprises:
for any image block, acquiring the maximum gray value and the minimum gray value of all other pixel points except the edge pixel point in the image block, and the standard deviation of the gray values of all the pixel points; when the image block meets the homogeneity criterion, namely the difference value between the maximum gray value and the minimum gray value in the image block is not larger than a second threshold value and the standard difference of the gray values in the image block is not larger than a third threshold value, the image block is not decomposed continuously; and when the image block does not meet the homogeneity criterion, namely the difference value between the maximum gray value and the minimum gray value in the image block is greater than a second threshold or the standard difference of the gray values of all pixel points in the image block is greater than a third threshold, continuously dividing the image block according to a quadtree representation method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310148634.0A CN115866264B (en) | 2023-02-22 | 2023-02-22 | Equipment operation data compression storage method for intelligent factory MES system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310148634.0A CN115866264B (en) | 2023-02-22 | 2023-02-22 | Equipment operation data compression storage method for intelligent factory MES system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115866264A true CN115866264A (en) | 2023-03-28 |
CN115866264B CN115866264B (en) | 2023-06-02 |
Family
ID=85658686
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310148634.0A Active CN115866264B (en) | 2023-02-22 | 2023-02-22 | Equipment operation data compression storage method for intelligent factory MES system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115866264B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6160916A (en) * | 1997-08-11 | 2000-12-12 | Tsukuba Software Laboratory Co., Ltd. | Communication apparatus and method of color pictures and continually-changing tone pictures |
US20040008890A1 (en) * | 2002-07-10 | 2004-01-15 | Northrop Grumman Corporation | System and method for image analysis using a chaincode |
CN115115641A (en) * | 2022-08-30 | 2022-09-27 | 江苏布罗信息技术有限公司 | Pupil image segmentation method |
-
2023
- 2023-02-22 CN CN202310148634.0A patent/CN115866264B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6160916A (en) * | 1997-08-11 | 2000-12-12 | Tsukuba Software Laboratory Co., Ltd. | Communication apparatus and method of color pictures and continually-changing tone pictures |
US20040008890A1 (en) * | 2002-07-10 | 2004-01-15 | Northrop Grumman Corporation | System and method for image analysis using a chaincode |
CN115115641A (en) * | 2022-08-30 | 2022-09-27 | 江苏布罗信息技术有限公司 | Pupil image segmentation method |
Non-Patent Citations (5)
Title |
---|
吴俊等: "《基于同质区域自动选取的各向异性扩散超声图像去噪》" * |
吴凤和;: "基于计算机视觉测量技术的图像轮廓提取方法研究" * |
张煜东;吴乐南;: "基于分割的彩色图像编码" * |
方兴林;余萍;: "一种基于链码向量的图像匹配算法" * |
杜梅等: "《面向边缘特征保持的图像压缩》" * |
Also Published As
Publication number | Publication date |
---|---|
CN115866264B (en) | 2023-06-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
DE112018005149T5 (en) | POINT CLOUD COMPRESSION | |
WO2010050152A1 (en) | Pixel prediction value generation procedure automatic generation method, image encoding method, image decoding method, devices using these methods, programs for these methods, and recording medium on which these programs are recorded | |
WO2015096822A1 (en) | Image coding and decoding methods and devices | |
CN112352431B (en) | Data encoding method, data decoding method, data encoding equipment, data decoding equipment and storage medium | |
CN116489369B (en) | Driving digital video compression processing method | |
CN103188494A (en) | Apparatus and method for encoding depth image by skipping discrete cosine transform (DCT), and apparatus and method for decoding depth image by skipping DCT | |
CN116910285B (en) | Intelligent traffic data optimized storage method based on Internet of things | |
CN113284203B (en) | Point cloud compression and decompression method based on octree coding and voxel context | |
CN115618051B (en) | Internet-based smart campus monitoring video storage method | |
CN115882866A (en) | Data compression method based on data difference characteristic | |
CN109474824B (en) | Image compression method | |
CN107682699B (en) | A kind of nearly Lossless Image Compression method | |
CN115866264A (en) | Equipment operation data compression and storage method for intelligent factory MES system | |
CN115474044B (en) | Bayer domain image lossy compression method | |
CN112329923A (en) | Model compression method and device, electronic equipment and readable storage medium | |
CN111263163A (en) | Method for realizing depth video compression framework based on mobile phone platform | |
CN115329112B (en) | Efficient storage method for remote sensing images of unmanned aerial vehicle | |
CN115913248A (en) | Live broadcast software development data intelligent management system | |
WO2023093377A1 (en) | Encoding method, decoding method and electronic device | |
WO2020202313A1 (en) | Data compression apparatus and data compression method for neural network | |
CN113721859B (en) | Image repeated data deleting method based on artificial intelligence | |
WO2022064420A1 (en) | Palette mode video encoding utilizing hierarchical palette table generation | |
CN114257808A (en) | Image block division prediction method, image block division prediction system, image block division decoding method, image block division decoding device, and medium | |
Ginesta et al. | Vector quantization of contextual information for lossless image compression | |
Pan et al. | Complexity-scalable transform coding using variable complexity algorithms |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |