LU504265B1 - Method and System for Managing Monitoring Data of Environmental Security Engineering - Google Patents

Method and System for Managing Monitoring Data of Environmental Security Engineering Download PDF

Info

Publication number
LU504265B1
Authority
LU
Luxembourg
Prior art keywords
super
pixel
grayscale
edge point
prediction
Prior art date
Application number
LU504265A
Other languages
German (de)
Inventor
Hao Zhou
Original Assignee
Suzhou Maichuang Information Tech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Maichuang Information Tech Co Ltd filed Critical Suzhou Maichuang Information Tech Co Ltd
Application granted granted Critical
Publication of LU504265B1 publication Critical patent/LU504265B1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/11 Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 Incoming video signal characteristics or properties
    • H04N19/14 Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)

Abstract

The application discloses a method and system for managing monitoring data of environmental security engineering, and relates to the technical field of data processing. The method includes: acquiring a plurality of super-pixel blocks of each single channel image of each frame of image in surveillance video data; acquiring a fitting straight line of the grayscale differences on each edge point connecting line in the super-pixel blocks, and the fitting quality of the fitting straight line; obtaining a fitting entropy value from the fitting quality of the fitting straight lines of the super-pixel blocks in different directions; classifying the super-pixel blocks into a plurality of types by using the correlation of every two super-pixel blocks; and building a prediction model for each type to predict and code the grayscale value of each pixel. The prediction model constructed by the application is more accurate, the value of the obtained prediction error is smaller, and the space occupied after coding is smaller.

Description

METHOD AND SYSTEM FOR MANAGING MONITORING DATA OF ENVIRONMENTAL SECURITY ENGINEERING
TECHNICAL FIELD
The application relates to the technical field of data processing, in particular to a method and system for managing monitoring data of environmental security engineering.
BACKGROUND
With the development of the information age, digital monitoring is becoming more and more popular in environmental security engineering. At the same time, as the quality requirements for surveillance video rise, camera resolution is constantly improved, so the amount of video data collected by monitoring is greatly increased, and a large amount of storage space is required to store the video data. In order to reduce the storage space of surveillance video data, the video data needs to be compressed before being stored.
Lossy compression is generally adopted to compress and store the video data in the related art, which may greatly reduce the space occupied during storage. A commonly used lossy compression method is predictive coding compression, which classifies each frame of image into a plurality of rectangular macro blocks of random size and then performs linear prediction using the feature values of the pixels in each macro block. If the grayscale distribution in a rectangular macro block is smooth and the grayscale correlation between its pixels is strong, the linear prediction result is relatively good. If the grayscale distribution in the block is random and contains many abrupt transition points, the accuracy of the linear prediction result is low. The commonly used random classification of rectangular macro blocks for predictive coding compression does not take into account the characteristics of the grayscale distribution of the pixels in each rectangular macro block, so the prediction results of macro blocks with more random grayscale distributions are inaccurate, resulting in larger prediction error values, larger data occupation space after coding the prediction errors, and low compression efficiency.
SUMMARY
The application provides a method and system for managing monitoring data of environmental security engineering, so as to solve the problem that, under the existing random macro block classification, the linear prediction results of macro blocks with more random grayscale distributions are inaccurate, resulting in low compression efficiency.
In the application, the method and system for managing the monitoring data of the environmental security engineering adopt the following technical solutions.
A grayscale image and a plurality of single channel images of each frame of image of a security monitoring video are acquired. A plurality of super-pixel blocks of each single channel image are acquired.
Edge point connecting lines in different directions are obtained by connecting each edge point in each super-pixel block of each single channel image to edge points in different directions, and a fitting straight line of a grayscale difference between adjacent pixels on each edge point connecting line in different directions and fitting quality of the fitting straight line are acquired.
A fitting quality entropy value of each super-pixel block in different directions is obtained by using the fitting quality of the fitting straight line corresponding to each edge point connecting line in different directions in each super-pixel block. The minimum fitting quality entropy value is selected as a fitting entropy value of the super-pixel block, and a direction in which the minimum fitting quality entropy value is obtained is taken as a target direction of the corresponding super-pixel block.
All the super-pixel blocks are classified by using the fitting entropy value and a gray average of every two super-pixel blocks in each single channel image to obtain a plurality of types of super-pixel blocks.
A prediction model coefficient of each type of super-pixel blocks is obtained by using grayscale values and a distance between each edge point in each type of super-pixel blocks of each single channel image and the corresponding edge point in the target direction.
A prediction offset of each type of super-pixel blocks is obtained by using the prediction model coefficient of each type of super-pixel blocks in each single channel image and the grayscale values of the pixels in each type of super-pixel blocks. A prediction grayscale value of each pixel in each type of super-pixel blocks in each single channel image is obtained according to the prediction offset, the prediction model coefficient and the grayscale values of the edge points of each type of super-pixel blocks in each single channel image.
A prediction error of each pixel in the grayscale image is obtained by using the grayscale value of each pixel in the grayscale image and the prediction grayscale value of each pixel in each single channel image, and the prediction error of each pixel is coded and stored.
Further, a method for obtaining the prediction model coefficient of each type of super-pixel blocks includes the following operations.
A grayscale difference and a distance between each edge point of each super-pixel block in each type of super-pixel blocks of each single channel image and the corresponding edge point in the target direction are acquired.
A ratio of the obtained grayscale difference to the distance between each edge point and the corresponding edge point in the target direction is calculated to obtain a grayscale change rate between each edge point and the corresponding edge point in the target direction.
The prediction model coefficient of each super-pixel block is obtained by using a mean value of the grayscale change rates between each edge point in each super-pixel block and the corresponding edge point in the target direction.
A mean value of the prediction model coefficients of all the super-pixel blocks in each type of super-pixel blocks is taken as the prediction model coefficient of each type of super-pixel blocks.
Further, a method for obtaining the prediction offset of each type of super-pixel blocks includes the following operations.
Each edge point in each super-pixel block of each single channel image and the corresponding edge point in the target direction are recorded as an edge point pair, and each edge point pair corresponds to one edge point connecting line.
The edge point with the smaller grayscale value in each edge point pair is marked as a target point.
A prediction value of each pixel except for the edge point on each edge point connecting line is obtained according to a grayscale value of the target point on each edge point connecting line in each super-pixel block and the prediction model coefficient of the type of the super-pixel block.
The prediction offset of each pixel is obtained by using a difference between the prediction value and the grayscale value of each pixel in each super-pixel block.
A mode of the prediction offsets of all the pixels in each type of super-pixel blocks is taken as the prediction offset of each type of super-pixel blocks.
Further, a formula for acquiring the prediction offset of each pixel in each super-pixel block is: β_k = G_b + α_M × K − G_k, where β_k represents the prediction offset of the pixel k; G_b represents the grayscale value of the target point on the edge point connecting line where the pixel k is located; α_M represents the prediction model coefficient of the M-th type of super-pixel blocks to which the super-pixel block where the pixel k is located belongs; K represents that k is the K-th pixel after the target point on the edge point connecting line; and G_k represents the grayscale value of the pixel k.
Further, a method for obtaining the prediction grayscale value of each pixel includes the following operations.
A distance between each pixel and the target point on the located edge point connecting line is acquired.
A product corresponding to each pixel is obtained by multiplying the obtained distance by the prediction model coefficient of the super-pixel block type of the super-pixel block where the pixel is located.
The prediction grayscale value of each pixel is obtained by adding the obtained product of each pixel to the grayscale value of the target point on the located edge point connecting line, and then adding the prediction offset of the super-pixel block type.
Further, a method for obtaining the fitting quality of the fitting straight line includes the following operations.
A variance of a distance from the grayscale difference between each pair of adjacent pixels on the edge point connecting line to the fitting straight line is acquired, and the variance obtained by each edge point connecting line is taken as the fitting quality of the fitting straight line corresponding to the edge point connecting line.
Further, a method of obtaining the prediction error of each pixel in the grayscale image includes the following operations.
A target prediction grayscale value of each pixel in the grayscale image is obtained by using the prediction grayscale value of each pixel in the plurality of single channel images.
An absolute value of a difference between the target prediction grayscale value and the grayscale value of each pixel in the grayscale image is taken as the prediction error of each pixel in the grayscale image.
Further, a method of obtaining the plurality of types of super-pixel blocks includes the following operations.
A feature vector of the corresponding super-pixel block is formed by using the fitting entropy value and the gray average of each super-pixel block.
A cosine similarity of the feature vectors of every two super-pixel blocks is calculated as a correlation between the two corresponding super-pixel blocks.
A classification principle is that: the correlation between any two super-pixel blocks in the same type is in a preset correlation threshold interval.
The super-pixel blocks in each single channel image are classified by using the classification principle to obtain the plurality of types of super-pixel blocks.
A system for managing monitoring data of environmental security engineering includes a data acquisition unit, a data analysis unit and a data coding unit. The data acquisition unit is configured to acquire a grayscale image and a plurality of single channel images of each frame of image of a security monitoring video; and acquire a plurality of super-pixel blocks of each single channel image.
The data analysis unit is configured to obtain edge point connecting lines in different directions by connecting each edge point in each super-pixel block of each single channel image to edge points in different directions, and acquire a fitting straight line of a grayscale difference between adjacent pixels on each edge point connecting line in different directions and fitting quality of the fitting straight line; obtain a fitting quality entropy value of each super-pixel block in different directions by using the fitting quality of the fitting straight line corresponding to each edge point connecting line in different directions in each super-pixel block; select the minimum fitting quality entropy value as a fitting entropy value of the super-pixel block, and take a direction in which the minimum fitting quality entropy value is obtained as a target direction of the corresponding super-pixel block; and classify all the super-pixel blocks by using the fitting entropy value and a gray average of every two super-pixel blocks in each single channel image to obtain a plurality of types of super-pixel blocks.
The data coding unit is configured to obtain a prediction model coefficient of each type of super-pixel blocks by using grayscale values and a distance between each edge point in each type of super-pixel blocks of each single channel image and the corresponding edge point in the target direction; obtain a prediction offset of each type of super-pixel blocks by using the prediction model coefficient of each type of super-pixel blocks in each single channel image and the grayscale values of the pixels in each type of super-pixel blocks; obtain a prediction grayscale value of each pixel in each type of super-pixel blocks in each single channel image according to the prediction offset, the prediction model coefficient and the grayscale values of the edge points of each type of super-pixel blocks in each single channel image; and obtain a prediction error of each pixel in the grayscale image by using the grayscale value of each pixel in the grayscale image and the prediction grayscale value of each pixel in each single channel image, and code and store the prediction error of each pixel.
The application has the following beneficial effects: according to the method and system for managing the monitoring data of the environmental security engineering of the application, since the grayscale correlation between the pixels in each single channel of the image is closer, independent analysis is performed on each single channel of the image, so that the linear prediction result is more accurate. The super-pixel block in the grayscale image is acquired, the pixels with similar textures, similar grayscales and close distances in the grayscale image are classified into the same super-pixel block, and subsequently a linear prediction model is analyzed according to pixel features in the same super-pixel block.
Compared with the random macro block classification, the prediction model coefficient obtained by this method is more accurate. The fitting quality entropy value of the super-pixel block is obtained by acquiring the fitting straight line of the grayscale difference on the edge point connecting line in each direction in the super-pixel block, and the minimum fitting quality entropy value is selected from the plurality of directions as the fitting entropy value of the super-pixel block, that is, the direction with the most regular grayscale change is selected as a direction for performing linear prediction, which is more consistent with a grayscale change rule of the pixel, so that the subsequently obtained prediction grayscale value is closer to the grayscale value of the pixel in the single channel image, and then the prediction grayscale value in the single channel image is converted to the target prediction grayscale value in the grayscale image, so that the obtained prediction error is smaller, the space occupied after coding is smaller and the compression effect is better. In addition, considering the influence of the prediction offset on the prediction error, compared with calculating the prediction grayscale value by directly using the prediction model coefficient, the obtained prediction grayscale value is more accurate, and the smaller the subsequently obtained prediction error, the better the compression effect.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to more clearly illustrate the embodiments of the application or the technical solutions in the related art, the drawings used in the description of the embodiments or the related art will be briefly described below. It is apparent that the drawings described below are only some embodiments of the application. Other drawings may further be obtained by those of ordinary skill in the art according to these drawings without creative efforts.
Fig. 1 is a flowchart showing overall steps of an embodiment of a method for managing monitoring data of environmental security engineering according to the application.
DETAILED DESCRIPTION OF THE EMBODIMENTS
The technical solutions in the embodiments of the application will be clearly and completely described in conjunction with the drawings in the embodiments of the application.
It is apparent that the described embodiments are only a part of the embodiments of the application, and not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the application without creative efforts are within the scope of protection of the application.
An embodiment of a method for managing monitoring data of environmental security engineering of the application is aimed at images collected by environmental security monitoring, most of which show fire-fighting apparatuses and fixed buildings, so a large amount of spatially redundant pixel data exists in the images; a prediction model is therefore established to compress the data. As shown in Fig. 1, the method includes the following operations.
At S1, a grayscale image and a plurality of single channel images of each frame of image of a security monitoring video are acquired. A plurality of super-pixel blocks of each single channel image are acquired.
Specifically, each frame of image in the security monitoring video is acquired, where the obtained each frame of image is a Red Green Blue (RGB) image. Since the correlation between pixels in the same channel is closer, a prediction model for subsequently predicting values of the pixels in the same channel may be more accurate, and therefore R, G and B channels in each frame of image are individually processed to acquire three single channel images in each frame of image. The grayscale image of each frame of image is acquired for subsequently calculating a prediction error of the pixel.
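For illustration, the following is a minimal sketch of this acquisition step, assuming OpenCV as the image library (the patent does not prescribe an implementation; `acquire_channels` and its parameters are hypothetical names):

```python
import cv2

# Sketch of step S1, assuming OpenCV (an assumption; the patent names no
# library). For each frame of a surveillance video, yields the grayscale
# image and the three single-channel (R, G, B) images.
def acquire_channels(video_path):
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()                    # OpenCV frames are BGR
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        b, g, r = cv2.split(frame)                # three single-channel images
        yield gray, (r, g, b)
    cap.release()
```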
In the embodiment, one single channel image of one frame of image in the security monitoring video is analyzed and processed, and each single channel of the other frames of images is processed in the same way.
The image data compression method selected in the solution is predictive coding compression, that is, a prediction grayscale value of each pixel is obtained by establishing a linear prediction model, the prediction error is obtained from the prediction grayscale value and the grayscale value of the pixel itself, and the prediction error is coded and stored. In order to make the linear prediction result more accurate, pixels with similar features need to be classified into one type, and pixels with larger feature differences are classified into different parts before constructing the linear prediction model. Therefore, the method for preliminarily classifying the pixels in the single channel image is that super-pixel segmentation is performed on the single channel image to obtain a plurality of super-pixel blocks, the features of the super-pixel blocks are quantized, and the super-pixel blocks are classified into types according to the correlation between the quantized super-pixel blocks. Adjacent super-pixel blocks may or may not have a similarity relationship, and non-adjacent super-pixel blocks may also be correlated, in which case the same prediction model may be used for prediction. Therefore, the same prediction model is established for the same type of super-pixel blocks with high correlation, and predictive coding and storage are performed on each frame of image.
Specifically, super-pixel segmentation is performed on the single channel image to obtain the plurality of super-pixel blocks. The super-pixel segmentation refers to classifying a series of adjacent pixels with similar colors, brightness and texture features into the same super-pixel block region. The super-pixel segmentation takes into account edge information in the image, and may segment along the pixels with a larger grayscale gradient in the image during segmentation. In the super-pixel block, the difference between the grayscale values of the pixels is very small.
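As one possible realization of this segmentation step, the sketch below uses the SLIC algorithm from scikit-image; the algorithm choice and parameter values are assumptions, since the patent only requires that adjacent pixels with similar features fall into the same block:

```python
import numpy as np
from skimage.segmentation import slic

# Sketch of super-pixel segmentation of one single-channel image with SLIC
# (an assumed choice; any segmentation honoring grayscale edges would do).
def superpixel_labels(channel, n_segments=400):
    labels = slic(channel.astype(float), n_segments=n_segments,
                  compactness=0.1,     # small compactness suits grayscale input
                  channel_axis=None)   # mark the image as single-channel
    return labels                      # labels[y, x] = super-pixel block index
```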
At S2, edge point connecting lines in different directions are obtained by connecting each edge point in each super-pixel block of each single channel image to edge points in different directions, and a fitting straight line of a grayscale difference between the adjacent pixels on each edge point connecting line in different directions and fitting quality of the fitting straight line are acquired.
Quantitative analysis, including analysis of a brightness value and a grayscale change rule of the super-pixel block, is performed on the features of each super-pixel block in the single channel image. When the grayscale change rule in the super-pixel block is analyzed, the grayscale change rule of the edge point in a certain direction may be analyzed, that is, the grayscale difference between the adjacent pixels follows a linear rule, and during storage, the difference may be further compressed and stored to save more storage space.
Specifically, the edge point of each super-pixel block is acquired, and the grayscale change rule of the pixel on the connecting line of the edge point in 0°, 45°, 90° and 135° directions is analyzed. In the embodiment, taking the 0° direction as an example, the processing methods in the 45°, 90° and 135° directions are the same.
Taking the m-th super-pixel block as the analysis object, the i'-th edge point corresponding to the i-th edge point in the 0° direction in the super-pixel block is acquired (the corresponding edge point refers to the intersection point of the straight line passing through the i-th edge point in the 0° direction and the edge line of the super-pixel block, that is, the i-th edge point and the corresponding i'-th edge point are on the same straight line in the 0° direction), the connecting line of the i-th edge point and the corresponding i'-th edge point is marked as the i-th edge point connecting line, and the i'-th edge point and the i-th edge point are marked as an edge point pair.
The grayscale change rule of all the adjacent pixels between the i-th edge point and the corresponding i'-th edge point on the i-th edge point connecting line is acquired. Specifically, the grayscale difference between each pair of adjacent pixels on the i-th edge point connecting line is calculated, and linear fitting is performed on all the grayscale differences on the edge point connecting line to obtain the fitting straight line of the grayscale differences of the i-th edge point connecting line (linear fitting is the related art, which will not be elaborated here), and the linear equation of the fitting straight line is k_i·x − y + b_i = 0.
A distance from the grayscale difference of each adjacent pixel on the edge point connecting line to the fitting straight line corresponding to the edge point connecting line is calculated according to the following formula:
JL_a = |k_i·x_a − y_a + b_i| / √(k_i² + 1), where JL_a represents the distance from the a-th grayscale difference on the edge point connecting line to the fitting straight line corresponding to the edge point connecting line; k_i and b_i are the parameters of the linear equation of the fitting straight line of the grayscale differences of the i-th edge point connecting line; and (x_a, y_a) are the horizontal and vertical coordinates of the a-th grayscale difference on the i-th edge point connecting line, that is, x_a indexes the a-th grayscale difference on the i-th edge point connecting line, and y_a is the value of the a-th grayscale difference. The calculation formula for the distance from a point to a straight line is the existing formula, which will not be elaborated here.
The distance from the grayscale difference between the adjacent pixels on each edge point connecting line to the corresponding fitting straight line is acquired according to the method.
A variance of the distances from all the grayscale differences on the edge point connecting line to the fitting straight line is taken as the fitting quality of the fitting straight line. A formula for specifically calculating the fitting quality of the fitting straight line corresponding to the i-th edge point connecting line is:

ZL_i = (1/u) × Σ_{a=1}^{u} (JL_a − (1/u) × Σ_{a=1}^{u} JL_a)²

where ZL_i represents the fitting quality of the fitting straight line corresponding to the i-th edge point connecting line; u represents the number of pixels on the i-th edge point connecting line; and JL_a represents the distance from the a-th grayscale difference on the edge point connecting line to the fitting straight line corresponding to the edge point connecting line. (1/u) × Σ_{a=1}^{u} JL_a represents the mean value of the distances from all the grayscale differences on the edge point connecting line to the corresponding fitting straight line, and the whole expression represents the variance of those distances. The larger the variance, the more dispersed the distribution of the actual grayscale differences around the fitting straight line, and the worse the quality of the fitting straight line. The smaller the variance, the more consistent the actual grayscale differences with the fitting straight line, the better the quality of the fitting straight line, and the closer the grayscale change relationship on the edge point connecting line to a linear relationship.
It is to be noted that the linear change of the grayscale difference may be described by performing linear fitting on the grayscale difference between the adjacent pixels on the edge point connecting line, and when the grayscale difference on the edge point connecting line is closer to the linear change, the similar grayscale differences may be compressed again during storage, so that the compression space may be better saved. The fitting quality of the fitting straight line is calculated by using the variance of the distance from the grayscale difference to the fitting straight line, and the linear fitting quality of the grayscale difference may be evaluated. After linear fitting, the distance from the grayscale difference to the fitting straight line may reflect the correlation between the grayscale differences, that is, the grayscale differences may be integrated by using the same linear relationship.
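A minimal sketch of this computation for one edge point connecting line follows; `grays` (the consecutive pixel grayscale values along the line) is a hypothetical input, and extracting those pixels from the super-pixel block is assumed to happen elsewhere:

```python
import numpy as np

# Sketch of the fitting quality ZL_i of one edge point connecting line:
# fit a straight line to the adjacent-pixel grayscale differences and take
# the variance of the point-to-line distances.
def fitting_quality(grays):
    grays = np.asarray(grays, dtype=float)
    diffs = np.diff(grays)                           # grayscale differences y_a
    x = np.arange(1, diffs.size + 1, dtype=float)    # positions x_a
    k, b = np.polyfit(x, diffs, 1)                   # fitting line: k*x + b
    # distance from each point (x_a, y_a) to the line k*x - y + b = 0
    dist = np.abs(k * x - diffs + b) / np.sqrt(k * k + 1.0)
    return float(dist.var())                         # variance = fitting quality
```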
At S3, a fitting quality entropy value of each super-pixel block in different directions is obtained by using the fitting quality of the fitting straight line corresponding to each edge point connecting line in different directions in each super-pixel block. The minimum fitting quality entropy value is selected as a fitting entropy value of the super-pixel block, and a direction in which the minimum fitting quality entropy value is obtained is taken as a target direction of the corresponding super-pixel block.
Specifically, taking the m-th super-pixel block as an example, the fitting quality entropy value of the super-pixel block is calculated according to the following formula:

S_m = −Σ_{i=1}^{n} P_{ZL_i} × log₂ P_{ZL_i}

where S_m represents the fitting quality entropy value of the m-th super-pixel block; P_{ZL_i} represents the probability that the fitting straight line with the fitting quality ZL_i appears among all the fitting straight lines of the m-th super-pixel block; n represents the number of fitting straight lines in the m-th super-pixel block, namely the number of edge point connecting lines; and i indexes the fitting straight line corresponding to the i-th edge point connecting line. −Σ_{i=1}^{n} P_{ZL_i} × log₂ P_{ZL_i} is the calculation formula of information entropy in the related art, which here represents the confusion degree of the fitting quality of all the fitting straight lines in the super-pixel block. The larger the fitting quality entropy value, the more confused the changes in the fitting quality of the fitting straight lines in the m-th super-pixel block, that is, the more confused the changes in the grayscale values in the super-pixel block; the smaller the entropy value S_m, the closer the fitting quality of all the fitting straight lines in the super-pixel block, that is, the more stable the change rule of the grayscale values in the m-th super-pixel block.
So far, the fitting quality entropy value of the m-th super-pixel block in the 0° direction is obtained.
The fitting quality entropy values in 45°, 90°, and 135° directions are respectively acquired according to a method of obtaining the fitting quality entropy value in the 0° direction, the minimum value is selected from the fitting quality entropy values obtained in the four directions as the fitting entropy value of the super-pixel block, the direction in which the minimum fitting quality entropy value is obtained is taken as the target direction, and it is considered that the grayscale change rule in the target direction is the most stable and may be taken as the optimal prediction direction.
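The sketch below illustrates the entropy and the direction selection; representing the probabilities P_{ZL_i} via a histogram of the fitting qualities is an assumption, since the patent speaks only of the probability of each fitting quality value:

```python
import numpy as np

# Sketch of the fitting quality entropy S_m and of selecting the target
# direction as the direction of minimum entropy.
def fitting_entropy(zl_values, bins=16):
    counts, _ = np.histogram(zl_values, bins=bins)   # binning is an assumption
    p = counts[counts > 0] / len(zl_values)
    return float(-(p * np.log2(p)).sum())

def target_direction(qualities_by_direction):
    # qualities_by_direction: {0: [ZL_i...], 45: [...], 90: [...], 135: [...]}
    entropies = {d: fitting_entropy(zl) for d, zl in qualities_by_direction.items()}
    d_star = min(entropies, key=entropies.get)       # most regular direction
    return d_star, entropies[d_star]                 # target direction, fitting entropy
```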
At S4, all the super-pixel blocks are classified by using the fitting entropy value and a gray average of every two super-pixel blocks in each single channel image to obtain a plurality of types of super-pixel blocks.
If both the brightness and the confusion degree of the grayscale change rule of any two super-pixel blocks in the single channel image are close, it is considered that the correlation between the two super-pixel blocks is high; the brightness of a super-pixel block may be represented by the grayscale values of its pixels, and the confusion degree of the grayscale change rule may be represented by the fitting entropy value.
Specifically, a gray average of the pixels in each super-pixel block in each single channel image is acquired, a feature vector of the corresponding super-pixel block is formed by using the fitting entropy value and the gray average of each super-pixel block. A cosine similarity of the feature vectors of every two super-pixel blocks is calculated as a correlation between the two corresponding super-pixel blocks.
Taking the c-th super-pixel block and the d-th super-pixel block as an example, a formula for calculating the correlation between the two super-pixel blocks is:

LX_{(c,d)} = (LD_c × LD_d + S_c × S_d) / (√(LD_c² + S_c²) × √(LD_d² + S_d²))

where LX_{(c,d)} represents the correlation between the c-th super-pixel block and the d-th super-pixel block; LD_c represents the gray average of the pixels in the c-th super-pixel block; LD_d represents the gray average of the pixels in the d-th super-pixel block; S_c represents the fitting entropy value of the c-th super-pixel block, namely the confusion degree of the grayscale change rule on the edge point connecting lines in the c-th super-pixel block; and S_d represents the fitting entropy value of the d-th super-pixel block, namely the confusion degree of the grayscale change rule on the edge point connecting lines in the d-th super-pixel block. The more similar the c-th super-pixel block and the d-th super-pixel block, the closer their brightness and grayscale change rules are in value, that is, the higher the cosine similarity between the feature vector (LD_c, S_c) of the c-th super-pixel block and the feature vector (LD_d, S_d) of the d-th super-pixel block (the cosine similarity calculation formula is the existing formula, which will not be elaborated here), and the value of LX_{(c,d)} approaches 1.
So far, the correlation between every two super-pixel blocks is obtained, and the super-pixel blocks are classified according to the correlation between every two super-pixel blocks.
The reason for classifying rather than merging all the super-pixel blocks in the single channel image is that the grayscale change rule and brightness value features of the pixels in each super-pixel block are similar, and the distribution features of the pixels are similar. However, if the super-pixel blocks were merged, the distribution rule of the original pixels in the merged super-pixel block would change, which may enlarge the error of a prediction model originally constructed to fit the pixel distribution rule of each super-pixel block before merging, thereby leading to longer coding and a poor compression effect.
All the super-pixel blocks in the single channel image are classified according to the correlation between the super-pixel blocks. A correlation threshold interval may be set according to the actual situation, and the number of types of super-pixel blocks obtained from the correlation threshold interval should be reasonable, neither too many nor too few, so that not only are the calculation amount and the model storage space reduced, but the pixels of each super-pixel block may also be predicted according to the feature change rule within the super-pixel block, making the prediction result more accurate.
A classification principle is that: the correlation between any two super-pixel blocks in the same type is in a preset correlation threshold interval. According to the classification principle, the super-pixel blocks in each single channel image are classified to obtain a plurality of types of super-pixel blocks in each single channel image, and the correlation threshold interval between the super-pixel blocks is set to be [0.7,1]. The super-pixel blocks within this interval are considered to be closely correlated, at this time, there is no need to pay attention to the sizes, shapes and positions of the super-pixel blocks, but only to pay attention to the correlation between different super-pixel blocks in the security monitoring video image.
The same prediction model is used for predicting the same type of super-pixel blocks, and there is a prediction direction, namely, a target direction, belonging to the current super-pixel block in each super-pixel block. There are one or more super-pixel blocks in each type of super-pixel block.
The super-pixel blocks with high correlation are classified into the same type, that is, according to the grayscale values and the grayscale change rules of the super-pixel blocks, the super-pixel blocks with similar gray averages and similar grayscale change rules are classified into the same type. Compared with constructing the linear prediction model for each super-pixel block, the classification method may ensure the accuracy of a prediction model coefficient while reducing the calculation amount.
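A sketch of this classification step is given below. The feature vectors and the cosine similarity follow the formula above; the greedy grouping strategy is an assumption, since the patent states the classification principle but not a specific grouping algorithm:

```python
import numpy as np

# Sketch of S4: group super-pixel blocks so that every pair in a type has
# cosine similarity within the threshold interval [0.7, 1].
def classify_blocks(gray_means, fit_entropies, low=0.7):
    feats = np.stack([gray_means, fit_entropies], axis=1).astype(float)
    norms = np.linalg.norm(feats, axis=1)
    types = []                                  # each type: list of block indices
    for m in range(len(feats)):
        for group in types:
            sims = feats[group] @ feats[m] / (norms[group] * norms[m])
            if np.all(sims >= low):             # correlated with every member
                group.append(m)
                break
        else:
            types.append([m])                   # start a new type
    return types
```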
At S5, a prediction model coefficient of each type of super-pixel blocks is obtained by using grayscale values and a distance between each edge point in each type of super-pixel blocks of each single channel image and the corresponding edge point in the target direction.
Specifically, a grayscale difference and a distance between each edge point of each super-pixel block in each type of super-pixel blocks in each single channel image and the corresponding edge point in the target direction are acquired. A ratio of the obtained grayscale difference to the distance between each edge point and the corresponding edge point in the target direction is calculated to obtain a grayscale change rate between each edge point and the corresponding edge point in the target direction. The grayscale change rate between each edge point and the corresponding edge point in the target direction is calculated according to the following formula:

V_{m,i} = |I_{m,i} − I_{m,i'}| / L_{m,i}

where V_{m,i} represents the grayscale change rate between the i-th edge point in the m-th super-pixel block and the corresponding i'-th edge point in the target direction; I_{m,i} represents the grayscale value of the i-th edge point in the m-th super-pixel block; I_{m,i'} represents the grayscale value of the i'-th edge point corresponding to the i-th edge point in the m-th super-pixel block in the target direction; and L_{m,i} represents the distance between the i-th edge point and the corresponding i'-th edge point in the target direction. That is, the grayscale change rate on the edge point connecting line is obtained as the ratio of the absolute value of the overall grayscale change difference on the edge point connecting line to the length of the edge point connecting line. (The edge points, corresponding edge points and edge point connecting lines mentioned in S5-S6 all refer to those in the target direction of the super-pixel block.)
Since the pixels in the super-pixel block are pixels with relatively similar feature values, and the correlation between the super-pixel blocks in the same type is relatively high, the same linear prediction model coefficient is used for all the pixels in the super-pixel blocks in the same type.
The prediction model coefficient of each super-pixel block is obtained by using a mean value of the grayscale change rates between each edge point in each super-pixel block and the corresponding edge point in the target direction.
Specifically, taking the m-th super-pixel block as an example, the prediction model coefficient of each super-pixel block is calculated according to the following formula:
α_m = (1/n) × Σ_{i=1}^{n} V_{m,i}

where α_m represents the prediction model coefficient of the m-th super-pixel block, namely the coefficient of the linear prediction model between the edge points in the m-th super-pixel block and their corresponding edge points; V_{m,i} represents the grayscale change rate between the i-th edge point in the m-th super-pixel block and the corresponding i'-th edge point in the target direction; and n represents the number of edge point connecting lines in the m-th super-pixel block.
The prediction model coefficient of each type of super-pixel blocks is obtained by using a mean value of the prediction model coefficients of all the super-pixel blocks in each type of super-pixel blocks.
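The following short sketch restates S5; `pairs` (one `(gray_i, gray_i_prime, distance)` triple per edge point connecting line in the target direction) is a hypothetical input format:

```python
# Sketch of S5: the coefficient of one super-pixel block is the mean
# grayscale change rate V_{m,i} over its edge point pairs, and the
# coefficient of a type is the mean over its blocks.
def block_coefficient(pairs):
    rates = [abs(g1 - g2) / L for g1, g2, L in pairs]   # V_{m,i}
    return sum(rates) / len(rates)                      # alpha_m

def type_coefficient(block_coeffs):
    return sum(block_coeffs) / len(block_coeffs)        # alpha of the type
```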
At S6, a prediction offset of each type of super-pixel blocks is obtained by using the prediction model coefficient of each type of super-pixel blocks in each single channel image and the grayscale values of the pixels in each type of super-pixel blocks. A prediction grayscale value of each pixel in each type of super-pixel blocks in each single channel image is obtained according to the prediction offset, the prediction model coefficient and the grayscale values of the edge points of each type of super-pixel blocks in each single channel image.
Each edge point in each super-pixel block and the corresponding edge point in the target direction are recorded as an edge point pair, and each edge point pair corresponds to one edge point connecting line. The edge point with the smaller grayscale value in each edge point pair is marked as a target point. A prediction value of each pixel except for the edge point on each edge point connecting line is obtained according to a grayscale value of the target point on each edge point connecting line in each super-pixel block and the prediction model coefficient of the type of the super-pixel block. The prediction offset of each pixel is obtained by using a difference between the prediction value and the grayscale value of each pixel in each super-pixel block. A mode of the prediction offsets of all the pixels in each type of super-pixel blocks is taken as the prediction offset of each type of super-pixel blocks.
Specifically, when each pixel in the super-pixel block is predicted, different prediction values are obtained, that is, different prediction offsets also exist, and the prediction offset of each pixel in the super-pixel block is calculated according to the following formula:
β_k = G_b + α_M × K − G_k

where β_k represents the prediction offset of the pixel k; G_b represents the grayscale value of the target point on the edge point connecting line where the pixel k is located; α_M represents the prediction model coefficient of the M-th type of super-pixel blocks to which the super-pixel block where the pixel k is located belongs; K represents that the pixel k is the K-th pixel after the target point on the edge point connecting line (for example, for the fifth pixel after the target point, K is equal to 5); and G_k represents the grayscale value of the pixel k. Since α_M is obtained according to the grayscale change rule between the edge point pairs on the edge point connecting line, the grayscale value of each pixel on the edge point connecting line follows this change rule, and the prediction value G_b + α_M × K of the pixel k is obtained by adding, to the grayscale value of the target point on the edge point connecting line, the product α_M × K of the linear model coefficient and the distance from the pixel k to the target point. Since the target point is the one with the smaller grayscale value in the edge point pair, the change is added to the target point; if the edge point with the larger grayscale value were selected as the target point, the change would be subtracted from it. The prediction offset of the pixel is obtained as the difference between the prediction value and the grayscale value of the pixel k.
The mode of the prediction offsets of all the pixels in each type of super-pixel blocks is taken as the prediction offset of each type of super-pixel blocks, and compared with storing each prediction offset, the storage space is reduced by directly storing the prediction offset of each type.
A distance between each pixel and the target point on the located edge point connecting line is acquired. A product corresponding to each pixel is obtained by multiplying the obtained distance by the prediction model coefficient of the super-pixel block type of the super-pixel block where the pixel is located. The prediction grayscale value of each pixel is obtained by adding the obtained product of each pixel to the grayscale value of the target point on the located edge point connecting line, and then adding the prediction offset of the super-pixel block type. Specifically, the prediction grayscale value of each pixel is calculated according to the following formula:
Î_k = G_b + α_M × K + β_f

where Î_k represents the prediction grayscale value of the pixel k; G_b represents the grayscale value of the target point on the edge point connecting line where the pixel k is located; α_M represents the prediction model coefficient of the M-th type of super-pixel blocks to which the super-pixel block where the pixel k is located belongs; K represents that the pixel k is the K-th pixel after the target point on the edge point connecting line (for example, for the fifth pixel after the target point, K is equal to 5, that is, the distance between the pixel and the target point is 5); and β_f represents the prediction offset of the f-th type of super-pixel blocks to which the pixel k belongs. The prediction value of the pixel k is obtained by adding, to the grayscale value of the target point on the edge point connecting line, the product α_M × K of the linear model coefficient and the distance from the pixel k to the target point. Since the target point is the one with the smaller grayscale value in the edge point pair, the change is added to the target point; if the edge point with the larger grayscale value in the edge point pair were selected as the target point, the change would be subtracted from it. Considering the influence of the prediction offset on the prediction value and adding the prediction offset of the super-pixel block type makes the obtained prediction grayscale value more accurate than calculating it directly from the prediction model coefficient alone, and the smaller the subsequently obtained prediction error, the better the compression effect.
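A sketch of the offset and prediction computations for the pixels on one edge point connecting line follows; taking the mode over rounded offsets is an assumption, made so that a mode of near-continuous values is well defined:

```python
import numpy as np
from scipy import stats

# Sketch of S6. G_b: grayscale of the target point (smaller-valued edge
# point); alpha_M: coefficient of the block's type; grays: actual grayscale
# values of the K = 1, 2, ... pixels after the target point on the line.
def line_offsets(G_b, alpha_M, grays):
    K = np.arange(1, len(grays) + 1)
    return G_b + alpha_M * K - np.asarray(grays, dtype=float)  # beta_k per pixel

def type_offset(all_offsets):
    # per-type offset = mode of all pixel offsets in the type
    # (rounding before taking the mode is an assumption)
    return float(stats.mode(np.round(all_offsets), keepdims=False).mode)

def predict_pixel(G_b, alpha_M, K, beta_f):
    return G_b + alpha_M * K + beta_f          # predicted grayscale of pixel k
```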
At S7, a prediction error of each pixel in the grayscale image is obtained by using the grayscale value of each pixel in the grayscale image and the prediction grayscale value of each pixel in each single channel image, and the prediction error of each pixel is coded and stored.
Specifically, the target prediction grayscale value of each pixel in the grayscale image is obtained by using the prediction grayscale value of each pixel in the plurality of single channel images (converting the grayscale value of the pixel in the plurality of single channel images into the grayscale value of the pixel in the grayscale image is the related art, which will not be elaborated here). The difference is obtained by subtracting the grayscale value of the pixel in the grayscale image from the target prediction grayscale value of each pixel in the grayscale image, and the prediction error of each pixel in the grayscale image is obtained by calculating the absolute value of the difference.
The prediction error of each pixel in the grayscale image of each frame of image in the surveillance video is coded and stored, and run-length coding may be selected as the coding mode of the predictive coding.
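As an illustration of the storage step, a minimal run-length coder over the prediction error image is sketched below (row-by-row scan order is an assumption):

```python
import numpy as np

# Sketch of run-length coding of the per-pixel prediction errors.
def run_length_encode(errors):
    flat = np.asarray(errors).ravel()          # row-by-row scan (assumed order)
    runs = []                                  # list of (value, run length)
    start = 0
    for i in range(1, flat.size + 1):
        if i == flat.size or flat[i] != flat[start]:
            runs.append((int(flat[start]), i - start))
            start = i
    return runs
```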
A system for managing monitoring data of environmental security engineering includes a data acquisition unit, a data analysis unit and a data coding unit. Specifically, the data acquisition unit is configured to acquire a grayscale image and a plurality of single channel images of each frame of image of a security monitoring video; and acquire a plurality of super-pixel blocks of each single channel image.
The data analysis unit is configured to obtain edge point connecting lines in different directions by connecting each edge point in each super-pixel block of each single channel image to edge points in different directions, and acquire a fitting straight line of a grayscale difference between adjacent pixels on each edge point connecting line in different directions and fitting quality of the fitting straight line; obtain a fitting quality entropy value of each super-pixel block in different directions by using the fitting quality of the fitting straight line corresponding to each edge point connecting line in different directions in each super-pixel block; select the minimum fitting quality entropy value as a fitting entropy value of the super-pixel block, and take a direction in which the minimum fitting quality entropy value is obtained as a target direction of the corresponding super-pixel block; and classify all the super-pixel blocks by using the fitting entropy value and a gray average of every two super-pixel blocks in each single channel image to obtain a plurality of types of super-pixel blocks.
The data coding unit is configured to obtain a prediction model coefficient of each type of super-pixel blocks by using grayscale values and a distance between each edge point in each type of super-pixel blocks of each single channel image and the corresponding edge point in the target direction; obtain a prediction offset of each type of super-pixel blocks by using the prediction model coefficient of each type of super-pixel blocks in each single channel image and the grayscale values of the pixels in each type of super-pixel blocks; obtain a prediction grayscale value of each pixel in each type of super-pixel blocks in each single channel image according to the prediction offset, the prediction model coefficient and the grayscale values of the edge points of each type of super-pixel blocks in each single channel image; and obtain a prediction error of each pixel in the grayscale image by using the grayscale value of each pixel in the grayscale image and the prediction grayscale value of each pixel in each single channel image, and code and store the prediction error of each pixel.
In summary, the application provides the method and system for managing the monitoring data of the environmental security engineering. Since the grayscale correlation between the pixels in each single channel of the image is closer, independent analysis is performed on each single channel of the image, so that the linear prediction result is more accurate. The super-pixel block in the grayscale image is acquired, the pixels with similar textures, similar grayscales and close distances in the grayscale image are classified into the same super-pixel block, and subsequently the linear prediction model is analyzed according to the pixel features in the same super-pixel block. Compared with the random macro block classification, the prediction model coefficient obtained by this method is more accurate. The fitting quality entropy value of the super-pixel block is obtained by acquiring the fitting straight line of the grayscale difference on the edge point connecting line in each direction in the super-pixel block, and the minimum fitting quality entropy value is selected from the plurality of directions as the fitting entropy value of the super-pixel block, that is, the direction with the most regular grayscale change is selected as the direction for performing linear prediction, which is more consistent with the grayscale change rule of the pixel, so that the subsequently obtained prediction grayscale value is closer to the grayscale value of the pixel in the single channel image, and then the prediction grayscale value in the single channel image is converted to the target prediction grayscale value in the grayscale image, so that the obtained prediction error is smaller, the space occupied after coding is smaller and the compression effect is better. In addition, considering the influence of the prediction offset on the prediction error, compared with calculating the prediction grayscale value by directly using the prediction model coefficient, the obtained prediction grayscale value is more accurate, and the smaller the subsequently obtained prediction error, the better the compression effect.
The description above is only the preferred embodiment of the application and is not intended to limit the application. Any modifications, equivalent replacements, improvements and the like made within the spirit and principle of the application shall fall within the scope of protection of the application.

Claims (8)

CLAIMS
1. A method for managing monitoring data of environmental security engineering, comprising: acquiring a grayscale image and a plurality of single channel images of each frame of image of a security monitoring video; acquiring a plurality of super-pixel blocks of each single channel image; obtaining edge point connecting lines in different directions by connecting each edge point in each super-pixel block of each single channel image to edge points in different directions, and acquiring a fitting straight line of a grayscale difference between adjacent pixels on each edge point connecting line in different directions and fitting quality of the fitting straight line; obtaining a fitting quality entropy value of each super-pixel block in different directions by using the fitting quality of the fitting straight line corresponding to each edge point connecting line in different directions in each super-pixel block; selecting the minimum fitting quality entropy value as a fitting entropy value of the super-pixel block, and taking a direction in which the minimum fitting quality entropy value is obtained as a target direction of the corresponding super-pixel block; classifying all the super-pixel blocks by using the fitting entropy value and a gray average of every two super-pixel blocks in each single channel image to obtain a plurality of types of super-pixel blocks; acquiring a grayscale difference and a distance between each edge point of each super-pixel block in each type of super-pixel blocks of each single channel image and the corresponding edge point in the target direction; calculating a ratio of the obtained grayscale difference to the distance between each edge point and the corresponding edge point in the target direction to obtain a grayscale change rate between each edge point and the corresponding edge point in the target direction; obtaining a prediction model coefficient of each super-pixel block by using a mean value of the grayscale change rates between each edge point in each super-pixel block and the corresponding edge point in the target direction; taking a mean value of the prediction model coefficients of all the super-pixel blocks in each type of super-pixel blocks as the prediction model coefficient of each type of super-pixel blocks; obtaining a prediction offset of each type of super-pixel blocks by using the prediction model coefficient of each type of super-pixel blocks in each single channel image and the grayscale values of the pixels in each type of super-pixel blocks; obtaining a prediction grayscale value of each pixel in each type of super-pixel blocks in each single channel image according to the prediction offset, the prediction model coefficient and the grayscale values of the edge points of each type of super-pixel blocks in each single channel image; and obtaining a prediction error of each pixel in the grayscale image by using the grayscale value of each pixel in the grayscale image and the prediction grayscale value of each pixel in each single channel image, and coding and storing the prediction error of each pixel.
2. The method for managing the monitoring data of the environmental security engineering as claimed in claim 1, wherein a method for obtaining the prediction offset of each type of super-pixel blocks comprises: recording each edge point in each super-pixel block of each single channel image and the corresponding edge point in the target direction as an edge point pair, each edge point pair corresponding to one edge point connecting line; marking the edge point with the smaller grayscale value in each edge point pair as a target point; obtaining a prediction value of each pixel, except for the edge points, on each edge point connecting line according to the grayscale value of the target point on each edge point connecting line in each super-pixel block and the prediction model coefficient of the type of the super-pixel block; obtaining a prediction offset of each pixel by using the difference between the prediction value and the grayscale value of each pixel in each super-pixel block; and taking the mode of the prediction offsets of all the pixels in each type of super-pixel blocks as the prediction offset of that type of super-pixel blocks.
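A minimal sketch of the offset computation in claim 2 follows. It assumes the offset is taken as prediction minus actual and rounded to an integer, and that a connecting line is given as the sequence of grayscale values from one edge point to the other; neither detail is fixed by the claim.

```python
from collections import Counter

def offsets_on_connecting_line(line_grays, alpha_m):
    """Per-pixel offsets on one edge point connecting line: the target
    point is the end with the smaller grayscale, and each interior pixel
    k is predicted as g_target + alpha_m * k."""
    grays = list(line_grays)
    if grays[0] > grays[-1]:          # orient so the target point is first
        grays.reverse()
    g_target = grays[0]
    return [round(g_target + alpha_m * k) - grays[k]
            for k in range(1, len(grays) - 1)]   # interior pixels only

def type_prediction_offset(all_pixel_offsets):
    """Offset of a super-pixel block type: the mode of the per-pixel
    offsets collected over every block of that type."""
    return Counter(all_pixel_offsets).most_common(1)[0][0]

# e.g. offsets_on_connecting_line([100, 103, 105, 108], 2.5) -> [-1, 0]
```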
3. The method for managing the monitoring data of the environmental security engineering as claimed in claim 2, wherein a formula for acquiring the prediction offset of each pixel in each super-pixel block is: $P_k = G_b + \alpha_M \times K - G_k$; where $P_k$ represents the prediction offset of the pixel k; $G_b$ represents the grayscale value of the target point on the edge point connecting line where the pixel k is located; $\alpha_M$ represents the prediction model coefficient of the M-th type of super-pixel blocks to which the super-pixel block where the pixel k is located belongs; K represents that the pixel k is the K-th pixel after the target point on the edge point connecting line; and $G_k$ represents the grayscale value of the pixel k.
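A worked instance of the formula as reconstructed above, with illustrative numbers that are not taken from the patent:

```python
# P_k = G_b + alpha_M * K - G_k with made-up values:
G_b     = 100    # grayscale of the target point on the connecting line
alpha_M = 2.5    # prediction model coefficient of the block's type
K       = 4      # pixel k is the 4th pixel after the target point
G_k     = 108    # actual grayscale of pixel k

P_k = G_b + alpha_M * K - G_k
print(P_k)       # 2.0: the linear prediction overshoots pixel k by 2
```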
4. The method for managing the monitoring data of the environmental security engineering as claimed in claim 2, wherein a method for obtaining the prediction grayscale value of each pixel comprises: acquiring a distance between each pixel and the target point on the edge point connecting line where the pixel is located; obtaining a product corresponding to each pixel by multiplying the obtained distance by the prediction model coefficient of the type of the super-pixel block where the pixel is located; and obtaining the prediction grayscale value of each pixel by adding the obtained product of each pixel to the grayscale value of the target point on the edge point connecting line where the pixel is located, and then adding the prediction offset of the super-pixel block type.
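Claim 4 fixes the prediction formula exactly; the sketch below restates it as a function (the function name and example values are invented for illustration).

```python
def prediction_grayscale(g_target, distance, alpha_m, offset_m):
    """Predicted grayscale of a pixel = grayscale of the target point on
    its connecting line + distance * type coefficient + the type's
    prediction offset (claim 4)."""
    return g_target + distance * alpha_m + offset_m

# e.g. prediction_grayscale(100, 4, 2.5, -2) -> 108.0
```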
5. The method for managing the monitoring data of the environmental security engineering as claimed in claim 1, wherein a method for obtaining the fitting quality of the fitting straight line comprises: acquiring the variance of the distances from the grayscale differences between adjacent pixels on each edge point connecting line to the corresponding fitting straight line, and taking the variance obtained for each edge point connecting line as the fitting quality of the fitting straight line corresponding to that edge point connecting line.
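A minimal sketch of claim 5, assuming a least-squares line and vertical (rather than perpendicular) residual distances, which the claim leaves open:

```python
import numpy as np

def fitting_quality(adjacent_diffs):
    """Fit a straight line to the adjacent-pixel grayscale differences
    along one connecting line and return the variance of the residuals
    as the fitting quality (smaller = more regular)."""
    y = np.asarray(adjacent_diffs, dtype=float)
    x = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(x, y, 1)   # least-squares line
    residuals = y - (slope * x + intercept)
    return float(np.var(residuals))
```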
6. The method for managing the monitoring data of the environmental security engineering as claimed in claim 1, wherein a method of obtaining the prediction error of each pixel in the grayscale image comprises: obtaining a target prediction grayscale value of each pixel in the grayscale image by using the prediction grayscale value of each pixel in the plurality of single channel images; and taking an absolute value of a difference between the target prediction grayscale value and the grayscale value of each pixel in the grayscale image as the prediction error of each pixel in the grayscale image.
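A sketch of the error computation in claim 6. The claim only says the per-channel prediction grayscale values are used to obtain the target prediction for the grayscale image; the BT.601 luma weighting below is an assumption standing in for that conversion.

```python
import numpy as np

def prediction_error(gray, pred_b, pred_g, pred_r):
    """Merge per-channel prediction grayscale values into a target
    prediction for the grayscale image (assumed BT.601 weights), then
    take the absolute difference to the true grayscale image."""
    target = 0.114 * pred_b + 0.587 * pred_g + 0.299 * pred_r
    return np.abs(gray.astype(np.float64) - target)
```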
7. The method for managing the monitoring data of the environmental security engineering as claimed in claim 1, wherein a method of obtaining the plurality of types of super-pixel blocks comprises: forming a feature vector of the corresponding super-pixel block by using the fitting entropy value and the gray average of each super-pixel block; and calculating a cosine similarity of the feature vectors of every two super-pixel blocks as a correlation between the two corresponding super-pixel blocks; wherein a classification principle is that the correlation between any two super-pixel blocks of the same type lies in a preset correlation threshold interval; and the super-pixel blocks in each single channel image are classified according to the classification principle to obtain the plurality of types of super-pixel blocks.
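A sketch of a grouping that satisfies the claim 7 principle. The interval bounds and the greedy first-fit strategy are assumptions; the claim fixes only the principle, not the procedure.

```python
import numpy as np

def cosine_similarity(u, v):
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def classify_super_pixel_blocks(features, lo=0.95, hi=1.0):
    """features[i] = (fitting entropy, gray average) of block i. Blocks
    are grouped so every pair inside one type has a cosine similarity
    inside the preset interval [lo, hi]."""
    types = []                                  # each type: list of indices
    for i, f in enumerate(features):
        for t in types:
            if all(lo <= cosine_similarity(f, features[j]) <= hi for j in t):
                t.append(i)
                break
        else:                                   # no compatible type found
            types.append([i])
    return types
```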
8. A system for managing monitoring data of environmental security engineering, comprising a data acquisition unit, a data analysis unit and a data coding unit, wherein the data acquisition unit is configured to acquire a grayscale image and a plurality of single channel images of each frame of image of a security monitoring video, and acquire a plurality of super-pixel blocks of each single channel image; the data analysis unit is configured to obtain edge point connecting lines in different directions by connecting each edge point in each super-pixel block of each single channel image to edge points in different directions, and acquire a fitting straight line of the grayscale difference between adjacent pixels on each edge point connecting line in different directions and the fitting quality of the fitting straight line; obtain a fitting quality entropy value of each super-pixel block in different directions by using the fitting quality of the fitting straight line corresponding to each edge point connecting line in different directions in each super-pixel block; select the minimum fitting quality entropy value as a fitting entropy value of the super-pixel block, and take the direction in which the minimum fitting quality entropy value is obtained as a target direction of the corresponding super-pixel block; and classify all the super-pixel blocks by using the fitting entropy values and gray averages of every two super-pixel blocks in each single channel image to obtain a plurality of types of super-pixel blocks; and the data coding unit is configured to obtain a prediction model coefficient of each type of super-pixel blocks by using the grayscale values and the distance between each edge point in each type of super-pixel blocks of each single channel image and the corresponding edge point in the target direction; obtain a prediction offset of each type of super-pixel blocks by using the prediction model coefficient of each type of super-pixel blocks in each single channel image and the grayscale values of the pixels in each type of super-pixel blocks; obtain a prediction grayscale value of each pixel in each type of super-pixel blocks in each single channel image according to the prediction offset, the prediction model coefficient and the grayscale values of the edge points of each type of super-pixel blocks in each single channel image; and obtain a prediction error of each pixel in the grayscale image by using the grayscale value of each pixel in the grayscale image and the prediction grayscale value of each pixel in each single channel image, and code and store the prediction error of each pixel.
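Finally, a structural skeleton of the claim 8 system, mapping each recited unit onto a class; the class and method names are invented, and the bodies are deliberate placeholders for the steps sketched after the preceding claims.

```python
class DataAcquisitionUnit:
    """Acquisition: grayscale image, single-channel images and
    super-pixel blocks per channel."""
    def acquire(self, frame):
        raise NotImplementedError

class DataAnalysisUnit:
    """Analysis: connecting lines, fitting qualities, entropy values,
    target directions and block classification."""
    def analyse(self, channels, superpixels):
        raise NotImplementedError

class DataCodingUnit:
    """Coding: model coefficients, offsets, per-pixel prediction and
    coding/storage of the prediction errors."""
    def encode(self, gray, analysis):
        raise NotImplementedError
```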
LU504265A 2022-12-16 2023-04-03 Method and System for Managing Monitoring Data of Environmental Security Engineering LU504265B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211619607.9A CN115914634A (en) 2022-12-16 2022-12-16 Environmental security engineering monitoring data management method and system

Publications (1)

Publication Number Publication Date
LU504265B1 true LU504265B1 (en) 2023-07-31

Family

ID=86471077

Family Applications (1)

Application Number Title Priority Date Filing Date
LU504265A LU504265B1 (en) 2022-12-16 2023-04-03 Method and System for Managing Monitoring Data of Environmental Security Engineering

Country Status (3)

Country Link
CN (1) CN115914634A (en)
LU (1) LU504265B1 (en)
WO (1) WO2023134791A2 (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115914634A (en) * 2022-12-16 2023-04-04 苏州迈创信息技术有限公司 Environmental security engineering monitoring data management method and system
CN116233479B (en) * 2023-04-28 2023-07-21 中译文娱科技(青岛)有限公司 Live broadcast information content auditing system and method based on data processing
CN116320042B (en) * 2023-05-16 2023-08-04 陕西思极科技有限公司 Internet of things terminal monitoring control system for edge calculation
CN116665137B (en) * 2023-08-01 2023-10-10 聊城市彩烁农业科技有限公司 Livestock breeding wastewater treatment method based on machine vision
CN116703787B (en) * 2023-08-09 2023-10-31 中铁建工集团第二建设有限公司 Building construction safety risk early warning method and system
CN116778431B (en) * 2023-08-25 2023-11-10 青岛娄山河水务有限公司 Automatic sludge treatment monitoring method based on computer vision
CN116863253B (en) * 2023-09-05 2023-11-17 光谷技术有限公司 Operation and maintenance risk early warning method based on big data analysis
CN117079197B (en) * 2023-10-18 2024-03-05 山东诚祥建设集团股份有限公司 Intelligent building site management method and system
CN117115196B (en) * 2023-10-25 2024-02-06 东莞雕宝自动化设备有限公司 Visual detection method and system for cutter abrasion of cutting machine
CN117221609B (en) * 2023-11-07 2024-03-12 深圳微云通科技有限公司 Centralized monitoring check-in system for expressway toll service
CN117237339B (en) * 2023-11-10 2024-02-27 山东多沃基础工程有限公司 Ground screw punching point position selection method and system based on image processing
CN117351433B (en) * 2023-12-05 2024-02-23 山东质能新型材料有限公司 Computer vision-based glue-cured mortar plumpness monitoring system
CN117478891B (en) * 2023-12-28 2024-03-15 辽宁云也智能信息科技有限公司 Intelligent management system for building construction
CN117615088B (en) * 2024-01-22 2024-04-05 沈阳市锦拓电子工程有限公司 Efficient video data storage method for safety monitoring
CN117692011B (en) * 2024-01-29 2024-04-30 航天亮丽电气有限责任公司 Monitoring data early warning method for firefighting rescue environment monitoring system
CN117831135B (en) * 2024-03-04 2024-05-10 陕西一览科技有限公司 Human trace detection method based on image processing
CN117853933B (en) * 2024-03-07 2024-05-17 山东矿通智能装备有限公司 Coal bed identification method for open pit coal mining

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5482007B2 (en) * 2008-10-08 2014-04-23 株式会社ニコン Image processing method
WO2018176185A1 (en) * 2017-03-27 2018-10-04 中国科学院深圳先进技术研究院 Texture synthesis method, and device for same
US20230289979A1 (en) * 2020-11-13 2023-09-14 Zhejiang University A method for video moving object detection based on relative statistical characteristics of image pixels
CN115439474B (en) * 2022-11-07 2023-01-24 山东天意机械股份有限公司 Rapid positioning method for power equipment fault
CN115914634A (en) * 2022-12-16 2023-04-04 苏州迈创信息技术有限公司 Environmental security engineering monitoring data management method and system

Also Published As

Publication number Publication date
WO2023134791A3 (en) 2023-09-21
WO2023134791A2 (en) 2023-07-20
CN115914634A (en) 2023-04-04

Similar Documents

Publication Publication Date Title
LU504265B1 (en) Method and System for Managing Monitoring Data of Environmental Security Engineering
CN111147867B (en) Multifunctional video coding CU partition rapid decision-making method and storage medium
CN115297289B (en) Efficient storage method for monitoring video
CN110198444B (en) Video frame encoding method, video frame encoding apparatus, and device having storage function
WO2021032205A1 (en) System, method, codec device for intra frame and inter frame joint prediction
US8254702B2 (en) Image compression method and image processing apparatus
US20230082561A1 (en) Image encoding/decoding method and device for performing feature quantization/de-quantization, and recording medium for storing bitstream
CN110087083B (en) Method for selecting intra chroma prediction mode, image processing apparatus, and storage apparatus
CN105608461A (en) Method of identifying relevant areas in digital images, method of encoding digital images, and encoder system
CN112291562B (en) Fast CU partition and intra mode decision method for H.266/VVC
CN115589493B (en) Satellite transmission data compression method for ship video return
US20240105193A1 (en) Feature Data Encoding and Decoding Method and Apparatus
CN116996673A (en) Intelligent cloud management system based on passing in and out management and equipment running state
CN115618051A (en) Internet-based smart campus monitoring video storage method
US20230396780A1 (en) Illumination compensation method, encoder, and decoder
CN117499655A (en) Image encoding method, apparatus, device, storage medium, and program product
CN110493599B (en) Image recognition method and device
JP6613842B2 (en) Image encoding apparatus, image encoding method, and image encoding program
CN115278225A (en) Method and device for selecting chroma coding mode and computer equipment
CN113365080B (en) Encoding and decoding method, device and storage medium for string coding technology
CN112188212B (en) Intelligent transcoding method and device for high-definition monitoring video
CN116547968A (en) Prediction method, encoder, decoder, and computer storage medium
CN112509107A (en) Point cloud attribute recoloring method, device and encoder
CN114882390B (en) Video frame type decision method based on CTU histogram in VVC coding standard
KR101930389B1 (en) Video File Compression Method, Device and Computer Program Thereof

Legal Events

Date Code Title Description
FG Patent granted

Effective date: 20230731