CN116600132B - Coal mine video data self-adaptive compression method - Google Patents

Coal mine video data self-adaptive compression method

Info

Publication number
CN116600132B
CN116600132B (application CN202310882458.3A)
Authority
CN
China
Prior art keywords
data
video data
data sequence
category
representing
Prior art date
Legal status
Active
Application number
CN202310882458.3A
Other languages
Chinese (zh)
Other versions
CN116600132A (en)
Inventor
顾军
张永福
赵金升
程训龙
Current Assignee
Huayang Communication Technology Co., Ltd.
Original Assignee
Huayang Communication Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huayang Communication Technology Co., Ltd.
Priority to CN202310882458.3A
Publication of CN116600132A
Application granted
Publication of CN116600132B
Legal status: Active
Anticipated expiration

Classifications

    • H04N19/189 Adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N19/196 Adaptive coding specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G06V10/763 Clustering using non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V20/48 Matching video sequences
    • H04N19/129 Scanning of coding units, e.g. zig-zag scan of transform coefficients or flexible macroblock ordering [FMO]
    • H04N19/136 Incoming video signal characteristics or properties
    • H04N19/90 Coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/93 Run-length coding
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention relates to the technical field of video data compression and provides a coal mine video data self-adaptive compression method, which comprises the following steps: collecting coal mine video data; acquiring a plurality of categories of each video frame image in the coal mine video data, acquiring a video data sequence and a reference data sequence of each video frame image, and acquiring the stationarity feature of each data in each video data sequence; obtaining the matching category of each category in each video data sequence, and obtaining the variability feature of each data in each category to obtain the information loss acceptance degree of each data; acquiring an adjustment reference degree and initial reference points, and acquiring the adjustment acceptance degree of each initial reference point and the data sequence to be compressed of each video frame image; and compressing the data sequence to be compressed of each video frame image to complete the self-adaptive compression of the coal mine video data. The invention aims to adaptively compress different frames of coal mine video data so as to improve compression efficiency.

Description

Coal mine video data self-adaptive compression method
Technical Field
The invention relates to the technical field of video data compression, in particular to a coal mine video data self-adaptive compression method.
Background
The coal mine industry plays an important role worldwide; coal is one of the most widely used energy sources and is widely used in fields such as electric power, steel and the chemical industry. Because a coal mine is a high-risk workplace, where many dangerous factors such as geological disasters, gas explosions and the misuse of electrical equipment endanger the safety of coal mine workers, a coal mine video monitoring system is very necessary to prevent accidents as far as possible. However, the amount of data generated in coal mine video is very large; storing the coal mine video data requires a large amount of storage space and places a heavy burden on the storage space of the server and on the bandwidth of data transmission, so the coal mine video data needs to be compressed before transmission.
In the prior art, coal mine video data are compressed with the Lempel-Ziv-Welch (LZW) string-table compression algorithm. However, because LZW is a dictionary-based compression algorithm, the changes between different frames of coal mine video data greatly increase the size of the dictionary during compression, and the retrieval time in the dictionary increases as the dictionary grows, which greatly affects the compression time and compression rate of the LZW algorithm. Therefore, an adaptive LZW compression method suitable for coal mine video data is needed to reduce the compression time of the coal mine video data and improve the data compression rate, as illustrated by the sketch below.
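For context, the following minimal Python sketch (illustrative only, not the adaptive method proposed here) shows how a plain LZW encoder's dictionary grows as the input varies, which is the behaviour identified above as the bottleneck for multi-frame coal mine video.

```python
# Minimal LZW encoder sketch: the dictionary grows with every new phrase it meets,
# so varied multi-frame content inflates both the dictionary and the lookup time.
def lzw_encode(data: bytes):
    dictionary = {bytes([i]): i for i in range(256)}  # initial single-byte entries
    next_code = 256
    phrase = b""
    codes = []
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in dictionary:
            phrase = candidate                       # keep extending the current phrase
        else:
            codes.append(dictionary[phrase])
            dictionary[candidate] = next_code        # dictionary grows on every unseen phrase
            next_code += 1
            phrase = bytes([byte])
    if phrase:
        codes.append(dictionary[phrase])
    return codes, len(dictionary)

codes, dict_size = lzw_encode(b"abababababcdcdcdcd")
print(len(codes), dict_size)  # fewer codes than input bytes; dictionary has grown past 256
```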
Disclosure of Invention
The invention provides a coal mine video data self-adaptive compression method, which aims to solve the problem that the existing LZW compression algorithm must grow its dictionary for different frames of video data, resulting in low compression efficiency. The adopted technical scheme is as follows:
the embodiment of the invention provides a coal mine video data self-adaptive compression method, which comprises the following steps:
collecting video data of a coal mine;
clustering pixel points in each video frame image in the coal mine video data according to gray values to obtain a plurality of categories of each video frame image, obtaining a video data sequence and a reference data sequence of each video frame image, obtaining a plurality of categories of each video data sequence according to the categories of each video frame image, and obtaining the stationarity feature of each data in each video data sequence according to the video data sequence, the reference data sequence and the category to which each data belongs;
acquiring the category similarity between each category in each video data sequence and each category in the adjacent previous frame video data sequence according to the data in each category of each video data sequence and the data in each category of the adjacent previous frame video data sequence, acquiring the matching category of each category in each video data sequence according to the category similarity, acquiring the distribution interval mean of each category in each video data sequence, acquiring the variability feature of each category according to the distribution interval mean and the category similarity between the category and its matching category, assigning the variability feature of each category to the corresponding data of that category to obtain the variability feature of each data in each video data sequence, and obtaining the information loss acceptance degree of each data in each video data sequence according to the stationarity feature and the variability feature;
acquiring a plurality of segments of each category in each video data sequence according to the categories, recording the data of each segment excluding its initial data and termination data as the internal data of the segment, taking the initial data and termination data of each segment as initial reference points of the corresponding category, acquiring the adjustment reference degree of each internal data according to the data value and the information loss acceptance degree, acquiring the initial reference points of each category according to the adjustment reference degree, acquiring a plurality of sub-segments of each category according to the initial reference points, acquiring the adjustment acceptance degree of each initial reference point according to the initial reference points and the sub-segments, and acquiring the data sequence to be compressed of each video data sequence according to the adjustment acceptance degree to obtain the data sequence to be compressed of each video frame image;
and compressing the data sequence to be compressed of each video frame image to finish the compression of the coal mine video data.
Optionally, the method for acquiring the video data sequence and the reference data sequence of each video frame image includes the following specific steps:
scanning each pixel point of each video frame image to obtain a data sequence corresponding to each video frame image, and recording the data sequence as a video data sequence of each video frame image;
And acquiring a reference video image of each video frame image, and scanning each pixel point of the reference video image to obtain a reference data sequence of each video frame image.
Optionally, the method for acquiring the stationarity characteristic of each data in each video data sequence includes the following specific steps:
$$G_i=\exp\left(-\frac{1}{n_i}\sum_{j=1}^{n_i}\left|x_{i,j}-y_{i,j}\right|\right)$$

wherein $G_i$ represents the stationarity feature of the $i$-th data in the current video data sequence, $n_i$ represents the data amount of the local range of the $i$-th data, $x_{i,j}$ represents the data value of the $j$-th data in the local range of the $i$-th data, $y_{i,j}$ represents the data value at the same position in the reference data sequence of the current video data sequence as that $j$-th data, $\exp(\cdot)$ represents an exponential function with the natural constant as its base, and $|\cdot|$ represents the absolute value;
the local range of the $i$-th data consists of a preset number of same-category data taken forward and a preset number taken backward from the $i$-th data.
Optionally, the method for obtaining the category similarity between each category in each video data sequence and each category in the video data sequence of the adjacent previous frame includes the following specific steps:
recording the video data sequence of the adjacent previous frame of the current video data sequence as the previous video data sequence; the category similarity $S_{a,b}$ between category $a$ in the current video data sequence and category $b$ in the previous video data sequence is calculated as:

$$S_{a,b}=\frac{\left(2\mu_a\mu_b+c_1\right)\left(2\sigma_{ab}+c_2\right)}{\left(\mu_a^2+\mu_b^2+c_1\right)\left(\sigma_a^2+\sigma_b^2+c_2\right)}$$

wherein $\mu_a$ represents the data value mean of category $a$, $\mu_b$ represents the data value mean of category $b$, $\sigma_{ab}$ represents the covariance of the data values of category $a$ and category $b$, $\sigma_a^2$ represents the data value variance of category $a$, $\sigma_b^2$ represents the data value variance of category $b$, and $c_1$ and $c_2$ are calculation constants.
Optionally, the method for acquiring the variability feature of each category includes the following specific steps:
$$B_a = w_1 D_a + w_2\left(1-S_{a,a'}\right)$$

wherein $B_a$ represents the variability feature of category $a$ in the current video data sequence, $D_a$ represents the distribution difference degree between category $a$ and its matching category $a'$ in the previous video data sequence, $S_{a,a'}$ represents the category similarity between category $a$ and its matching category $a'$, and $w_1$ and $w_2$ represent the reference weights;
the distribution difference degree is obtained from the distribution interval means of the two categories by comparing the smaller value with the larger value, so that the further apart the two means are, the larger the distribution difference degree.
Optionally, the method for obtaining the information loss acceptance degree of each data in each video data sequence includes the following specific steps:
$$R_i = G_i\left(1 - B_i\right)$$

wherein $R_i$ represents the information loss acceptance degree of the $i$-th data in the current video data sequence, $G_i$ represents the stationarity feature of the $i$-th data, and $B_i$ represents the variability feature of the $i$-th data.
Optionally, the method for obtaining the adjustment reference degree of each internal data according to the data value and the information loss acceptance degree includes the following specific steps:
$$F_{m,k}=\left(\left|x_{m,k}-x_{m,k-1}\right|+\left|x_{m,k}-x_{m,k+1}\right|\right)\left(1-R_{m,k}\right)$$

wherein $F_{m,k}$ represents the adjustment reference degree of the $k$-th internal data of the $m$-th segment in the current video data sequence, $R_{m,k}$ represents the information loss acceptance degree of the $k$-th internal data of the $m$-th segment, $x_{m,k}$, $x_{m,k-1}$ and $x_{m,k+1}$ represent the data values of the $k$-th, $(k-1)$-th and $(k+1)$-th internal data of the $m$-th segment, and $|\cdot|$ represents the absolute value.
Optionally, the method for obtaining the plurality of subsections of each category according to the initial datum point includes the following specific steps:
for category $a$ of the current video data sequence, acquiring all segments of category $a$, and obtaining the interval value between every two adjacent initial reference points in each segment, wherein the interval value is obtained from the positions of the initial reference points in the current video data sequence; taking the minimum of all interval values of category $a$ as the sub-range of category $a$;
dividing the data between every two adjacent initial reference points in each segment of category $a$ according to the sub-range to obtain a plurality of sub-segments of category $a$ in the current video data sequence.
Optionally, the method for obtaining the adjustment acceptance degree of each initial reference point according to the initial reference point and the subsections includes the following specific steps:
$$Y_{a,v}=\frac{C_{a,v}}{N_a}\sum_{u=1}^{L_a} r_{a,v,u}\,\exp\left(-\Delta_{a,v,u}\right)$$

wherein $Y_{a,v}$ represents the adjustment acceptance degree of the $v$-th initial reference point of category $a$ in the current video data sequence, $N_a$ represents the number of sub-segments of category $a$, $C_{a,v}$ represents the number of times the first sub-segment after the $v$-th initial reference point occurs in all sub-segments of category $a$ after all of its data values are adjusted to the data value of the $v$-th initial reference point, $L_a$ represents the sub-range of category $a$, $\Delta_{a,v,u}$ represents the absolute value of the difference between the data values before and after adjustment of the $u$-th data in the first sub-segment after the $v$-th initial reference point, and $r_{a,v,u}$ represents the normalized value of the information loss acceptance degree of that $u$-th data.
Optionally, the method for obtaining the data sequence to be compressed of each video data sequence according to the adjustment acceptance degree includes the following specific steps:
for category $a$ of the current video data sequence, acquiring all segments of category $a$; taking any two adjacent initial reference points as a target point pair, taking the section of data between the target point pair as target section data, obtaining the maximum of the adjustment acceptance degrees of the two initial reference points in the target point pair, taking the initial reference point corresponding to the maximum as the reference data of the target section data, and adjusting the data value of each data of the target section data to the data value of the reference data;
adjusting the data in all segments of category $a$ of the current video data sequence according to the adjustment acceptance degrees of the initial reference points; adjusting all data in the current video data sequence, and recording the obtained result as the data sequence to be compressed of the current video data sequence; acquiring the data sequence to be compressed of each video data sequence.
The beneficial effects of the invention are as follows: the method obtains a video data sequence by scanning the coal mine monitoring video, and adjusts the data in the video data sequence according to the information loss acceptance degree of each data in each video data sequence and the idea of run-length coding; in the adjustment process, the adjustment reference degree and the adjustment acceptance degree of the data in the video data sequence determine which reference point an adjustment should be based on to achieve the maximum compression rate, thereby obtaining the final data sequence to be compressed, which is then compressed with the self-adaptive LZW algorithm. This overcomes the defects of the traditional LZW algorithm, in which the content changes of continuous frames of video data in the coal mine monitoring video enlarge the dictionary and increase the retrieval time in the dictionary, greatly affecting the compression time and compression rate. The method greatly reduces the capacity of the dictionary of the LZW algorithm, increases the compression rate and compression efficiency, and ensures the degree to which important information of the coal mine monitoring video is retained.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are only some embodiments of the invention, and that other drawings can be obtained according to these drawings without inventive faculty for a person skilled in the art.
Fig. 1 is a schematic flow chart of a method for adaptively compressing video data of a coal mine according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, a flowchart of a method for adaptively compressing video data of a coal mine according to an embodiment of the invention is shown, the method includes the following steps:
And S001, collecting coal mine video data.
The purpose of this embodiment is to adaptively compress coal mine video data, so the coal mine video data need to be acquired first. In this embodiment, the coal mine video data are collected by the monitoring cameras in all areas of the coal mine, where the monitoring cameras are high-resolution industrial CCD cameras with strong anti-interference capability; this embodiment does not specify the camera model, and the implementer can determine it according to the actual situation. After the coal mine video data are collected, each monitoring camera transmits the video data to a server in real time for storage; a large amount of historically collected coal mine video data is stored in the server, the real-time coal mine video data are compressed in the server, and the compressed video data are then transmitted to the coal mine video monitoring system terminal. In this embodiment, compression analysis is performed on the coal mine video data collected by any one monitoring camera, i.e., the coal mine video data are compressed on the basis of the coal mine video data of the same monitoring camera.
Thus, the coal mine video data is obtained.
Step S002, clustering pixel points in each video frame image in the coal mine video data according to gray values to obtain a plurality of categories of each video frame image, obtaining a plurality of video data sequences according to each video frame image, obtaining a reference data sequence of each video data sequence, and obtaining the stationarity characteristic of each data in each video data sequence according to the video data sequence, the reference data sequence and the category to which each data belongs.
It should be noted that, in the compression of coal mine video data, due to the characteristics of monitoring video data, the content in the video frame images of continuous frames changes frequently, such as workers performing construction and equipment moving, while fixed background content also exists, such as the wall background apart from personnel and equipment, transportation pipelines, and the like. As a result, the dictionary obtained by the LZW algorithm becomes large, the search time during compression becomes long because of the large dictionary, and the compression time increases greatly. Considering the stationarity of the background of the monitoring video, the content information in the image areas where the background exists is relatively regular. Run-length coding is a data processing method suited to content with high repetition, so, according to the characteristics of the coal mine monitoring video, the pixel point information in the video image is processed in combination with the idea of run-length coding, thereby greatly reducing the capacity of the dictionary and increasing the compression rate.
It should be further noted that the background in the coal mine monitoring video has the characteristic of stationarity, and relatively moving content exists across the video frame images of continuous frames. Because the background environment in the coal mine monitoring video is mostly a dark area, many monitored targets are in darkness and much detail information is obscured. If gray values were simply set equal according to the similarity of adjacent pixel points to achieve the effect of run-length coding, more detail information would be lost, compression based on that result would lose a great amount of information, and the establishment of the subsequent dictionary would also be affected. Therefore, the fixed characteristics of the background in each video frame image are considered first, and then the information loss acceptance degree of each data is obtained in combination with the variable characteristics of the motion content of continuous frames of video data, which provides a parameter basis for the construction and updating of the dictionary.
Specifically, taking any video frame image as an example, DBSCAN clustering is performed on each pixel point in the video frame image according to gray values to obtain a plurality of categories; in this embodiment the radius in the DBSCAN clustering is set to 5 and the minimum number of points is set to 8, and the implementer can determine these according to the actual situation. The pixel points in each video frame image are clustered according to this method to obtain a plurality of categories, each category containing a plurality of pixel points of the corresponding video frame image.
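A minimal sketch of this clustering step is given below, assuming the DBSCAN features are the pixel gray values and using the stated radius of 5 and minimum of 8 points; the random frame and the scikit-learn call are illustrative choices, not mandated by the embodiment.

```python
# Per-frame clustering sketch: every pixel gets a category label from DBSCAN
# applied to its gray value (eps=5, min_samples=8 as in this embodiment).
import numpy as np
from sklearn.cluster import DBSCAN

frame = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # placeholder gray image
features = frame.reshape(-1, 1).astype(float)                     # one feature per pixel: gray value
labels = DBSCAN(eps=5, min_samples=8).fit_predict(features)       # category per pixel, -1 = noise
categories = labels.reshape(frame.shape)
print(np.unique(labels))                                          # the categories found in this frame
```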
Further, each pixel point of each video frame image is scanned by raster scanning to obtain a data sequence corresponding to each video frame image, which is recorded as the video data sequence of the video frame image. It should be noted that each video data sequence is obtained by scanning one video frame image, i.e., video data sequences and video frame images correspond one to one; raster scanning is prior art and is not described in detail in this embodiment, and the implementer can select another scanning mode for conversion of the data sequence according to the actual situation. It should also be noted that the number of elements in a video data sequence equals the number of pixel points in the video frame image. After the video data sequences are obtained by scanning, the data in a video data sequence corresponding to pixel points that belong to the same category in the video frame image also belong to the same category, and each video data sequence is thus divided into a plurality of categories.
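The scanning step can be sketched as follows, assuming a plain row-by-row raster order; the helper name and array shapes are illustrative.

```python
# Raster-scan sketch: the frame's gray values become the video data sequence and
# each sequence element inherits the category of its pixel.
import numpy as np

def raster_scan(frame: np.ndarray, categories: np.ndarray):
    video_data_sequence = frame.reshape(-1)        # row-by-row scan order
    sequence_categories = categories.reshape(-1)   # same order, so data keep their category
    return video_data_sequence, sequence_categories

frame = np.arange(12, dtype=np.uint8).reshape(3, 4)
categories = np.zeros((3, 4), dtype=int)
seq, seq_cat = raster_scan(frame, categories)
print(seq.shape)  # one element per pixel
```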
Further, for any video frame image, a reference video image of the video frame image is obtained from the server manually, where the reference video image is an image collected by the same monitoring camera that contains only the background portion, with no changing portions such as coal mine workers or mobile equipment; the reference video image is obtained by manual screening. A reference video image of each video frame image is acquired according to this method. It should be noted that, because it contains no changing portion, the same reference video image can serve as the reference video image of a plurality of video frame images. A data sequence is obtained for each reference video image by raster scanning and recorded as a reference data sequence, thereby obtaining the reference data sequence of each video data sequence.
Further, the stationarity feature of each data in each video data sequence is obtained according to the video data sequence, the reference data sequence and the category to which each data belongs. The video data sequence of the current video frame image is recorded as the current video data sequence, the current video frame image being the most recently collected video frame image. The stationarity feature $G_i$ of the $i$-th data in the current video data sequence is calculated as:

$$G_i=\exp\left(-\frac{1}{n_i}\sum_{j=1}^{n_i}\left|x_{i,j}-y_{i,j}\right|\right)$$

wherein $n_i$ represents the data amount of the local range of the $i$-th data; the local range is the preset number of same-category data taken forward and the preset number taken backward from the $i$-th data, the preset number being set to 5 in this embodiment, so the data amount of the local range is 10; it should be noted that the local range only contains same-category data of the $i$-th data taken forward and backward, and does not contain data of other categories; $x_{i,j}$ represents the data value of the $j$-th data in the local range of the $i$-th data, $y_{i,j}$ represents the data value at the same position in the reference data sequence of the current video data sequence as that $j$-th data, $\exp(\cdot)$ represents an exponential function with the natural constant as its base, and $|\cdot|$ represents the absolute value. The smaller the difference between the data values of the local-range data of the $i$-th data in the current video data sequence and the data values at the same positions in the reference data sequence, the smaller the variation of the adjacent same-category data of the $i$-th data in the video frame image, the greater the likelihood that it is background-portion data, and the greater the stationarity feature. This embodiment uses the exponential function to express the inverse proportional relationship and normalize the result; the implementer may select other inverse proportional and normalization functions according to the actual situation. The stationarity feature of each data in each video data sequence is obtained as described above.
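A sketch of this computation for a single data point is given below, assuming the reconstructed exponential form above and a small preset number; the arrays and index are illustrative.

```python
# Stationarity-feature sketch: exponential of the negative mean absolute difference
# between the local-range values and the values at the same positions in the
# reference sequence (same-category neighbours only).
import numpy as np

def stationarity(seq, ref_seq, seq_cat, i, preset=5):
    same = [j for j in range(len(seq)) if seq_cat[j] == seq_cat[i] and j != i]
    before = [j for j in same if j < i][-preset:]   # up to `preset` same-category data backward
    after = [j for j in same if j > i][:preset]     # up to `preset` same-category data forward
    local = before + after
    diffs = [abs(float(seq[j]) - float(ref_seq[j])) for j in local]
    return float(np.exp(-np.mean(diffs))) if diffs else 1.0

seq = np.array([10, 12, 11, 50, 52, 13, 12], dtype=float)
ref = np.array([10, 11, 11, 40, 41, 13, 12], dtype=float)
cat = np.array([0, 0, 0, 1, 1, 0, 0])
print(stationarity(seq, ref, cat, i=2, preset=2))   # close to 1: behaves like background
```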
Thus, the pixel points of the video frame images are clustered according to gray values to obtain a plurality of categories, the video data sequence corresponding to each video frame image is obtained, and the stationarity feature of each data in each video data sequence is obtained by combining the reference data sequence; the stationarity feature represents the possibility that the pixel point corresponding to each data is part of the background.
Step S003, obtaining category similarity between each video data sequence and different categories in the video data sequence of the adjacent previous frame, obtaining a matching category of each category in each video data sequence according to the category similarity, obtaining a variability feature of each data in each category according to the distribution and the matching category of the data in the category, and obtaining the information loss acceptance degree of each data in each video data sequence according to the stationarity feature and the variability feature.
It should be noted that, after the stationarity feature of each data in each video data sequence is obtained, the variability of each data also needs to be quantified from the video frame images of continuous frames. Since the video frame images are scanned and converted into video data sequences and the categories of two adjacent frames of video data sequences differ, the matching category in the previous frame video data sequence of each category in each video data sequence must first be obtained through the similarity between categories; then, combining the distribution of the data of each category within the video data sequence, the change of the data of the same category between two adjacent frames of video data sequences is reflected and quantified to obtain the variability feature, and the information loss acceptance degree of each data is obtained from the stationarity feature and the variability feature.
Specifically, taking the current video data sequence as an example, the video data sequence of the adjacent previous frame of the current video data sequence is recorded as the previous video data sequence. Taking category $a$ in the current video data sequence and category $b$ in the previous video data sequence as an example, the category similarity $S_{a,b}$ between category $a$ and category $b$ is calculated as:

$$S_{a,b}=\frac{\left(2\mu_a\mu_b+c_1\right)\left(2\sigma_{ab}+c_2\right)}{\left(\mu_a^2+\mu_b^2+c_1\right)\left(\sigma_a^2+\sigma_b^2+c_2\right)}$$

wherein $\mu_a$ represents the data value mean of category $a$, $\mu_b$ represents the data value mean of category $b$, $\sigma_{ab}$ represents the covariance of the data values of category $a$ and category $b$, $\sigma_a^2$ represents the data value variance of category $a$, $\sigma_b^2$ represents the data value variance of category $b$, and $c_1$ and $c_2$ are calculation constants used in this embodiment. The category similarity between different categories in two adjacent frames of video data sequences is calculated by this method, which is based on the structural similarity; structural similarity is prior art and is not described in detail in this embodiment. The category similarity between category $a$ and each category in the previous video data sequence is obtained, and a preset first threshold is set for judging similar categories, the preset first threshold being 0.65 in this embodiment. Among the categories of the previous video data sequence whose category similarity with category $a$ is greater than the preset first threshold, the category with the largest category similarity is taken as the matching category of category $a$. According to this method, the matching category of each category in each video data sequence is obtained from the categories of that video data sequence and of the adjacent previous frame video data sequence. It should be noted that categories without a matching category exist; these are described later in this embodiment.
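The similarity and matching-category selection can be sketched as follows, assuming the SSIM-style form reconstructed above; pairing categories of unequal size for the covariance is handled here by truncation, which is an assumption of this sketch, and the constants c1 and c2 are placeholder values.

```python
# Category-similarity sketch (SSIM-style) plus matching-category selection with the
# first threshold of 0.65 used in this embodiment.
import numpy as np

def category_similarity(a_vals, b_vals, c1=1e-4, c2=9e-4):
    a = np.asarray(a_vals, dtype=float)
    b = np.asarray(b_vals, dtype=float)
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    n = min(len(a), len(b))
    cov = float(np.mean((a[:n] - mu_a) * (b[:n] - mu_b)))  # assumption: pair values by truncation
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def matching_category(current_vals, previous_categories, threshold=0.65):
    sims = {name: category_similarity(current_vals, vals) for name, vals in previous_categories.items()}
    best = max(sims, key=sims.get)
    return best if sims[best] > threshold else None        # None: no matching category exists

previous = {"wall": [12, 13, 12, 14], "lamp": [200, 205, 210, 199]}
print(matching_category([12, 13, 13, 14], previous))       # the dark, stable "wall" category matches
```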
Further, taking category $a$ in the current video data sequence as an example, the position of each data of category $a$ in the current video data sequence is obtained, and the interval value between every two adjacent data of category $a$ is obtained from these positions, i.e., the position of each data of category $a$ minus the position of the previous adjacent data of category $a$; the mean of all interval values is taken as the distribution interval mean of category $a$. According to this method, the distribution interval mean of each category in each video data sequence is obtained. The distribution interval mean represents the distribution distances of different pixel points of the same category in the video frame image and can be used to quantify the variability feature of the category.
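A sketch of the distribution-interval-mean computation for one category, with illustrative names:

```python
# Distribution-interval-mean sketch: positions of one category's data in the video
# data sequence are differenced and averaged.
import numpy as np

def distribution_interval_mean(seq_cat, category):
    positions = np.flatnonzero(np.asarray(seq_cat) == category)
    if len(positions) < 2:
        return 0.0
    return float(np.mean(np.diff(positions)))  # mean gap between consecutive same-category data

print(distribution_interval_mean([0, 1, 0, 0, 1, 0], 0))  # positions 0, 2, 3, 5 -> gaps 2, 1, 2 -> 5/3
```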
Further, taking category $a$ in the current video data sequence as an example, its variability feature $B_a$ is calculated as:

$$B_a = w_1 D_a + w_2\left(1-S_{a,a'}\right)$$

wherein $D_a$ represents the distribution difference degree between category $a$ and its matching category $a'$ in the previous video data sequence, which is obtained from the distribution interval means of the two categories by comparing the smaller value with the larger value; $S_{a,a'}$ represents the category similarity between category $a$ and its matching category $a'$; and $w_1$ and $w_2$ represent the reference weights. This embodiment considers the category similarity to be as important as the distribution difference degree and therefore uses equal reference weights. The larger the difference between the distribution interval means of the category and its matching category, the larger the distribution difference degree, the larger the distribution change of the pixel points of the category and the matching category between two adjacent video frame images, and the larger the variability feature of the category; the smaller the category similarity, the larger the difference between the category and the matching category in the two adjacent frames of video data sequences, and the larger the variability of the category. The variability feature of each category in each video data sequence is obtained according to this method. Specifically, this embodiment considers the variability feature of a category without a matching category to be maximal and sets the variability feature of such categories to 1.
It should be further noted that, since the variability of all data in a category is quantified at the category level, the variability feature of each data in a category can be characterized by the variability feature of the category; specifically, the variability feature of each category is assigned to the corresponding data of that category, so that the variability feature of each data in each video data sequence is obtained.
Further, taking the $i$-th data in the current video data sequence as an example, its information loss acceptance degree $R_i$ is calculated as:

$$R_i = G_i\left(1 - B_i\right)$$

wherein $G_i$ represents the stationarity feature of the $i$-th data in the current video data sequence and $B_i$ represents the variability feature of the $i$-th data. The larger the stationarity feature, the greater the likelihood that the pixel point corresponding to the data represents unchanging coal mine background information, and the larger the corresponding information loss acceptance degree; the larger the variability feature, the greater the likelihood that the pixel point corresponding to the data represents changing motion content, and the smaller the corresponding information loss acceptance degree. The information loss acceptance degree of each data in each video data sequence is obtained according to this method.
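A minimal sketch, assuming the reconstructed combination R = G·(1 − B) above:

```python
# Information-loss-acceptance sketch: high stationarity raises the acceptance,
# high variability lowers it.
def information_loss_acceptance(stationarity_feature: float, variability_feature: float) -> float:
    return stationarity_feature * (1.0 - variability_feature)

print(information_loss_acceptance(0.9, 0.2))  # background-like data: loss is acceptable
print(information_loss_acceptance(0.3, 0.9))  # moving content: loss is barely acceptable
```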
Thus, the variability characteristics of each category in each video data sequence are obtained, and the information loss acceptance degree of each data is obtained.
Step S004, segmenting each video data sequence according to the categories, obtaining the internal data of each segment, obtaining the adjustment reference degree of each internal data according to the data value and the information loss acceptance degree, obtaining the initial reference point according to the adjustment reference degree, obtaining a plurality of subsections of each segment in each category of each video data sequence according to the initial reference point, obtaining the adjustment acceptance degree of each initial reference point according to the initial reference point and the subsections, and obtaining the data sequence to be compressed of each video data sequence according to the adjustment acceptance degree.
It should be noted that, after the information loss acceptance degree of each data in each video data sequence is obtained, in combination with the idea of run-length coding, the data in the video data sequence need to be adjusted according to their information loss acceptance degrees in order to increase the compression rate and reduce the search time. In the adjustment process, initial reference points need to be determined first, i.e., which data the adjustment is based on, and the adjustment is performed after the reference points are determined. Since the same category corresponds to multiple segments in a video data sequence and multiple initial reference points exist in the same segment, the final reference data need to be determined from the initial reference points and the data values in the sub-segments, and the data sequence to be compressed of each video data sequence is then obtained accordingly.
Specifically, taking the current video data sequence as an example, each category is not distributed in one concentrated run in the current video data sequence, i.e., each category corresponds to multiple runs of data in the current video data sequence; each continuous run of data of a category is recorded as one segment, and the plurality of segments of each category in the current video data sequence is obtained. The data of each segment excluding its initial data and termination data are recorded as the internal data of the segment, and the initial data and termination data of each segment are taken as initial reference points of the corresponding category. The adjustment reference degree of each internal data is obtained from the information loss acceptance degree and the data value. Taking the $k$-th internal data of the $m$-th segment in the current video data sequence as an example, its adjustment reference degree $F_{m,k}$ is calculated as:

$$F_{m,k}=\left(\left|x_{m,k}-x_{m,k-1}\right|+\left|x_{m,k}-x_{m,k+1}\right|\right)\left(1-R_{m,k}\right)$$

wherein $R_{m,k}$ represents the information loss acceptance degree of the $k$-th internal data of the $m$-th segment, $x_{m,k}$, $x_{m,k-1}$ and $x_{m,k+1}$ represent the data values of the $k$-th, $(k-1)$-th and $(k+1)$-th internal data of the $m$-th segment, and $|\cdot|$ represents the absolute value. The larger the information loss acceptance degree, the greater the likelihood that the corresponding pixel point belongs to a changing portion, the smaller the likelihood that other data values are adjusted with this data as an initial reference point, and the smaller the adjustment reference degree; the larger the difference between the data value of this data and its adjacent data and the smaller its information loss acceptance degree, the greater the likelihood that this data is an internal dividing point of the same category, the smaller the likelihood that it is adjusted itself, the more other data can be adjusted with it as a reference, and the larger the adjustment reference degree. The adjustment reference degree of each internal data of each segment of each category in each video data sequence is obtained according to this method.
Further, the adjustment reference degrees of all internal data of each segment are linearly normalized, and the obtained results are recorded as the adjustment reference probability of each internal data; a preset second threshold is set for judging initial reference points, the preset second threshold being 0.65 in this embodiment, and internal data whose adjustment reference probability is greater than the preset second threshold are taken as initial reference points, thereby obtaining a plurality of initial reference points, including the initial data and termination data of each segment, in each video data sequence.
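The selection of initial reference points within one segment can be sketched as follows, assuming the reconstructed adjustment-reference form above; the linear normalization and the 0.65 threshold follow this embodiment, while the toy values are illustrative.

```python
# Initial-reference-point sketch for one segment: large jumps to both neighbours and
# low information-loss acceptance raise the adjustment reference degree; degrees are
# linearly normalised and compared with the second threshold.
import numpy as np

def initial_reference_points(segment_vals, segment_acceptance, threshold=0.65):
    vals = np.asarray(segment_vals, dtype=float)
    acc = np.asarray(segment_acceptance, dtype=float)
    idx = [0, len(vals) - 1]                              # start and end data are always reference points
    degrees = np.zeros(len(vals))
    for k in range(1, len(vals) - 1):                     # internal data only
        jump = abs(vals[k] - vals[k - 1]) + abs(vals[k] - vals[k + 1])
        degrees[k] = jump * (1.0 - acc[k])
    rng = degrees.max() - degrees.min()
    probs = (degrees - degrees.min()) / rng if rng > 0 else np.zeros_like(degrees)
    idx += [k for k in range(1, len(vals) - 1) if probs[k] > threshold]
    return sorted(set(idx))

print(initial_reference_points([10, 11, 60, 12, 11], [0.9, 0.8, 0.1, 0.8, 0.9]))  # [0, 2, 4]
```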
It should be further noted that multiple initial reference points exist among the internal data of each segment and the same category contains multiple segments, so the sub-range of each category needs to be obtained from the initial reference points, each segment is divided into multiple sub-segments according to the sub-range and the initial reference points, and the adjustment acceptance degree of each initial reference point is then obtained from the data value of each initial reference point and the distribution frequency of the sub-segments after the data of the same category are adjusted, so as to determine the final reference data, complete the data adjustment, and obtain the final data sequence to be compressed.
Specifically, taking category $a$ of the current video data sequence as an example, all segments of category $a$ are acquired, and the interval value between every two adjacent initial reference points in each segment is obtained, wherein the interval value is obtained from the positions of the initial reference points in the current video data sequence; the minimum of all interval values of category $a$ is taken as the sub-range of category $a$, and the minimum value of the sub-range is 2. When the segment length is 1, the initial data and the termination data of the segment are the same data; when the segment length is 2, the initial data and the termination data are immediately adjacent; in addition, immediately adjacent initial reference points may exist inside longer segments. The interval value between immediately adjacent initial reference points is 1, which is meaningless as a sub-range, and when the segment length is 1 the interval value is 0, which is likewise meaningless as a sub-range; therefore, the minimum value of the sub-range is set to 2.
Further, taking category $a$ of the current video data sequence as an example, the data between every two adjacent initial reference points in each segment of category $a$ are divided according to the sub-range to obtain a plurality of sub-segments. It should be noted that for segments of length 1 or 2 there is no other data between an initial reference point and its adjacent initial reference point, so no sub-segments are obtained there and they are not counted in the number of sub-segments, and no sub-segment contains an initial reference point. Meanwhile, the sub-ranges are divided without overlapping; when the data between two adjacent initial reference points cannot be divided exactly, i.e., a group of data remains whose amount is smaller than the sub-range, the last data between the two adjacent initial reference points (in left-to-right order) is taken as the last data of the sub-range and data are taken forward until the data amount of the sub-range is reached, thereby obtaining a sub-segment. Taking the $v$-th initial reference point of category $a$ of the current video data sequence as an example, its adjustment acceptance degree $Y_{a,v}$ is calculated as:

$$Y_{a,v}=\frac{C_{a,v}}{N_a}\sum_{u=1}^{L_a} r_{a,v,u}\,\exp\left(-\Delta_{a,v,u}\right)$$

wherein $N_a$ represents the number of sub-segments of category $a$ of the current video data sequence; $C_{a,v}$ represents the number of times the first sub-segment after the $v$-th initial reference point occurs in all sub-segments of category $a$ after all of its data values are adjusted to the data value of the $v$-th initial reference point; $L_a$ represents the sub-range of category $a$, i.e., the data amount within a sub-segment; $\Delta_{a,v,u}$ represents the absolute value of the difference between the data values before and after adjustment of the $u$-th data in the first sub-segment after the $v$-th initial reference point; and $r_{a,v,u}$ represents the normalized value of the information loss acceptance degree of that $u$-th data, where the normalization uses a softmax normalization function and is performed only within the first sub-segment after the $v$-th initial reference point of category $a$. After the data in the first sub-segment are adjusted based on the initial reference point, the larger its distribution frequency among all sub-segments, the larger the redundancy obtained by adjusting based on this initial reference point, the larger the compression rate, and the larger the adjustment acceptance degree; the smaller the absolute value of the difference between the data before and after adjustment and the larger the information loss acceptance degree, the more adjustable the data in the sub-segment, and the larger the adjustment acceptance degree of the corresponding initial reference point. The plurality of sub-segments of each category in each video data sequence and the adjustment acceptance degree of each initial reference point are obtained according to this method. It should be noted that there are initial reference points with no sub-segment, i.e., no first sub-segment exists after them; the adjustment acceptance degree of these initial reference points is 0.
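A sketch of the adjustment-acceptance computation for a single initial reference point, assuming the reconstructed form above; the sub-segment bookkeeping is simplified and the values are illustrative.

```python
# Adjustment-acceptance sketch: how often the adjusted first sub-segment reappears
# among all sub-segments of the category, weighted by how cheaply (small value change,
# softmax-normalised information-loss acceptance) the adjustment is made.
import numpy as np

def adjustment_acceptance(ref_value, first_subseg_vals, first_subseg_acc, all_subsegs):
    adjusted = [ref_value] * len(first_subseg_vals)          # every value becomes the reference value
    count = sum(1 for s in all_subsegs if list(s) == adjusted)
    r = np.exp(np.asarray(first_subseg_acc, dtype=float))
    r = r / r.sum()                                          # softmax within the first sub-segment
    deltas = np.abs(np.asarray(first_subseg_vals, dtype=float) - ref_value)
    return (count / len(all_subsegs)) * float(np.sum(r * np.exp(-deltas)))

subsegs = [[12, 12], [12, 12], [30, 31]]
print(adjustment_acceptance(12, [12, 13], [0.8, 0.6], subsegs))
```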
Further, taking category $a$ of the current video data sequence as an example, for the section of data between any two adjacent initial reference points, the maximum of the adjustment acceptance degrees of the two initial reference points is obtained, the initial reference point corresponding to the maximum is taken as the reference data of this section of data, and the data value of each data of this section is adjusted to the data value of the reference data. According to this method, the data in all segments of category $a$ of the current video data sequence are adjusted according to the adjustment acceptance degrees of the initial reference points. All data in the current video data sequence are adjusted according to this method, and the obtained result is recorded as the data sequence to be compressed of the current video data sequence. It should be noted that the data of the initial reference points in the video data sequence are not adjusted; only the data between two adjacent initial reference points of the same category are adjusted. The data sequence to be compressed of each video data sequence is obtained according to this method, giving the data sequence to be compressed of each video frame image.
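The adjustment between two adjacent initial reference points can be sketched as follows; names and values are illustrative.

```python
# Adjustment sketch: the initial reference point with the larger adjustment acceptance
# supplies the value to which every datum strictly between the two points is set;
# the reference points themselves are left unchanged.
import numpy as np

def adjust_between(seq, left_idx, right_idx, accept_left, accept_right):
    seq = np.asarray(seq, dtype=float).copy()
    ref_idx = left_idx if accept_left >= accept_right else right_idx
    seq[left_idx + 1:right_idx] = seq[ref_idx]    # only the data strictly between the points change
    return seq

print(adjust_between([12, 14, 13, 15, 30], 0, 4, accept_left=0.7, accept_right=0.2))
```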
So far, the data sequence to be compressed of each video frame image is obtained through the acquisition and adjustment of the acceptance degree of the initial reference point.
And S005, compressing the data sequence to be compressed of each video frame image to finish the self-adaptive compression of the coal mine video data.
Adaptive LZW compression is performed on the acquired data sequence to be compressed of each video frame image. Taking the data sequence to be compressed of the current frame image as an example, it is recorded as the current data sequence to be compressed, its data amount is obtained and recorded as the current data amount, and an initial preset division length is given, which is set to 15 in this embodiment. The current data sequence to be compressed is divided into a plurality of segments according to the initial preset division length, the number of occurrences of each segment among all segments is counted, and the segment with the largest number of occurrences is taken as the optimal segment under the initial preset division length. The preset division length is then increased by a preset increase length, which is set to 1 in this embodiment, with the maximum preset division length equal to the current data amount; the optimal segment under each preset division length is obtained according to this method. The optimal segments under all preset division lengths are used as the dictionary in the initial LZW compression algorithm of the current data sequence to be compressed, and the current data sequence to be compressed is compressed according to this dictionary to obtain the compressed data sequence of the current video frame image. According to this method, the dictionary in the initial LZW compression algorithm and the compressed data sequence are obtained for each video frame image; a sketch of the dictionary seeding step is given below. The compressed data sequence of each video frame image is transmitted together with the dictionary in the final LZW compression algorithm of each video frame image from the server to the coal mine video monitoring system terminal through the network, realizing real-time monitoring of the coal mine, i.e., completing the self-adaptive compression and transmission of the coal mine video data.
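A sketch of the dictionary-seeding step referenced above, assuming a simple non-overlapping segmentation at each preset division length; helper names are illustrative and the exact bookkeeping of the embodiment may differ.

```python
# Dictionary-seeding sketch: for every preset division length (15, 16, ...) keep the
# most frequent segment, then seed these "optimal segments" into the initial LZW
# dictionary before ordinary LZW coding begins.
from collections import Counter

def optimal_segments(to_compress, start_len=15, step=1):
    segments = []
    for length in range(start_len, len(to_compress) + 1, step):
        chunks = [tuple(to_compress[p:p + length]) for p in range(0, len(to_compress), length)]
        chunks = [c for c in chunks if len(c) == length]          # drop the short tail chunk
        if chunks:
            segments.append(Counter(chunks).most_common(1)[0][0])  # optimal segment for this length
    return segments

def seeded_lzw_dictionary(to_compress, alphabet_size=256):
    dictionary = {(s,): s for s in range(alphabet_size)}           # single-symbol entries
    code = alphabet_size
    for seg in optimal_segments(to_compress):
        if seg not in dictionary:
            dictionary[seg] = code
            code += 1
    return dictionary

seq = [12] * 40 + [30, 31] * 10
print(len(seeded_lzw_dictionary(seq)))   # 256 single symbols plus the seeded optimal segments
```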
Further, after receiving the compressed data sequence of the current video frame image, the coal mine video monitoring system terminal looks up each code of the compressed data sequence in the final dictionary of the LZW algorithm, starting from the first data, adds the converted data to the decompressed data sequence, and repeats this step to obtain the decompressed data sequence. From the obtained decompressed data sequence, data conversion is performed in the scanning order using the previous scanning mode, and the final result, i.e., the current video frame image, is obtained. According to this method, the compressed data sequence of each video frame image is decompressed to obtain the corresponding video frame image, the video frame images are arranged in time order, and the terminal thus decompresses and obtains the coal mine video data, completing the decompression of the coal mine video data.
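A simplified decompression sketch at the terminal side, assuming the final dictionary is transmitted alongside the codes (as described above) so that decoding reduces to a lookup; a full LZW decoder that rebuilds the dictionary incrementally is omitted here.

```python
# Terminal-side decompression sketch: every received code is looked up in the
# transmitted dictionary and the recovered sequence is reshaped back into a frame
# in the original raster-scan order.
import numpy as np

def decompress(codes, dictionary, frame_shape):
    inverse = {code: symbols for symbols, code in dictionary.items()}
    flat = [s for c in codes for s in inverse[c]]            # expand every code into its symbols
    return np.asarray(flat, dtype=np.uint8).reshape(frame_shape)

dictionary = {(0,): 0, (255,): 1, (0, 255, 0, 255): 2}
print(decompress([2, 2], dictionary, (2, 4)))
```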
Thus, the self-adaptive compression of the coal mine video data is completed.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (4)

1. The adaptive compression method for the video data of the coal mine is characterized by comprising the following steps of:
collecting video data of a coal mine;
clustering pixel points in each video frame image in the coal mine video data according to gray values to obtain a plurality of categories of each video frame image, obtaining a video data sequence and a reference data sequence of each video frame image, obtaining a plurality of categories of each video data sequence according to the categories of each video frame image, and obtaining the stationarity feature of each data in each video data sequence according to the video data sequence, the reference data sequence and the category to which each data belongs, wherein the stationarity feature represents the possibility that the data is background portion data;
acquiring the category similarity between each category in each video data sequence and each category in the adjacent previous frame video data sequence according to the data in each category of each video data sequence and the data in each category of the adjacent previous frame video data sequence, acquiring the matching category of each category in each video data sequence according to the category similarity, acquiring the distribution interval mean of each category in each video data sequence, acquiring the variability feature of each category according to the distribution interval mean and the category similarity between the category and its matching category, assigning the variability feature of each category to the corresponding data of that category to obtain the variability feature of each data in each video data sequence, and obtaining the information loss acceptance degree of each data in each video data sequence according to the stationarity feature and the variability feature; the information loss acceptance degree is the importance degree corresponding to each data, and the variability feature is the difference in distribution change between each category and its matching category in two adjacent frames of video data sequences;
acquiring a plurality of segments of each category in each video data sequence according to the category, marking the data of each segment other than the initial data and the termination data as the internal data of the segment, taking the initial data and the termination data of each segment as initial reference points of the corresponding category, acquiring the adjustment reference degree of each internal data according to the data value and the information loss acceptance degree, acquiring the initial reference points of each category according to the adjustment reference degree, acquiring a plurality of sub-segments of each category according to the initial reference points, acquiring the adjustment acceptance degree of each initial reference point according to the initial reference point and the sub-segments, and acquiring the data sequence to be compressed of each video data sequence according to the adjustment acceptance degree, so as to obtain the data sequence to be compressed of each video frame image;
compressing the data sequence to be compressed of each video frame image to finish the compression of the coal mine video data;
the method for acquiring the stationarity characteristic of each data in each video data sequence comprises the following specific steps:
$P_i = \exp\left(-\frac{1}{n_i}\sum_{j=1}^{n_i}\left|x_{i,j}-y_{i,j}\right|\right)$

wherein $P_i$ represents the stationarity feature of the $i$-th data in the current video data sequence, $n_i$ represents the data quantity of the local range of the $i$-th data, $x_{i,j}$ represents the data value of the $j$-th data in the local range of the $i$-th data, $y_{i,j}$ represents the data value at the same position in the reference data sequence of the current video data sequence as the $j$-th data in the local range of the $i$-th data, $\exp(\cdot)$ represents an exponential function with the natural constant as its base, and $|\cdot|$ represents taking the absolute value;

the local range of the $i$-th data consists of a preset quantity of data of the same category taken forward and a preset quantity taken backward from the $i$-th data;
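A minimal sketch of this stationarity feature, assuming the exponential-of-mean-absolute-difference form given above and a symmetric window of same-category neighbours; the window size and the neighbour-gathering scheme are assumptions.

```python
import math

def stationarity(values, ref_values, labels, i, preset=5):
    """exp(-mean |value - reference value|) over the local range of data i,
    the local range being up to `preset` same-category data before and after i."""
    same = [j for j in range(len(values)) if labels[j] == labels[i]]
    pos = same.index(i)
    local = same[max(0, pos - preset):pos + preset + 1]
    diffs = [abs(values[j] - ref_values[j]) for j in local]
    return math.exp(-sum(diffs) / len(diffs))
```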
the method for obtaining the category similarity between each category in each video data sequence and each category in the video data sequence of the adjacent previous frame comprises the following specific steps:
the video data sequence of the adjacent previous frame of the current video data sequence is recorded as the previous video data sequence, and the category similarity $S_{a,b}$ between category $a$ in the current video data sequence and category $b$ in the previous video data sequence is calculated as follows:

$S_{a,b} = \dfrac{\left(2\mu_a\mu_b + c_1\right)\left(2\sigma_{ab} + c_2\right)}{\left(\mu_a^2 + \mu_b^2 + c_1\right)\left(\sigma_a^2 + \sigma_b^2 + c_2\right)}$

wherein $\mu_a$ represents the mean of the data values of category $a$ in the current video data sequence, $\mu_b$ represents the mean of the data values of category $b$ in the previous video data sequence, $\sigma_{ab}$ represents the covariance of the data values of category $a$ and category $b$, $\sigma_a^2$ represents the variance of the data values of category $a$, $\sigma_b^2$ represents the variance of the data values of category $b$, and $c_1$ and $c_2$ are calculation constants;
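The similarity above has the form of a structural-similarity (SSIM) index computed on the data values of the two categories. A minimal sketch follows; pairing the values positionally for the covariance and the particular constants c1 and c2 are assumptions.

```python
def category_similarity(a, b, c1=1e-4, c2=1e-4):
    """SSIM-style similarity between the data values of category a and category b."""
    mu_a, mu_b = sum(a) / len(a), sum(b) / len(b)
    var_a = sum((x - mu_a) ** 2 for x in a) / len(a)
    var_b = sum((x - mu_b) ** 2 for x in b) / len(b)
    n = min(len(a), len(b))                      # the categories need not have equal size
    cov = sum((a[i] - mu_a) * (b[i] - mu_b) for i in range(n)) / n
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```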
the method for acquiring the variability feature of each category comprises the following specific steps:

the variability feature $V_a$ of category $a$ in the current video data sequence is obtained from the degree of distribution difference $D_a$ between category $a$ and its matching category in the previous video data sequence and from the category similarity $S_{a,b'}$ between category $a$ and its matching category $b'$, the two terms being combined through the reference weights $w_1$ and $w_2$;

the degree of distribution difference is obtained from the smaller value and the larger value of the distribution interval mean values of the two categories;
the method for obtaining the information loss acceptance degree of each data in each video data sequence comprises the following specific steps:
the information loss acceptance degree $R_i$ of the $i$-th data in the current video data sequence is obtained from the stationarity feature $P_i$ of the $i$-th data and the variability feature $V_i$ of the $i$-th data;
the method for acquiring the adjustment reference degree of each internal data according to the data value and the information loss acceptance degree comprises the following specific steps:
the adjustment reference degree of the $k$-th internal data of the $j$-th segment of category $a$ in the current video data sequence is obtained from the information loss acceptance degree of the $k$-th internal data and from the absolute values of the differences between the data value of the $k$-th internal data and the data values of other data of the same segment;
the method for acquiring the adjustment acceptance degree of each initial reference point according to the initial reference point and the sub-segments comprises the following specific steps:

the adjustment acceptance degree of the $m$-th initial reference point of category $a$ in the current video data sequence is obtained from: the number of sub-segments of category $a$; the number of times, among all sub-segments of category $a$, that the first sub-segment after the $m$-th initial reference point occurs once all of its data values have been adjusted to the data value of the $m$-th initial reference point; the sub-range of category $a$; the absolute value of the difference between the data values before and after adjustment of each data in the first sub-segment after the $m$-th initial reference point; and the normalized value of the information loss acceptance degree of each data in that sub-segment.
2. The adaptive compression method for video data of coal mine according to claim 1, wherein the steps of obtaining the video data sequence and the reference data sequence of each video frame image comprise the following specific steps:
scanning each pixel point of each video frame image to obtain a data sequence corresponding to each video frame image, and recording the data sequence as a video data sequence of each video frame image;
and acquiring a reference video image of each video frame image, and scanning each pixel point of the reference video image to obtain a reference data sequence of each video frame image.
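Claim 2's scanning amounts to flattening each frame and its reference image in one fixed scan order; a minimal sketch assuming a row-major scan:

```python
def scan_to_sequence(image):
    """Flatten a 2-D gray-value image into a 1-D data sequence (row-major scan)."""
    return [pixel for row in image for pixel in row]

# video_seq = scan_to_sequence(frame)            # video data sequence of the frame
# ref_seq   = scan_to_sequence(reference_frame)  # reference data sequence, same positions
```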
3. The adaptive compression method of coal mine video data according to claim 1, wherein obtaining the sub-segments of each category according to the initial reference points comprises the following specific steps:

acquiring category $a$ of the current video data sequence and the segments of category $a$, and obtaining the interval value of every two adjacent initial reference points in each segment, wherein the interval value is obtained from the positions of the initial reference points in the current video data sequence; taking the minimum of all interval values of category $a$ as the sub-range of category $a$;

dividing the data between every two adjacent initial reference points in each segment of category $a$ according to the sub-range, so as to obtain the sub-segments of category $a$.
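A minimal sketch of this division for one category's segment, assuming the reference-point positions are handled per segment and the reference points themselves are excluded from the pieces:

```python
def sub_segments(seq, ref_positions):
    """Sub-range and sub-segments for one category's segment.
    ref_positions: sorted positions of the segment's initial reference points in seq."""
    gaps = [b - a for a, b in zip(ref_positions, ref_positions[1:])]
    sub_range = min(gaps)
    pieces = []
    for a, b in zip(ref_positions, ref_positions[1:]):
        between = seq[a + 1:b]                   # data strictly between the two points
        pieces += [between[i:i + sub_range] for i in range(0, len(between), sub_range)]
    return sub_range, pieces                     # the last piece of a gap may be shorter
```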
4. The adaptive compression method of coal mine video data according to claim 1, wherein the step of obtaining the data sequence to be compressed of each video data sequence according to the adjustment acceptance degree comprises the following specific steps:
acquiring category $a$ of the current video data sequence and the segments of category $a$; taking any two adjacent initial reference points as a target point pair, taking the piece of data between the target point pair as target segment data, obtaining the maximum of the adjustment acceptance degrees of the two initial reference points in the target point pair, taking the initial reference point corresponding to the maximum as the reference data of the target segment data, and adjusting the data value of each data of the target segment data to the data value of the reference data;

adjusting the data in all segments of category $a$ of the current video data sequence according to the adjustment acceptance degrees of the initial reference points, adjusting all data in the current video data sequence in this way, and recording the obtained result as the data sequence to be compressed of the current video data sequence; the data sequence to be compressed of each video data sequence is acquired in the same way.
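A minimal sketch of this adjustment for one category's segment, assuming the adjustment acceptance degrees are indexed by position and the reference points themselves are left unchanged:

```python
def adjust_to_reference(seq, ref_positions, acceptance):
    """Build one segment's contribution to the data sequence to be compressed."""
    out = list(seq)
    for a, b in zip(ref_positions, ref_positions[1:]):
        ref = a if acceptance[a] >= acceptance[b] else b    # point with larger acceptance
        for i in range(a + 1, b):
            out[i] = seq[ref]                               # target segment data -> reference value
    return out
```

Overwriting each target segment with a constant run is what the later segment-frequency dictionary and the LZW pass exploit.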
CN202310882458.3A 2023-07-19 2023-07-19 Coal mine video data self-adaptive compression method Active CN116600132B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310882458.3A CN116600132B (en) 2023-07-19 2023-07-19 Coal mine video data self-adaptive compression method

Publications (2)

Publication Number Publication Date
CN116600132A CN116600132A (en) 2023-08-15
CN116600132B true CN116600132B (en) 2023-10-31

Family

ID=87606630

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310882458.3A Active CN116600132B (en) 2023-07-19 2023-07-19 Coal mine video data self-adaptive compression method

Country Status (1)

Country Link
CN (1) CN116600132B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116828211B (en) * 2023-08-30 2023-11-14 华洋通信科技股份有限公司 Wireless transmission self-adaptive coding method for video under mine
CN116823975B (en) * 2023-08-31 2023-12-12 华洋通信科技股份有限公司 Coal mine data optimized storage method
CN117394866B (en) * 2023-10-07 2024-04-02 广东图为信息技术有限公司 Intelligent flap valve system based on environment self-adaption
CN117318729A (en) * 2023-11-27 2023-12-29 山东济宁运河煤矿有限责任公司 Parameter management system for underground explosion-proof electrical equipment of coal mine

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1251449A (en) * 1998-10-18 2000-04-26 华强 Combined use with reference of two category dictionary compress algorithm in data compaction
WO2004039081A1 (en) * 2002-10-24 2004-05-06 Boram C& C Co., Ltd Real time lossless compression and restoration method of multi-media data and system thereof
CN111062314A (en) * 2019-12-13 2020-04-24 腾讯科技(深圳)有限公司 Image selection method and device, computer readable storage medium and electronic equipment
CN115914649A (en) * 2023-03-01 2023-04-04 广州高通影像技术有限公司 Data transmission method and system for medical video
CN116383704A (en) * 2023-04-17 2023-07-04 中煤科工集团上海有限公司 LIBS single spectral line-based coal and rock identification method

Also Published As

Publication number Publication date
CN116600132A (en) 2023-08-15

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant