CN116055722A - Data storage method for automatic white spirit production system - Google Patents

Data storage method for automatic white spirit production system

Info

Publication number
CN116055722A
CN116055722A (Application CN202310201820.6A)
Authority
CN
China
Prior art keywords
image
pixel
labeled
area
label
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310201820.6A
Other languages
Chinese (zh)
Other versions
CN116055722B (en)
Inventor
马广圣
马广含
宋词
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Liangshan Distillery Co ltd
Original Assignee
Shandong Liangshan Distillery Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Liangshan Distillery Co ltd filed Critical Shandong Liangshan Distillery Co ltd
Priority to CN202310201820.6A priority Critical patent/CN116055722B/en
Publication of CN116055722A publication Critical patent/CN116055722A/en
Application granted granted Critical
Publication of CN116055722B publication Critical patent/CN116055722B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/132Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/182Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/423Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
    • H04N19/426Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements using memory downsizing methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a data storage method for an automatic white spirit production system, and belongs to the technical field of data processing. The method comprises the following steps: acquiring an unlabeled image of each wine bottle and the video frame image in which the labeled wine bottle first appears in its entirety in the monitoring video; acquiring the label region corresponding to each pixel region; acquiring the attention degree of each pixel region according to the average coding value and average edge chain code value of the pixel region and of its corresponding label region; acquiring the importance degree of each pixel region; and compressing and storing, according to the importance degree of each pixel region, the video frame image in which the labeled wine bottle first appears in its entirety in the monitoring video. By obtaining the importance degrees of different pixel regions, the invention retains the complete video image during video compression while achieving a higher degree of compression and occupying as little storage space as possible.

Description

Data storage method for automatic white spirit production system
Technical Field
The invention relates to the technical field of data processing, in particular to a data storage method for a white spirit automatic production system.
Background
In the automatic white spirit production process, each production link needs to be monitored, and various sensors are installed on production and processing equipment, for example: temperature sensors, pressure sensors, monitoring systems, etc., generate large amounts of data and require storage of such data to facilitate finding the corresponding data in subsequent quality determinations. The monitoring data needs to occupy a large storage space, so that compression storage of the monitoring data is necessary.
In the prior art, monitoring data is compressed and stored mainly by predictive coding based on image characteristics: pixels to be compressed are predicted from already-coded pixels, which yields a lossy compression method. Compressing a monitoring video, however, must account for the changes between frames, and because the inter-frame correlation exploited by this method is poor, compressing video frames with it does not achieve a good compression effect. In the automatic production process of a winery, every link needs to be monitored; in particular, the monitoring video of labeling wine bottles on the automatic white spirit production line must be compressed and stored. For this, the prior art compresses the video frame images with a pyramid sampling algorithm that generally uses the same sampling size throughout, so the video is not compressed thoroughly and the degree of compression is limited: choosing a larger sampling size to occupy less storage space easily loses the information of important regions during compression, while choosing a smaller sampling size to retain the important information occupies more storage space.
Disclosure of Invention
The prior art compresses video frame images with a pyramid sampling algorithm that generally uses the same sampling size, so the compression is not thorough and its degree is limited; a larger sampling size chosen to occupy less storage space easily loses the information of important regions, while a smaller sampling size chosen to preserve the important information occupies more storage space. To solve these problems, the invention provides a data storage method for an automatic white spirit production system.
The invention aims to provide a data storage method for an automatic white spirit production system, which comprises the following steps of:
acquiring an unlabeled image of each wine bottle and the video frame image in which the labeled wine bottle first appears in its entirety in the monitoring video; respectively encoding the unlabeled image and the labeled video frame image to obtain an unlabeled coded image and a labeled coded image;
presetting a plurality of label regions on the unlabeled coded image; differencing the unlabeled coded image and the labeled coded image to obtain a plurality of pixel regions in the labeled coded image whose coding has changed relative to the unlabeled coded image; and acquiring the label region corresponding to each pixel region according to the area of each pixel region and the area of each label region;
acquiring the attention degree of each pixel region according to the average coding value and the average edge chain code value of each pixel region and the average coding value and the average edge chain code value of the corresponding label region; acquiring the importance degree of each pixel region according to the attention degree of each pixel region and the edge difference of the pixel regions in the coded image of the continuous multi-frame labeling;
the coded image of the continuous multi-frame labeling refers to the coded image which comprises a video frame image when the labeled first appears in the monitoring video and the continuous multi-frame labeling corresponding to the multi-frame video frame image after the labeled first appears in the monitoring video;
and compressing and storing the video frame image when the labeled whole first appears in the monitoring video according to the importance degree of each pixel area.
In one embodiment, the attention of each pixel region is obtained according to the following steps:
acquiring the difference between the average coding value of each pixel region and the average coding value of the corresponding tag region; obtaining the difference between the average edge chain code value of each pixel region and the average edge chain code value of the corresponding label region;
and adding the difference between the average coding value of each pixel region and the average coding value of the corresponding label region to the difference between the average edge chain code value of each pixel region and the average edge chain code value of the corresponding label region, multiplying the sum by the ratio of the number of pixel points in each pixel region to the number of all pixel points in the labeled coded image, and obtaining the attention degree of each pixel region.
In one embodiment, the edge differences of the pixel regions in the coded images of the continuous multi-frame labeling are obtained by:
and taking the sum of the differences of the average edge chain code values of the pixel areas in all adjacent two frames of labeled coded images in the continuous multi-frame labeled coded images as the edge differences of the pixel areas in the continuous multi-frame labeled coded images.
In one embodiment, the importance level of each pixel region is obtained according to the following steps:
and taking the product of the attention degree of each pixel area and the value obtained by carrying out negative correlation mapping and normalization on the absolute value of the sum value as the importance degree of each pixel area.
In an embodiment, the label region corresponding to each pixel region is obtained according to the following steps: the label region whose area equals the area of the pixel region is taken as the label region corresponding to that pixel region.
In one embodiment, the unlabeled encoded image and the labeled encoded image are obtained by:
and respectively acquiring the unlabeled coded image and the labeled coded image by adopting hexadecimal coding on the gray value of each pixel point in the unlabeled image and the labeled video frame image.
In one embodiment, compressing and storing the video frame image when the labeled whole first appears in the monitoring video according to the importance degree of each pixel area includes:
converting the coded values in the corresponding labeled coded images into binary values to obtain an image to be compressed;
acquiring the sampling size of each pixel region corresponding to the image to be compressed according to the importance degree of each pixel region;
pyramid sampling is carried out on the image to be compressed to obtain a compressed image; storing the compressed image;
when pyramid sampling is performed on each pixel area, sampling is performed according to the sampling size of each pixel area.
The beneficial effects of the invention are as follows: the invention provides a data storage method for an automatic white spirit production system. The unlabeled coded image and the corresponding labeled coded image are differenced to obtain the pixel regions in the labeled coded image whose coding has changed relative to the unlabeled coded image; because the coded image is assigned values artificially, it can effectively characterize the importance of different regions of the image. A number of label regions are preset on the unlabeled coded image and the label region corresponding to each pixel region is obtained, which makes it easy to distinguish the labeled video frame image from the image before labeling. The attention degree of each pixel region is then obtained from the changes of the coding values and edge chain codes between the labeled video frame image and the image before labeling: the greater the change of the coding values and chain codes, the greater the attention degree of the region, which means the label is more skewed. The importance degree of each pixel region is then calculated from its attention degree and the change of the average edge chain code value across the coded images of consecutive frames, which overcomes false region changes in the coded values caused by shaking of objects moving on the conveyor belt. Finally, the video frame image in which the labeled wine bottle first appears in its entirety in the monitoring video is compressed and stored according to the importance degree of each pixel region. During compression it only needs to be judged whether a pixel region has changed; for changed regions, the importance degrees of the different regions ensure that no important region information is lost when the video is compressed and the complete video image can be retained, while a higher degree of compression is still achieved, so a large amount of storage space is saved and the important region information is preserved during compression.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart illustrating the overall steps of an embodiment of a data storage method for an automatic white spirit production system according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention is aimed at the following situation: in the automated production process of a winery, every link needs to be monitored. The monitoring video, however, occupies a large amount of storage space, especially for a large-scale monitoring system in a factory, and it must be stored for a long time before it is automatically overwritten, so a large storage capacity is needed; compressing the monitoring video is therefore very necessary. Specifically, the monitoring video of labeling wine bottles on the automatic white spirit production line is compressed and stored, and it is used to record whether the label was attached at the preset position on the bottle when the bottle was labeled.
The invention completes the compression of the video in the process of monitoring video acquisition, rather than compressing the monitoring video after storing the monitoring video, thus being a dynamic compression process, namely compressing the fixed area and the dynamic area in the monitoring process to different degrees according to the change of continuous frames of the video, and realizing the self-coding compression of the video.
The invention provides a data storage method for an automatic white spirit production system, which is shown in fig. 1 and comprises the following steps:
s1, acquiring an unlabeled image of each wine bottle and a video frame image of the first integral appearance of the labeled monitoring video.
The obtained video frame image of the label on the wine bottle refers to a video frame image only containing the wine bottle area, and the label is attached on the wine bottle; likewise, an unlabeled image on a wine bottle also refers to a video frame image that contains only the area of the wine bottle, but the wine bottle is unlabeled.
In the automatic production process of the winery, a mechanical arm labels the untagged wine bottles, and the labeled bottles are arranged and conveyed from left to right on a conveyor belt. If the picture in the monitoring video shows only one labeled wine bottle, the video frame in which that labeled bottle first appears in its entirety in the monitoring video is obtained directly, and the video frame image containing only the bottle region is taken as the labeled video frame image of that bottle; here, a video frame image containing only the bottle region shows the bottle region and no other areas. If the picture in the monitoring video shows several wine bottles, the video frame in which the leftmost labeled bottle first appears in its entirety is acquired and used as the labeled video frame image. In addition, when collecting the video frame image containing only the bottle region, several consecutive frames containing only the bottle region are collected; that is, after the frame in which the whole labeled bottle first appears has been collected, multiple further frames of the labeled bottle are collected, and these continuously collected frames likewise contain only the bottle region. The bottle region in the frame where the labeled bottle first appears in its entirety is identified with a DNN semantic segmentation neural network to obtain the video frame image containing only the bottle region; DNN semantic segmentation is a known technique and is not described in detail here.
In this embodiment, the unlabeled image of the wine bottle refers to a video frame image containing only the bottle region, taken when the bottle is about to be labeled. In order to calculate which regions of the bottle change between the images before and after labeling, only a video frame image containing the bottle region is acquired for the unlabeled image; for the labeled video frame image, the frame in which the whole labeled bottle appears in the monitoring video is collected.
Collecting the labeled video frame image mainly serves to judge whether the label has been attached to the wine bottle at the preset position; if the label is attached askew, a changed region appears in the collected labeled video frame image, and this frame therefore requires special attention. For this reason the unlabeled image and the labeled video frame image of the bottle both need to be obtained, the importance degree of each changed region is derived from the regions that differ between the images before and after labeling, and the video frame image in which the whole labeled bottle first appears in the monitoring video is compressed with different sampling sizes.
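As a concrete illustration of this acquisition step, the sketch below crops a monitoring-video frame to the wine-bottle region given a binary bottle mask; the mask is assumed to come from the DNN semantic segmentation network mentioned above, and the function names and the NumPy-based interface are illustrative assumptions rather than details of the original disclosure.

import numpy as np

def crop_to_bottle(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Return the sub-image that contains only the wine-bottle region.

    `frame` is one monitoring-video frame (H x W or H x W x 3) and `mask`
    is a binary array of the same height/width in which the upstream
    semantic segmentation model marked bottle pixels with 1 (an assumption
    about the model's output format).
    """
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return frame[r0:r1 + 1, c0:c1 + 1]

# The unlabeled image and the labeled video frame image used later are both
# obtained this way, so that only the bottle area enters the comparison.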
S2, respectively encoding the unlabeled image and the labeled video frame image to obtain an unlabeled encoded image and a labeled encoded image.
Specifically, the unlabeled coded image and the labeled coded image are obtained as follows: hexadecimal coding is applied to the gray value of each pixel point in the unlabeled image and in the labeled video frame image, respectively, giving the unlabeled coded image and the labeled coded image. Labeled coded images are likewise obtained for the consecutive multi-frame video frame images that follow the labeled video frame image in which the bottle first appears in its entirety in the monitoring video.
In this embodiment, the unlabeled image and the labeled video frame image are encoded, and the encoding rule is to encode according to the gray level of the image, and the specific encoding process is as follows:
Graying treatment is carried out on the image by the weighted average method to obtain the corresponding gray image, where the image refers to the untagged image or the tagged video frame image, and the gray values are distributed in [0, 255]. The image is then graded according to its gray scale: if the gray values of the image are distributed in [G_min, G_max], the hexadecimal coding rule divides the gray scale of the image into sixteen gray levels with span

d = (G_max - G_min) / 16

where d is the gray span value of each gray level, G_max is the maximum gray value of the image, and G_min is the minimum gray value of the image. The image is then coded according to the gray level into which each pixel point falls, finally yielding the coded image.
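As a concrete illustration of this sixteen-level coding rule, the following minimal sketch (Python with NumPy) quantizes a grayscale array into the codes 0 to 15; the function name encode_gray_levels and the handling of a flat image are illustrative assumptions rather than details fixed by the original text.

import numpy as np

def encode_gray_levels(gray: np.ndarray) -> np.ndarray:
    """Quantize a grayscale image into the 16 levels 0..15 (0..F).

    Implements the rule d = (G_max - G_min) / 16 described above: each
    pixel is assigned the index of the gray span it falls into.
    """
    g_min, g_max = float(gray.min()), float(gray.max())
    d = (g_max - g_min) / 16.0
    if d == 0.0:                       # flat image: every pixel gets code 0
        return np.zeros_like(gray, dtype=np.int32)
    codes = np.floor((gray.astype(np.float64) - g_min) / d).astype(np.int32)
    return np.clip(codes, 0, 15)       # the maximum gray value maps to F (15)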
S3, acquiring a label area corresponding to each pixel area;
Presetting a plurality of label regions on the unlabeled coded image; differencing the unlabeled coded image and the labeled coded image to obtain a plurality of pixel regions in the labeled coded image whose coding has changed relative to the unlabeled coded image; and acquiring the label region corresponding to each pixel region according to the area of each pixel region and the area of each label region.
It should be noted that the number of preset label areas corresponds to the number of labels to be applied, and each label area corresponds to one label to be applied.
In this embodiment, a key region is obtained from the change between the unlabeled coded image and the labeled coded image, and the key region is the labeled region. Because the coded image is encoded according to the gray levels of the image, obtaining it is equivalent to reassigning values to the pixel points of the image, which mainly serves to make the key areas of the image easier to describe. In the image itself the pixel points have no priority order; all pixel points are equal and together characterize the whole image. In the coded image, however, values are assigned artificially so that the data acquire a priority order: the hexadecimal digits are ordered 0, 1, ..., 9, A, B, ..., F, where 0 represents the lowest priority and F represents the highest priority. The purpose of this arrangement is to obtain the coded values of different regions of the image, by which the importance of the different regions of the image is characterized.
In order to obtain the change between the unlabeled coded image and the labeled coded image, a plurality of label regions are preset on the unlabeled coded image; that is, the regions to be labeled are preset on the unlabeled wine bottle and recorded as label regions. Since more than one region needs to be labeled, a plurality of label regions are preset.
Since the labeled coded image refers to a coded image corresponding to a video frame image that has been labeled on a wine bottle, the wine bottle includes a plurality of labeled areas, and the coded value between the labeled coded image and the unlabeled coded image changes. The untagged wine bottle and the labeled wine bottle are the same wine bottle.
In this embodiment, the unlabeled coded image and the labeled coded image are differenced to obtain a plurality of pixel regions in the labeled coded image whose coding has changed relative to the unlabeled coded image. Each pixel region is a region of the labeled coded image whose coding changed, the change being caused by labeling; differencing alone, however, does not reveal which label region a pixel region corresponds to, so the label region corresponding to each pixel region is obtained from the area of each pixel region and the area of each label region. In this embodiment the areas of the different labels are different, so the labels can be distinguished by their areas. Specifically, the label region corresponding to each pixel region is obtained as follows: the label region whose area equals the area of the pixel region is taken as the label region corresponding to that pixel region.
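A minimal sketch of this differencing and area-based matching is given below, assuming the two coded images are NumPy arrays of the same shape and that connected components are extracted with SciPy's ndimage.label; matching by the closest area is a tolerant stand-in for the exact area equality the text describes, and the function names are assumptions.

import numpy as np
from scipy import ndimage

def changed_pixel_regions(coded_unlabeled: np.ndarray,
                          coded_labeled: np.ndarray):
    """Label the connected regions whose code changed after labeling."""
    changed = coded_unlabeled != coded_labeled
    region_map, n_regions = ndimage.label(changed)
    return region_map, n_regions

def match_region_to_tag(region_map: np.ndarray, n_regions: int,
                        tag_areas: dict) -> dict:
    """Map each pixel region to the preset label region of (nearly) equal area.

    `tag_areas` maps a label-region id to its area in pixels; the patent
    assumes the label areas are pairwise different, so the closest area
    identifies the label region uniquely.
    """
    mapping = {}
    for i in range(1, n_regions + 1):
        area_i = int(np.count_nonzero(region_map == i))
        mapping[i] = min(tag_areas, key=lambda t: abs(tag_areas[t] - area_i))
    return mapping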
S4, acquiring the attention degree of each pixel region according to the average coding value and the average edge chain code value of each pixel region and the average coding value and the average edge chain code value of the corresponding label region.
It should be noted that, in this embodiment, the attention degree of different objects is determined mainly by the overall encoding of the image and the change of the chain code of the edge.
In this embodiment, the edge chain code of each pixel region is obtained with the Freeman chain code algorithm, and the average edge chain code value of the region is then obtained by averaging the edge chain code values of that region. Similarly, for the corresponding label region, the edge chain code of the preset label region is obtained first, and the average edge chain code value of the label region is obtained by averaging its edge chain code values.
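The average edge chain code values used in this step can be computed as sketched below, assuming an 8-direction Freeman chain code over an already-extracted, ordered region boundary; boundary tracing itself (for example with a contour-following routine) is assumed to happen upstream, and the names are illustrative.

import numpy as np

# 8-direction Freeman chain code: step (dr, dc) from one boundary pixel to
# the next, directions numbered 0..7 counter-clockwise starting at "east".
DIRECTIONS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
              (0, -1), (1, -1), (1, 0), (1, 1)]

def freeman_chain(boundary):
    """Chain code of an ordered, 8-connected list of boundary pixels (r, c)."""
    return [DIRECTIONS.index((r1 - r0, c1 - c0))
            for (r0, c0), (r1, c1) in zip(boundary, boundary[1:])]

def average_edge_chain_value(boundary) -> float:
    """Average edge chain code value of one region's boundary."""
    codes = freeman_chain(boundary)
    return float(np.mean(codes)) if codes else 0.0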
Wherein, the attention degree of each pixel area is obtained according to the following steps:
acquiring the difference between the average coding value of each pixel region and the average coding value of the corresponding tag region; obtaining the difference between the average edge chain code value of each pixel region and the average edge chain code value of the corresponding label region;
and adding the difference between the average coding value of each pixel region and the average coding value of the corresponding label region to the difference between the average edge chain code value of each pixel region and the average edge chain code value of the corresponding label region, multiplying the sum by the ratio of the number of pixel points in each pixel region to the number of all pixel points in the labeled coded image, and obtaining the attention degree of each pixel region.
In the present embodiment, the attention degree calculation formula of each pixel region is as follows:
Z_i = Norm[ ( |F_i - F'_i| + |M_i - M'_i| ) * n_i / N ]

where Z_i represents the attention degree of the i-th pixel region; F'_i is the average edge chain code value of the label region corresponding to the i-th pixel region, i.e. the average chain code value before the label is attached, which indicates the state of the label region when it is not labeled (if the calculated value is 4, the state value of the edge chain code of the label region is 4); F_i is the average edge chain code value of the i-th pixel region, i.e. the average chain code of the corresponding pixel region after the wine bottle has been labeled (if the calculated value is 6, the average edge chain code value of the pixel region is 6); once the label is skewed, the average edge chain code value changes, so |F_i - F'_i| reflects the change of the average edge chain code value before and after labeling. M'_i represents the average code value of the label region corresponding to the i-th pixel region, and M_i represents the average code value of the i-th pixel region; after the label is attached, the gray values of the corresponding pixel region change, so |M_i - M'_i| reflects the change of the code values before and after labeling. n_i represents the number of pixel points of the i-th pixel region and N represents the number of pixel points in the labeled coded image; the sum is multiplied by the ratio of the pixel points of the pixel region to the number of pixel points in the whole labeled coded image because the label position is originally fixed and the number of pixel points it covers is also fixed, so the more pixel points change once the label is skewed, the greater the attention degree. Norm[ ] represents a normalization function whose normalized values lie in [0, 1].
It should be noted that the code value of a label region before labeling is fixed, while the code value of the corresponding pixel region after labeling changes; the code value of the pixel region relative to its label region changes, and the chain code changes as well, so the attention degree of the pixel region is obtained from the change of the code value and the change of the chain code, and the attention degree of an unchanged region is 0. For this reason, in the present embodiment, the attention degree of the different regions is obtained from the changes of the code values and the edge chain codes in the coded image: the greater the change of the code value and chain code, the greater the attention degree of the region, which means the label is more skewed.
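Using the notation of the reconstructed formula above, the attention degree of one pixel region can be computed as in the sketch below; treating Norm[ ] as a min-max normalization over all regions is one plausible reading and an assumption, as are the function names.

import numpy as np

def raw_attention(avg_code, avg_code_tag, avg_chain, avg_chain_tag,
                  n_region, n_total):
    """Un-normalized attention: (|M_i - M'_i| + |F_i - F'_i|) * n_i / N."""
    return (abs(avg_code - avg_code_tag)
            + abs(avg_chain - avg_chain_tag)) * (n_region / n_total)

def normalize_attention(raw_scores):
    """Min-max normalize the raw scores of all pixel regions to [0, 1]."""
    scores = np.asarray(raw_scores, dtype=float)
    span = scores.max() - scores.min()
    if span == 0.0:
        return np.zeros_like(scores)
    return (scores - scores.min()) / span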
S5, obtaining the importance degree of each pixel region according to the attention degree of each pixel region and the edge difference of the pixel regions in the coded images of the continuous multi-frame labeling.
The importance degree of each pixel region is obtained from its attention degree. After a label is attached, the wine bottle may shake during conveyance, which changes the code values of the bottle region; such changes caused by shaking are not defects of the product in the production process, so the regions are further assessed according to the changes across consecutive frames. Starting from the labeled video frame image, five further frames are taken, and the importance degree of each region is obtained by comparing the changes across the coded images corresponding to these five frames; the five consecutive frames are five collected video frame images of the labeled wine bottle that contain only the bottle region.
The edge difference of the pixel areas in the coded images of the continuous multi-frame labeling is obtained according to the following steps:
taking the sum of the differences of the average edge chain code values of the pixel areas in all adjacent two frames of labeled coded images in the continuous multi-frame labeled coded images as the edge differences of the pixel areas in the continuous multi-frame labeled coded images;
the coded image of continuous multi-frame labeling refers to a coded image which comprises a video frame image when the labeled first appears in the monitoring video and continuous multi-frame labeling corresponding to the multi-frame video frame image after the labeled first appears in the monitoring video.
Specifically, the importance degree of each pixel region is obtained according to the following steps:
and taking the product of the attention degree of each pixel area and the value obtained by carrying out negative correlation mapping and normalization on the absolute value of the sum value as the importance degree of each pixel area.
In the present embodiment, the importance degree calculation formula of each pixel region is as follows:
W_i = Z_i * exp( - | Σ_{j=1..5} ( F_i^(j+1) - F_i^(j) ) | )

where W_i represents the importance degree of the i-th pixel region; Z_i represents the attention degree of the i-th pixel region; F_i^(j) is the average edge chain code value of the i-th pixel region in the coded image corresponding to the j-th frame image, and F_i^(j+1) is the average edge chain code value of the i-th pixel region in the coded image corresponding to the (j+1)-th frame image, the (j+1)-th frame being adjacent to the j-th frame, so that F_i^(j+1) - F_i^(j) is the difference of the average edge chain code values of the i-th pixel region between the coded images of adjacent frames. The 5 in the summation represents the number of continuous video frame images extending backwards from the video frame image in which the whole labeled wine bottle first appears in the monitoring video. In this embodiment the variation of the edge chain code value of the whole pixel region caused by shaking of the wine bottle is measured comprehensively through the differences of the average edge chain code value between these consecutive video frame images: because the position of a label is fixed once it is attached to the bottle, the edge chain code value of each pixel region itself does not change and the difference of the average edge chain code value of the i-th pixel region between the coded images of adjacent labeled frames would be 0; shaking of the bottle, however, changes the edge chain code values of the whole pixel region, so the differences of the edge chain code values between the coded images of different frames are not 0 and the summation is not 0. e represents the natural constant, and exp(-| |) applies the negative correlation mapping and normalization to the edge difference of the pixel regions in the coded images of the continuous multi-frame labeling. This factor is multiplied by Z_i because the coded values change to different degrees in different regions, so the obtained attention degrees differ, and the importance degrees differ again according to the region change across the consecutive frames. When the summation is 0, the exponential function takes its maximum value 1; when it is not 0, the value is less than 1. Multiplying this value by the attention degree Z_i of the pixel region gives W_i, the importance degree of the pixel region. For example, when the attention degree of each pixel region is calculated, if the label is not skewed the attention degree is 0, so the importance degree calculated here is also 0; if the label is offset to different degrees, the attention degrees differ and the importance degrees change accordingly.
Therefore, the importance degree of the different pixel regions of a wine bottle conveyed on the belt is obtained from the change of the average edge chain code value across the coded images of consecutive frames, which overcomes the false region changes in the coded values caused by shaking of an object moving on the conveyor belt, so that the regions whose coded values changed because of label skew can be accurately identified: the more the label is skewed, the more the coded values of the region change and the greater the importance degree of the resulting region.
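A sketch of this importance computation, again under the notation of the reconstructed formula: the region's average edge chain code value is tracked over the consecutive labeled coded images and the summed frame-to-frame difference is passed through exp(-| |). The function name and the list-based interface are illustrative assumptions.

import numpy as np

def importance_degree(attention: float, chain_values_per_frame) -> float:
    """W_i = Z_i * exp(-| sum_j (F_i^(j+1) - F_i^(j)) |).

    `chain_values_per_frame` lists the region's average edge chain code
    value in the coded image of each consecutive frame (the frame where
    the labeled bottle first fully appears plus the following frames).
    """
    values = np.asarray(chain_values_per_frame, dtype=float)
    summed_diff = np.diff(values).sum()      # telescopes to last - first
    return float(attention * np.exp(-abs(summed_diff)))

# If the bottle does not shake, every frame-to-frame difference is 0, the
# exponential term is 1 and the importance equals the attention degree;
# shaking makes the sum non-zero and shrinks the importance accordingly.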
S6, compressing and storing the video frame image when the labeled whole first appears in the monitoring video according to the importance degree of each pixel area.
Specifically, compressing and storing the video frame image when the labeled whole first appears in the monitoring video according to the importance degree of each pixel area, including:
converting the coded value in the labeled coded image corresponding to the video frame image when the labeled whole first appears in the monitoring video into binary to acquire an image to be compressed; acquiring the sampling size of each pixel region corresponding to the image to be compressed according to the importance degree of each pixel region; pyramid sampling is carried out on the image to be compressed to obtain a compressed image; storing the compressed image; when pyramid sampling is performed on each pixel area, sampling is performed according to the sampling size of each pixel area, that is, when pyramid sampling is performed on each pixel area corresponding to the image to be compressed, sampling is performed according to the sampling size of each pixel area corresponding to the image to be compressed. In this embodiment, in order to avoid too large image loss, the number of sampling layers is artificially assigned to be 2, so that the sampled image is smaller than the original image in size, i.e. compression is realized.
In this embodiment, the video is compressed by acquiring the importance degrees of the different pixel regions. Because hexadecimal coding is used when processing the video frame image, the hexadecimal codes are first converted to binary; that is, the video frame image in which the whole labeled wine bottle first appears in the monitoring video is converted to binary to obtain the image to be compressed (a well-known technique that is not repeated here), so that each pixel consists of a binary number. It should be noted that the image to be compressed corresponds to the video frame image in which the labeled wine bottle first appears in its entirety in the monitoring video; it covers the whole video frame, including the labeled wine-bottle area and the other areas around it, whereas the importance degree of each pixel region is calculated only on the bottle area within the video frame.
It should be noted that, for the same image, compression is achieved by adopting an image pyramid method according to the importance degrees of different areas, pyramid sampling is performed on the image according to the importance degrees of different areas, and the sampling sizes of different areas are different.
For this reason, in the present embodiment, pyramid sampling is performed on an image to be compressed to obtain a compressed image, and the compressed image is stored; when pyramid sampling is performed on each pixel area, sampling is performed according to the sampling size of each pixel area. And the calculation formula of the sampling size of each pixel area is as follows:
S_i = round( 10 * a * exp( - W_i ) )

where W_i represents the importance degree of the i-th pixel region; S_i represents the sampling size of the i-th pixel region; round( ) represents the rounding function; a is the reduction coefficient of column sampling or row sampling under the sampling rule of sampling every other row or every other column; e represents the natural constant; and the purpose of multiplying by 10 is to expand the multiple and prevent the sampling level from being too small, which would fail to achieve compression. When the attention degree of each pixel region is calculated, if the label is offset to different degrees the attention degrees differ and the importance degree changes, and the sampling size of the pixel region is S_i as given above; if the label is not skewed, the attention degree is 0, so the importance degree calculated here is also 0 and the sampling size of the pixel region is 50. That is, in the process of compressing the image to be compressed, the pixel regions inside the labeled wine-bottle area are compressed by lower-layer sampling with their sampling sizes S_i, while the parts of the wine bottle outside the pixel regions and the other surrounding regions are sampled directly at a fixed size: their attention degree is taken as 0, their importance degree is also 0, their corresponding sampling size is 50, and they are compressed by lower-layer sampling. In this way the video frame image in which the whole labeled wine bottle first appears in the monitoring video is compressed with different sampling sizes for different regions: the sampling scale of regions with a large importance degree is smaller, whereas the sampling scale of regions with importance degree 0 is larger and the sampled image is smaller, which effectively ensures that the relevant information of the important regions survives compression while also avoiding a large occupation of storage space.
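Assuming the reconstructed form S_i = round(10 * a * exp(-W_i)) with reduction coefficient a = 5, chosen so that an importance of 0 reproduces the sampling size of 50 stated above (an assumption, not a value fixed by the original text), the per-region sampling can be sketched as follows; whether the sampling size acts as a row/column stride or as a block edge length is likewise not pinned down by the text, and the stride reading below is only one option.

import numpy as np

def sampling_size(importance: float, a: int = 5, scale: int = 10) -> int:
    """S_i = round(scale * a * exp(-W_i)); larger importance -> smaller size."""
    return int(round(scale * a * np.exp(-importance)))

def downsample_block(region: np.ndarray, size: int) -> np.ndarray:
    """Reduce a region by keeping every `size`-th row and column - one
    plausible reading of applying the sampling size within pyramid
    sampling (the text fixes the number of pyramid layers at 2)."""
    return region[::size, ::size]

def compress_frame(binary_image: np.ndarray, region_map: np.ndarray,
                   importances: dict) -> dict:
    """Compress each pixel region of the image to be compressed with its own
    sampling size; regions with id 0 (background / unchanged areas) fall
    back to importance 0 and hence the fixed sampling size 50."""
    compressed = {}
    for region_id in np.unique(region_map):
        rows, cols = np.where(region_map == region_id)
        patch = binary_image[rows.min():rows.max() + 1,
                             cols.min():cols.max() + 1]
        size = sampling_size(importances.get(int(region_id), 0.0))
        compressed[int(region_id)] = downsample_block(patch, size)
    return compressed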
In this embodiment, for the continuously acquired video frame images in which each labeled wine bottle appears in its entirety in the monitoring video, the compression process only needs to judge whether the pixel regions have changed. If a pixel region has changed, it is given its own sampling size; if no pixel region has changed, the whole video frame image is compressed by lower-layer sampling with a sampling size of 50, which achieves a higher degree of compression and greatly saves storage space. For the changed pixel regions, using the importance degrees of the different regions ensures that no important region information is lost when the video is compressed, so the complete video image can be retained.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (7)

1. A data storage method for an automated white spirit production system, comprising the steps of:
acquiring an unlabeled image of each wine bottle and the video frame image in which the labeled wine bottle first appears in its entirety in the monitoring video; respectively encoding the unlabeled image and the video frame image to obtain an unlabeled coded image and a labeled coded image;
presetting a plurality of label regions on the unlabeled coded image; differencing the unlabeled coded image and the labeled coded image to obtain a plurality of pixel regions in the labeled coded image whose coding has changed relative to the unlabeled coded image; and acquiring the label region corresponding to each pixel region according to the area of each pixel region and the area of each label region;
acquiring the attention degree of each pixel region according to the average coding value and the average edge chain code value of each pixel region and the average coding value and the average edge chain code value of the corresponding label region; acquiring the importance degree of each pixel region according to the attention degree of each pixel region and the edge difference of the pixel regions in the coded image of the continuous multi-frame labeling;
the coded image of the continuous multi-frame labeling refers to the coded image which comprises a video frame image when the labeled first appears in the monitoring video and the continuous multi-frame labeling corresponding to the multi-frame video frame image after the labeled first appears in the monitoring video;
and compressing and storing the video frame image when the labeled whole first appears in the monitoring video according to the importance degree of each pixel area.
2. The data storage method for an automated white spirit producing system according to claim 1, wherein the attention of each pixel area is acquired according to the following steps:
acquiring the difference between the average coding value of each pixel region and the average coding value of the corresponding tag region; obtaining the difference between the average edge chain code value of each pixel region and the average edge chain code value of the corresponding label region;
and adding the difference between the average coding value of each pixel region and the average coding value of the corresponding label region to the difference between the average edge chain code value of each pixel region and the average edge chain code value of the corresponding label region, multiplying the sum by the ratio of the number of pixel points in each pixel region to the number of all pixel points in the labeled coded image, and obtaining the attention degree of each pixel region.
3. The data storage method for an automated white spirit producing system according to claim 1, wherein the edge differences of the pixel areas in the coded images of the consecutive multi-frame labeling are obtained by:
and taking the sum of the differences of the average edge chain code values of the pixel areas in all adjacent two frames of labeled coded images in the continuous multi-frame labeled coded images as the edge differences of the pixel areas in the continuous multi-frame labeled coded images.
4. A data storage method for an automated white spirit producing system according to claim 3, wherein the importance level of each pixel area is obtained by:
and taking the product of the attention degree of each pixel area and the value obtained by carrying out negative correlation mapping and normalization on the absolute value of the sum value as the importance degree of each pixel area.
5. The data storage method for an automated white spirit producing system according to claim 1, wherein the label region corresponding to each pixel region is obtained by: taking the label region whose area equals the area of the pixel region as the label region corresponding to that pixel region.
6. The data storage method for an automated white spirit production system according to claim 1, wherein the unlabeled coded image and the labeled coded image are obtained by:
and respectively acquiring the unlabeled coded image and the labeled coded image by adopting hexadecimal coding on the gray value of each pixel point in the unlabeled image and the labeled video frame image.
7. The data storage method for an automated white spirit production system according to claim 1, wherein compressing and storing the video frame image when the labeled whole first appears in the monitoring video according to the importance degree of each pixel region, comprises:
converting the coded value in the labeled coded image corresponding to the video frame image when the labeled whole first appears in the monitoring video into binary to acquire an image to be compressed;
acquiring the sampling size of each pixel region corresponding to the image to be compressed according to the importance degree of each pixel region;
pyramid sampling is carried out on the image to be compressed to obtain a compressed image; storing the compressed image;
when pyramid sampling is performed on each pixel area, sampling is performed according to the sampling size of each pixel area.
CN202310201820.6A 2023-03-06 2023-03-06 Data storage method for automatic white spirit production system Active CN116055722B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310201820.6A CN116055722B (en) 2023-03-06 2023-03-06 Data storage method for automatic white spirit production system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310201820.6A CN116055722B (en) 2023-03-06 2023-03-06 Data storage method for automatic white spirit production system

Publications (2)

Publication Number Publication Date
CN116055722A true CN116055722A (en) 2023-05-02
CN116055722B CN116055722B (en) 2023-06-16

Family

ID=86123985

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310201820.6A Active CN116055722B (en) 2023-03-06 2023-03-06 Data storage method for automatic white spirit production system

Country Status (1)

Country Link
CN (1) CN116055722B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009017502A (en) * 2006-08-08 2009-01-22 Canon Inc Image encoding apparatus and image decoding apparatus
JP2011250117A (en) * 2010-05-26 2011-12-08 Konica Minolta Business Technologies Inc Image coding method, image coding device and program
JP2013143609A (en) * 2012-01-10 2013-07-22 Konica Minolta Inc Image processing device, coding method and decoding method
CN105392012A (en) * 2015-10-28 2016-03-09 清华大学深圳研究生院 Rate distribution method and device based on region chain code
CN115019111A (en) * 2022-08-05 2022-09-06 天津艺点意创科技有限公司 Data processing method for Internet literary composition creation works
CN115297289A (en) * 2022-10-08 2022-11-04 南通第二世界网络科技有限公司 Efficient storage method for monitoring video
CN115601688A (en) * 2022-12-15 2023-01-13 中译文娱科技(青岛)有限公司(Cn) Video main content detection method and system based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
RICCARDO DE LUTIO: "Learning graph regularisation for guided super-resolution", 《2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION(CVPR)》 *
HONG Fei, WANG Jun, WU Zhimei: "Application of edge operators in video object extraction", Journal of Computer-Aided Design & Computer Graphics, no. 01

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117831744A (en) * 2024-03-06 2024-04-05 大连云间来客科技有限公司 Remote monitoring method and system for critically ill patients
CN117831744B (en) * 2024-03-06 2024-05-10 大连云间来客科技有限公司 Remote monitoring method and system for critically ill patients

Also Published As

Publication number Publication date
CN116055722B (en) 2023-06-16

Similar Documents

Publication Publication Date Title
WO2023134791A2 (en) Environmental security engineering monitoring data management method and system
CN113158738B (en) Port environment target detection method, system, terminal and readable storage medium based on attention mechanism
CN106231214A (en) High-speed cmos sensor image based on adjustable macro block approximation lossless compression method
CN111681273A (en) Image segmentation method and device, electronic equipment and readable storage medium
CN115914649A (en) Data transmission method and system for medical video
CN115019111B (en) Data processing method for Internet literary composition
CN115082483B (en) Glass fiber board surface defect identification method based on optical camera
CN115456868B (en) Data management method for fire drill system
CN112883795B (en) Rapid and automatic table extraction method based on deep neural network
CN111079734A (en) Method for detecting foreign matters in triangular holes of railway wagon
CN116156196B (en) Efficient transmission method for video data
CN111461147B (en) Binary coding organization algorithm based on image features
CN114882039A (en) PCB defect identification method applied to automatic PCB sorting process
CN115695821A (en) Image compression method and device, image decompression method and device, and storage medium
CN116055722B (en) Data storage method for automatic white spirit production system
CN111723735B (en) Pseudo high bit rate HEVC video detection method based on convolutional neural network
CN104809747B (en) The statistical method and its system of image histogram
CN113192018A (en) Water-cooled wall surface defect video identification method based on fast segmentation convolutional neural network
CN112950599A (en) Large intestine cavity area and intestine content labeling method based on deep learning
CN116664431B (en) Image processing system and method based on artificial intelligence
CN112532938A (en) Video monitoring system based on big data technology
CN117115468B (en) Image recognition method and system based on artificial intelligence
CN112651260B (en) Method and system for converting self-adaptive discrete codes into continuous codes
CN117152142B (en) Bearing defect detection model construction method and system
CN116452794B (en) Directed target detection method based on semi-supervised learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant