CN114445446A - Equipment information statistical method based on computer vision - Google Patents

Equipment information statistical method based on computer vision

Info

Publication number
CN114445446A
CN114445446A (application CN202111574075.7A)
Authority
CN
China
Prior art keywords
image
equipment
pixel
historical
bar code
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111574075.7A
Other languages
Chinese (zh)
Inventor
马标 (Ma Biao)
王雷 (Wang Lei)
朱坚 (Zhu Jian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujia Newland Software Engineering Co ltd
Original Assignee
Fujia Newland Software Engineering Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujia Newland Software Engineering Co ltd filed Critical Fujia Newland Software Engineering Co ltd
Priority to CN202111574075.7A priority Critical patent/CN114445446A/en
Publication of CN114445446A publication Critical patent/CN114445446A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing, using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 Methods for optical code recognition
    • G06K7/1408 Methods for optical code recognition, the method being specifically adapted for the type of code
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/60 Rotation of whole images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a device information statistical method based on computer vision, which belongs to the technical field of image recognition and comprises the following steps: step S10, acquiring a large number of historical device images, and constructing a device image set after renaming and labeling each historical device image; step S20, preprocessing each historical device image in the device image set; step S30, creating a bar code recognition model based on the YOLOv5 algorithm, and training the bar code recognition model with the preprocessed device image set; step S40, after preprocessing a new device image to be counted, inputting it into the trained bar code recognition model to obtain a bar code image; and step S50, identifying the device information carried by the bar code image through the zbar library, and automatically performing device information statistics based on that information. The invention has the advantage that the efficiency, accuracy and traceability of device information statistics are greatly improved.

Description

Equipment information statistical method based on computer vision
Technical Field
The invention relates to the technical field of image recognition, in particular to an equipment information statistical method based on computer vision.
Background
Many companies need to compile statistics at the end of each month on running and inventory devices such as security devices, set top boxes and routers, in order to check the condition of those devices and to facilitate troubleshooting of abnormal device states, unified management and unified scheduling. Since each device carries barcode information that uniquely identifies it, the device's barcode information is generally what is counted as device information.
In the existing device information statistical method, statistical staff manually record the barcode information of running and inventory devices at the end of each month, which has the following defects: 1. because running devices are deployed according to actual needs, their positions and quantity cannot be specified in advance; they are widely dispersed and numerous, so manual counting wastes a great deal of time; 2. during counting, the staff may encounter situations such as device position changes, or the coded digits corresponding to a device barcode being lost or blurred, so the current device information cannot be identified accurately and effectively; missed, erroneous and repeated counts then occur, reducing the accuracy of the statistics; 3. manual recording captures only the device's barcode information and not its appearance, so when repeated or missed counts occur the devices cannot be rechecked promptly and effectively.
Therefore, providing a device information statistical method based on computer vision that improves the efficiency, accuracy and traceability of device information statistics has become a technical problem to be solved urgently.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a device information statistical method based on computer vision, so that the efficiency, accuracy and traceability of device information statistics are improved.
The invention is realized in the following way: a device information statistical method based on computer vision comprises the following steps:
step S10, acquiring a large number of historical device images, and constructing a device image set after renaming and labeling each historical device image;
step S20, preprocessing each historical device image in the device image set;
s30, creating a bar code recognition model based on a YOLOV5 algorithm, and training the shape code recognition model by utilizing the preprocessed equipment image set;
step S40, after preprocessing each new device image to be counted, inputting it into the trained bar code recognition model to obtain a bar code image;
and step S50, identifying the device information carried by the bar code image through the zbar library, and automatically performing device information statistics based on that information.
Further, the step S10 is specifically:
acquiring a large number of historical device images, renaming them according to the format "device area + device type + shooting time", and labeling the renamed images with the LabelImg labeling tool to construct a device image set.
Further, the step S20 specifically includes:
step S21, determining an offset angle for each historical device image by using minimum bounding rectangles, and rotating each image by its offset angle;
step S22, cropping the history device image based on the cropping coefficient:
the cropped width is: w * ratio1;
the cropped height is: h * ratio2;
where w and h are the width and height of the historical device image before cropping, and ratio1 and ratio2 are cropping coefficients set according to the proportion of the image that the device occupies;
step S23, carrying out gray scale processing on the clipped historical device image;
step S24, performing binarization processing on the history device image after the gray level processing;
step S25, performing morphological processing of expansion and corrosion on the history equipment image after binarization processing;
and step S26, performing image enhancement processing on the history equipment image after the morphological processing.
Further, the step S21 is specifically:
setting an upper and a lower limit on rectangle size, finding the contours of the objects in the historical device image, drawing the minimum bounding rectangle of each contour, and eliminating contours whose minimum bounding rectangle is larger than the upper limit or smaller than the lower limit;
counting the first offset angles of the minimum bounding rectangles of the remaining contours, and taking the most frequently repeated first offset angle as the second offset angle of the historical device image;
rotating the historical device image based on the second offset angle.
Further, the step S23 is specifically:
carrying out gray scale processing on the cropped historical device image by channel averaging:
gray = (r + g + b) / 3;
where gray is the pixel's grey value after processing, and r, g and b are the pixel values of the red, green and blue channels before processing.
Further, the step S24 specifically includes:
step S241, let v_i denote the pixel values occurring in the grey-scale historical device image and h(v_i) the number of pixels with value v_i; the initial pixel threshold T is:
T = sum / ∑h(v_i);
sum = ∑ v_i * h(v_i);
where sum is the sum of the values of all pixels;
step S242, taking pixels with value greater than T as background pixels u_i and pixels with value less than T as foreground pixels d_i, with h(u_i) the number of background pixels of value u_i and h(d_i) the number of foreground pixels of value d_i, the background mean A_a and foreground mean A_b are:
A_a = ∑(u_i * h(u_i)) / ∑h(u_i);
A_b = ∑(d_i * h(d_i)) / ∑h(d_i);
step S243, setting a new threshold T_n based on A_a and A_b:
T_n = (A_a + A_b) / 2;
step S244, judging whether T_n equals T; if yes, go to step S245; if not, assign the value of T_n to T and return to step S242;
and step S245, setting pixels of the historical device image with value less than T to 0 and pixels with value greater than T to 255, completing the binarization.
Further, the step S26 specifically includes:
step S261, adjusting the brightness of the morphologically processed historical device image with a brightness adjustment formula:
dst_i = src_i * w + srcb_i * (1 - w) + k;
where dst_i is the pixel channel value after brightness adjustment; src_i is the pixel channel value of the historical device image before adjustment; srcb_i is the pixel channel value of an all-black image (all pixel values 0) generated at the same size as the historical device image; w is a weight coefficient; and k is a correction coefficient;
step S262, sharpening the brightness-adjusted historical device image by first-order derivation;
step S263, adjusting the contrast of the sharpened historical device image through a gamma transform formula:
gs = c * r^γ;
where gs is the pixel value of the historical device image after the gamma transform; c is a constant with value 255; r is the grey-scale pixel value; and γ is the gamma transform coefficient.
Further, the step S30 specifically includes:
step S31, creating a bar code recognition model based on the YOLOv5 algorithm, and setting the model's training parameter group, accuracy threshold and training-count threshold;
step S32, dividing the device image set into a training set, a verification set and a test set based on a preset proportion;
step S33, training the bar code recognition model by using the training set, and recording the training times;
step S34, verifying the trained bar code recognition model with the verification set, and judging whether the recognition accuracy exceeds the accuracy threshold or the number of training rounds exceeds the count threshold; if yes, go to step S35; if not, adjust the training parameter group and return to step S33;
and step S35, testing the bar code identification model with the highest identification accuracy by using the test set to obtain a test result, and storing the bar code identification model, the training parameter group and the test result.
Further, the step S40 is specifically:
after preprocessing including renaming, rotation, cropping, grey-scale processing, binarization, morphological processing and image enhancement, the new device images to be counted are input into the trained bar code recognition model to obtain bar code images.
Further, the step S50 specifically includes:
step S51, identifying the device information and the number of barcode digits carried by the bar code image through the zbar library, and obtaining the corresponding device area, device type and shooting time from the file name of the new device image;
step S52, verifying the device type against the number of barcode digits;
and step S53, saving the device information, device area, device type and shooting time as a CSV file, and automatically performing device information statistics based on that file.
The invention has the advantages that:
1. By creating and training a bar code recognition model, using it to locate the bar code image within each new device image, reading the device information carried by that bar code image through the zbar library, and compiling statistics from it automatically, device information statistics are produced simply by collecting images. Compared with traditional manual counting, this greatly improves the efficiency of device information statistics and greatly reduces missed, erroneous and repeated counts.
2. Rotating the image corrects the angle of the bar code within it, avoiding recognition failures caused by the shooting angle; binarizing the image by the iterative method separates the bar code from the background more cleanly; adjusting brightness with the brightness adjustment formula, sharpening by first-order derivation and adjusting contrast with the gamma transform formula further greatly improve the image's clarity; and building the bar code recognition model on the YOLOv5 algorithm gives it higher recognition speed and precision, so that the accuracy of device information statistics is greatly improved.
3. The device information statistics is automatically carried out by collecting the images, and when repeated statistics and missing statistics of the device occur, the device information statistics traceability can be greatly improved by rechecking the images.
4. Bar code images are decoded through the zbar library, which, compared with single-symbology decoders such as CODE39, EAN8 and EAN13, can recognize many types of barcode, greatly improving the robustness of barcode recognition.
Drawings
The invention will be further described with reference to the following examples with reference to the accompanying drawings.
FIG. 1 is a flow chart of a device information statistics method based on computer vision according to the present invention.
Detailed Description
The technical scheme in the embodiment of the application has the following general idea: identifying the bar code image in the new device image through a bar code recognition model, identifying the device information carried by the bar code image through the zbar library, and automatically performing device information statistics based on that information, so as to improve the efficiency of device information statistics; correcting the angle of the bar code by rotating the image, better separating the bar code from the background by binarizing the image with an iterative method, improving the clarity of the image by adjusting its brightness and contrast and sharpening it, and creating the bar code recognition model based on the YOLOv5 algorithm to improve recognition speed and precision, so as to improve the accuracy of device information statistics; and performing device information statistics automatically by collecting images, so as to improve the traceability of device information statistics.
Referring to fig. 1, a preferred embodiment of an apparatus information statistical method based on computer vision according to the present invention includes the following steps:
step S10, acquiring a large number of historical device images, and constructing a device image set after renaming and labeling each historical device image;
step S20, preprocessing each historical device image in the device image set;
s30, creating a bar code recognition model based on a YOLOV5 algorithm, and training the shape code recognition model by utilizing the preprocessed equipment image set; storing the model file and the training parameters in the training process;
step S40, after preprocessing each new device image to be counted, inputting it into the trained bar code recognition model to obtain a bar code image; because the image is recognized by the trained model, it can still be identified accurately even if the bar code in it is partially missing or blurred;
and step S50, identifying the device information carried by the bar code image through the zbar library, and automatically performing device information statistics based on that information.
The method comprises the steps of detecting a bar code image in a new equipment image by using an image detection technology based on computer vision, and identifying equipment information from the bar code image for statistics.
The step S10 specifically includes:
acquiring a large number of historical device images, renaming them according to the format "device area + device type + shooting time", and labeling the renamed images with the LabelImg labeling tool to construct a device image set.
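As a concrete illustration of the naming scheme, the sketch below builds and parses file names in the "device area + device type + shooting time" format; the underscore separator, the timestamp layout and the helper names are assumptions for illustration, not taken from the patent.

```python
from datetime import datetime
from pathlib import Path

def build_image_name(area: str, dev_type: str, shot_at: datetime, ext: str = ".jpg") -> str:
    """Compose a file name in the "device area + device type + shooting time"
    format of step S10 (separator and timestamp layout are assumed)."""
    return f"{area}_{dev_type}_{shot_at:%Y%m%d%H%M%S}{ext}"

def parse_image_name(name: str):
    """Recover (area, type, shooting time) from a renamed image, as step S51
    later reads these fields back out of the file name."""
    area, dev_type, stamp = Path(name).stem.split("_")
    return area, dev_type, datetime.strptime(stamp, "%Y%m%d%H%M%S")

name = build_image_name("AreaA", "router", datetime(2021, 12, 1, 9, 30, 0))
# name == "AreaA_router_20211201093000.jpg"
```

Whatever separator is chosen, the parsing side must mirror the building side exactly, or step S51 cannot recover the device area, type and shooting time.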
The step S20 specifically includes:
step S21, determining the offset angle of each historical device image by using the minimum circumscribed rectangle, and rotating the historical device images based on the offset angle, namely correcting the angle of the barcode region;
step S22, cropping the history device image based on the cropping coefficient:
the cropped width is: w * ratio1;
the cropped height is: h * ratio2;
where w and h are the width and height of the historical device image before cropping, and ratio1 and ratio2 are cropping coefficients set according to the proportion of the image that the device occupies; cropping speeds up subsequent image recognition;
step S23, carrying out gray scale processing on the clipped historical device image;
step S24, performing binarization processing on the historical device image after gray processing by using an iterative method, so that the foreground and the background can be better distinguished;
step S25, performing morphological processing of expansion and corrosion on the history equipment image after binarization processing, and removing irrelevant information in the history equipment image;
and step S26, performing image enhancement processing on the history equipment image after the morphological processing so as to improve the bar code recognition effect.
By preprocessing the images of the historical devices, the image processing speed is effectively improved, and irrelevant interference is reduced.
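A minimal sketch of the step-S22 crop, assuming the retained window is centred (the patent fixes only the cropped size w * ratio1 by h * ratio2, not its position); the image is a plain nested list so the idea stays library-free.

```python
def center_crop(img, ratio1, ratio2):
    """Keep a window of width w*ratio1 and height h*ratio2 (step S22);
    centring the window is an assumption for illustration."""
    h, w = len(img), len(img[0])
    new_w, new_h = int(w * ratio1), int(h * ratio2)
    x0, y0 = (w - new_w) // 2, (h - new_h) // 2
    return [row[x0:x0 + new_w] for row in img[y0:y0 + new_h]]

img = [[0] * 200 for _ in range(100)]   # 100 rows x 200 columns
cropped = center_crop(img, 0.5, 0.8)    # 80 rows x 100 columns
```

In practice ratio1 and ratio2 would be tuned so the device, and hence its bar code, always survives the crop.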
The step S21 specifically includes:
setting an upper and a lower limit on rectangle size, finding the contours of the objects in the historical device image, drawing the minimum bounding rectangle of each contour, and eliminating contours whose minimum bounding rectangle is larger than the upper limit or smaller than the lower limit, which reduces interference from contours unrelated to the bar code region;
counting the first offset angles of the minimum bounding rectangles of the remaining contours, and taking the most frequently repeated first offset angle as the second offset angle of the historical device image;
rotating the historical device image based on the second offset angle.
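The "most repeated first offset angle" vote of step S21 can be sketched as below. The (angle, area) pairs stand in for minimum bounding rectangles that, in practice, would come from a routine such as OpenCV's cv2.minAreaRect; the function name and the fallback of 0.0 for an empty set are assumptions.

```python
from collections import Counter

def dominant_offset_angle(rects, lower, upper):
    """rects: (first offset angle, area) pairs, one per contour's minimum
    bounding rectangle. Rectangles outside [lower, upper] are eliminated,
    and the most frequent remaining angle becomes the image's second
    offset angle (step S21)."""
    kept = [angle for angle, area in rects if lower <= area <= upper]
    return Counter(kept).most_common(1)[0][0] if kept else 0.0

# three bar-code-bar contours leaning 3 degrees, plus one tiny noise contour
angle = dominant_offset_angle([(3.0, 50), (3.0, 60), (3.0, 55), (45.0, 5)], 10, 100)
# the image would then be rotated by -angle to square up the bar code
```

Voting by the mode, rather than averaging, keeps a single oddly oriented contour from skewing the deskew angle.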
The step S23 specifically includes:
carrying out gray scale processing on the cropped historical device image by channel averaging:
gray = (r + g + b) / 3;
where gray is the pixel's grey value after processing, and r, g and b are the pixel values of the red, green and blue channels before processing.
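The channel-averaging formula of step S23 in code, over a nested list of (r, g, b) tuples; integer division is used so the result stays an 8-bit pixel value, a small deviation from the exact division by 3.

```python
def to_gray(rgb_img):
    """gray = (r + g + b) / 3 per step S23 (// keeps integer pixels)."""
    return [[(r + g + b) // 3 for r, g, b in row] for row in rgb_img]

gray = to_gray([[(255, 0, 0), (30, 60, 90)]])   # -> [[85, 60]]
```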
The step S24 specifically includes:
step S241, let v_i denote the pixel values occurring in the grey-scale historical device image and h(v_i) the number of pixels with value v_i; the initial pixel threshold T is:
T = sum / ∑h(v_i);
sum = ∑ v_i * h(v_i);
where sum is the sum of the values of all pixels;
step S242, taking pixels with value greater than T as background pixels u_i and pixels with value less than T as foreground pixels d_i, with h(u_i) the number of background pixels of value u_i and h(d_i) the number of foreground pixels of value d_i, the background mean A_a and foreground mean A_b are:
A_a = ∑(u_i * h(u_i)) / ∑h(u_i);
A_b = ∑(d_i * h(d_i)) / ∑h(d_i);
step S243, setting a new threshold T_n based on A_a and A_b:
T_n = (A_a + A_b) / 2;
step S244, judging whether T_n equals T; if yes, go to step S245; if not, assign the value of T_n to T and return to step S242;
and step S245, setting pixels of the historical device image with value less than T to 0 (black) and pixels with value greater than T to 255 (white), completing the binarization while preserving the bar code region as fully as possible.
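Steps S241 to S245 describe the classic iterative (isodata-style) threshold. A library-free sketch over a flat list of pixel values follows; testing convergence to a small tolerance in place of exact equality, and sending pixels exactly equal to T to the foreground, are implementation assumptions the patent leaves open.

```python
def iterative_threshold(pixels):
    """Steps S241-S244: start from the global mean, split into background
    and foreground, and iterate with the midpoint of the two class means
    until the threshold stops changing."""
    t = sum(pixels) / len(pixels)            # initial threshold T
    while True:
        back = [v for v in pixels if v > t]  # background: values > T
        fore = [v for v in pixels if v <= t] # foreground: values <= T
        a_a = sum(back) / len(back) if back else t
        a_b = sum(fore) / len(fore) if fore else t
        t_new = (a_a + a_b) / 2              # Tn = (Aa + Ab) / 2
        if abs(t_new - t) < 1e-6:            # Tn == T -> converged
            return t_new
        t = t_new

def binarize(pixels, t):
    """Step S245: below the threshold -> 0 (black), above -> 255 (white)."""
    return [0 if v < t else 255 for v in pixels]

t = iterative_threshold([10, 10, 10, 200, 200, 200])   # converges to 105.0
bw = binarize([10, 10, 10, 200, 200, 200], t)          # [0, 0, 0, 255, 255, 255]
```

On a bar code image the two classes are the dark bars and the light background, so the converged threshold lands between them and the bars survive binarization cleanly.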
The step S26 specifically includes:
step S261, adjusting the brightness of the morphologically processed historical device image with a brightness adjustment formula:
dst_i = src_i * w + srcb_i * (1 - w) + k;
where dst_i is the pixel channel value after brightness adjustment; src_i is the pixel channel value of the historical device image before adjustment; srcb_i is the pixel channel value of an all-black image (all pixel values 0) generated at the same size as the historical device image; w is a weight coefficient, preferably 1.2; and k is a correction coefficient, preferably 3;
step S262, sharpening the brightness-adjusted historical device image by first-order derivation;
step S263, adjusting the contrast of the sharpened historical device image through a gamma transform formula:
gs = c * r^γ;
where gs is the pixel value of the historical device image after the gamma transform; c is a constant with value 255; r is the grey-scale pixel value, in the range [0, 1]; and γ is the gamma transform coefficient, preferably 1.5.
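Because the blended image srcb is all zeros, step S261's formula reduces to dst_i = src_i * w + k, and step S263 is gs = c * r^γ with r scaled into [0, 1]. The sketch below uses the preferred values w = 1.2, k = 3 and γ = 1.5; clamping the brightened value into the 0-255 range is an added assumption.

```python
def adjust_brightness(channel_vals, w=1.2, k=3):
    """Step S261: dst_i = src_i*w + srcb_i*(1-w) + k with srcb_i = 0,
    clamped to the valid 8-bit range (clamping is an assumption)."""
    return [min(255, max(0, round(v * w + 0 * (1 - w) + k))) for v in channel_vals]

def gamma_transform(gray_vals, gamma=1.5, c=255):
    """Step S263: gs = c * r**gamma, with r the grey value scaled to [0, 1]."""
    return [round(c * (v / 255) ** gamma) for v in gray_vals]

bright = adjust_brightness([100, 250])   # -> [123, 255]
contrast = gamma_transform([0, 255])     # -> [0, 255]
```

With γ > 1 the transform darkens mid-tones while leaving black and white fixed, which stretches the contrast between bar code bars and background.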
The step S30 specifically includes:
step S31, creating a bar code recognition model based on the YOLOv5 algorithm, and setting its training parameter group, accuracy threshold and training-count threshold; the YOLOv5 algorithm offers high recognition speed and precision; the count threshold is preferably 3; the training parameter group comprises a plurality of hyper-parameters;
step S32, dividing the device image set into a training set, a verification set and a test set based on a preset proportion;
step S33, training the bar code recognition model by using the training set, and recording the training times;
step S34, verifying the trained bar code recognition model with the verification set, and judging whether the recognition accuracy exceeds the accuracy threshold or the number of training rounds exceeds the count threshold; if yes, go to step S35; if not, adjust the training parameter group and return to step S33;
and step S35, testing the bar code identification model with the highest identification accuracy by using the test set to obtain a test result, and storing the bar code identification model, the training parameter group and the test result.
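The split/train/validate cycle of steps S32 to S34 can be sketched as below. The 8:1:1 ratio, the fixed seed and the stub train/validate callables are assumptions (a real run would drive YOLOv5 training and measure detection accuracy); they only make the control flow concrete.

```python
import random

def split_dataset(names, ratios=(0.8, 0.1, 0.1), seed=42):
    """Step S32: shuffle, then cut into training/validation/test sets
    (the 8:1:1 preset ratio is an assumption)."""
    names = list(names)
    random.Random(seed).shuffle(names)
    n_train = int(len(names) * ratios[0])
    n_val = int(len(names) * ratios[1])
    return names[:n_train], names[n_train:n_train + n_val], names[n_train + n_val:]

def train_until_good(train_fn, validate_fn, count_threshold, acc_threshold):
    """Steps S33-S34: train, validate, and stop once the accuracy threshold
    is passed or the training-count threshold is reached."""
    best_acc, rounds = 0.0, 0
    while rounds < count_threshold:
        train_fn()                               # one training pass (S33)
        rounds += 1
        best_acc = max(best_acc, validate_fn())  # validation (S34)
        if best_acc >= acc_threshold:
            break
        # a real run would adjust the training parameter group here
    return best_acc, rounds

train, val, test = split_dataset([f"img_{i}.jpg" for i in range(100)])
accs = iter([0.62, 0.93])
best, rounds = train_until_good(lambda: None, lambda: next(accs), 3, 0.9)
```

Retaining the best-accuracy model across rounds is what step S35 then tests against the held-out test set.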
The step S40 specifically includes:
after preprocessing including renaming, rotation, cropping, grey-scale processing, binarization, morphological processing and image enhancement, the new device images to be counted are input into the trained bar code recognition model to obtain bar code images.
The step S50 specifically includes:
step S51, identifying the device information and the number of barcode digits carried by the bar code image through the zbar library, and obtaining the corresponding device area, device type and shooting time from the file name of the new device image; the stored image also captures the device's appearance and surrounding environment, which makes later review convenient;
step S52, verifying the device type against the number of barcode digits; for example, the set top box barcode may be specified as 15 digits and the router barcode as 32 digits;
and step S53, saving the device information, device area, device type and shooting time as a CSV file, and automatically performing device information statistics based on that file.
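The digit-count check and CSV record of steps S52 and S53 can be sketched as below. Decoding the barcode itself would be done by the zbar library (for example via pyzbar) in practice; the table of expected digit counts follows the 15/32 example above and would be configured per deployment, and all names here are illustrative.

```python
import csv
import io

# Expected barcode digit counts per device type (from the step-S52 example).
EXPECTED_DIGITS = {"settopbox": 15, "router": 32}

def record_device(writer, barcode, area, dev_type, shot_time):
    """Steps S52-S53 sketch: check the digit count against the device type,
    then append one CSV row with the verification result."""
    ok = EXPECTED_DIGITS.get(dev_type) == len(barcode)
    writer.writerow([barcode, area, dev_type, shot_time, "ok" if ok else "mismatch"])
    return ok

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["barcode", "area", "type", "shot_time", "check"])
ok = record_device(writer, "123456789012345", "AreaA", "settopbox", "20211201093000")
```

Rows flagged "mismatch" point at decoding errors or mislabeled images, which is exactly where the saved source image makes rechecking easy.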
In conclusion, the invention has the advantages that:
1. By creating and training a bar code recognition model, using it to recognize the bar code image in each new device image, reading the device information carried by that bar code image through the zbar library, and compiling statistics from it automatically, device information statistics are produced simply by collecting images.
2. Rotating the image corrects the angle of the bar code within it, avoiding recognition failures caused by the shooting angle; binarizing the image by the iterative method separates the bar code from the background more cleanly; adjusting brightness with the brightness adjustment formula, sharpening by first-order derivation and adjusting contrast with the gamma transform formula greatly improve the image's clarity; and building the bar code recognition model on the YOLOv5 algorithm gives it higher recognition speed and precision, so that the accuracy of device information statistics is greatly improved.
3. Device information statistics are carried out automatically by collecting images, and when repeated or missed counts occur, the collected images can be rechecked, greatly improving the traceability of device information statistics.
4. Bar code images are decoded through the zbar library, which, compared with single-symbology decoders such as CODE39, EAN8 and EAN13, can recognize many types of barcode, greatly improving the robustness of barcode recognition.
Although specific embodiments of the invention have been described above, it will be understood by those skilled in the art that the specific embodiments described are illustrative only and are not limiting upon the scope of the invention, and that equivalent modifications and variations can be made by those skilled in the art without departing from the spirit of the invention, which is to be limited only by the appended claims.

Claims (10)

1. A device information statistical method based on computer vision is characterized in that: the method comprises the following steps:
step S10, acquiring a large number of historical device images, and constructing a device image set after renaming and labeling each historical device image;
step S20, preprocessing each historical device image in the device image set;
s30, creating a bar code recognition model based on a YOLOV5 algorithm, and training the shape code recognition model by utilizing the preprocessed equipment image set;
step S40, after preprocessing each new device image to be counted, inputting the trained shape code recognition model to obtain a bar code image;
and S50, identifying the equipment information carried by the barcode image through a zbar library, and automatically counting the equipment information based on the equipment information.
2. The computer vision-based device information statistical method according to claim 1, wherein: the step S10 specifically includes:
acquiring a large number of historical device images, renaming the historical device images according to a format of 'device area + device type + shooting time', and labeling the renamed historical device images by using a LabelImg labeling tool to construct a device image set.
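The renaming convention of claim 2 amounts to simple file-name assembly. A minimal Python sketch, assuming an underscore separator and a YYYYMMDDHHMMSS timestamp (the claim fixes only the "device area + device type + shooting time" order):

```python
import os
from datetime import datetime

def build_image_name(src_path, device_area, device_type, shot_time, dst_dir):
    """Assemble the claim 2 file name '<area>_<type>_<time>' for a raw photo.

    The underscore separator and YYYYMMDDHHMMSS timestamp are assumptions;
    the claim fixes only the field order. An actual rename would then call
    os.rename(src_path, target).
    """
    ext = os.path.splitext(src_path)[1]          # keep the original extension
    stamp = shot_time.strftime("%Y%m%d%H%M%S")
    return os.path.join(dst_dir, f"{device_area}_{device_type}_{stamp}{ext}")

name = build_image_name("IMG_0001.jpg", "substationA", "router",
                        datetime(2021, 12, 21, 9, 30, 0), "dataset")
```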
3. The computer vision-based device information statistical method according to claim 1, wherein: the step S20 specifically includes:
step S21, determining the offset angle of each historical device image by using minimum circumscribed rectangles, and rotating the historical device image based on the offset angle;
step S22, cropping the historical device image based on cropping coefficients:
the width after cropping is: w·ratio1;
the height after cropping is: h·ratio2;
wherein w is the width of the historical device image before cropping; h is the height of the historical device image before cropping; ratio1 and ratio2 are both cropping coefficients, set based on the proportion of the equipment in the historical device images;
step S23, performing grayscale processing on the cropped historical device image;
step S24, performing binarization processing on the grayscale-processed historical device image;
step S25, performing morphological dilation and erosion processing on the binarized historical device image;
and step S26, performing image enhancement processing on the morphologically processed historical device image.
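Of the preprocessing steps above, the cropping of step S22 is the most mechanical. A minimal NumPy sketch, assuming the crop is taken about the image centre (the claim fixes only the cropped width w·ratio1 and height h·ratio2):

```python
import numpy as np

def crop_by_ratio(img, ratio1, ratio2):
    """Crop an image to width w*ratio1 and height h*ratio2 (claim 3, step S22).

    Centering the crop is an assumption; the claim only fixes the output size.
    """
    h, w = img.shape[:2]
    new_w, new_h = int(w * ratio1), int(h * ratio2)
    x0 = (w - new_w) // 2                       # left edge of the centred crop
    y0 = (h - new_h) // 2                       # top edge of the centred crop
    return img[y0:y0 + new_h, x0:x0 + new_w]

img = np.zeros((480, 640), dtype=np.uint8)      # stand-in historical device image
cropped = crop_by_ratio(img, 0.8, 0.5)
```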
4. The computer vision-based device information statistical method according to claim 3, wherein: the step S21 specifically includes:
setting an upper limit and a lower limit on rectangle size, finding the contours of the objects in the historical device image, drawing the minimum circumscribed rectangle of each contour, and eliminating contours whose minimum circumscribed rectangles are larger than the upper limit or smaller than the lower limit;
counting the first offset angles of the minimum circumscribed rectangles of the remaining contours, and taking the most frequently occurring first offset angle as the second offset angle of the historical device image;
and rotating the historical device image based on the second offset angle.
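The angle-voting step of claim 4 can be sketched independently of contour extraction. Obtaining each contour's first offset angle from something like OpenCV's cv2.minAreaRect, and the whole-degree rounding used for voting, are assumptions:

```python
from collections import Counter

def dominant_angle(first_offset_angles):
    """Pick the most frequently occurring first offset angle (claim 4).

    Each per-contour angle would come from e.g. cv2.minAreaRect; rounding
    to whole degrees before voting is an assumed binning granularity.
    """
    rounded = [round(a) for a in first_offset_angles]
    # most_common(1) returns [(angle, count)] for the highest-count angle.
    return Counter(rounded).most_common(1)[0][0]

angles = [-2.1, -1.9, -2.0, 14.0, -2.2, 0.3]    # per-contour rectangle angles
second_offset = dominant_angle(angles)
```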
5. The computer vision-based device information statistical method according to claim 3, wherein: the step S23 specifically includes:
performing grayscale processing on the cropped historical device image by the averaging method:
gray = (r + g + b) / 3;
wherein gray represents the pixel grayscale value after grayscale processing; and r, g and b respectively represent the pixel values of the red, green and blue channels before grayscale processing.
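The averaging method of claim 5, sketched with NumPy; the float intermediate avoids uint8 overflow when summing the three channels:

```python
import numpy as np

def to_gray_mean(img_rgb):
    """Average-method grayscale from claim 5: gray = (r + g + b) / 3."""
    r = img_rgb[..., 0].astype(np.float32)
    g = img_rgb[..., 1].astype(np.float32)
    b = img_rgb[..., 2].astype(np.float32)
    return ((r + g + b) / 3.0).astype(np.uint8)

pixel = np.array([[[90, 120, 150]]], dtype=np.uint8)   # a single RGB pixel
gray = to_gray_mean(pixel)
```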
6. The computer vision-based device information statistical method according to claim 3, wherein: the step S24 specifically includes:
step S241, letting the pixel value of each pixel point in the grayscale-processed historical device image be v_i, and h(v_i) be the number of pixel points with pixel value v_i; the initial pixel threshold T is:
T = sum / Σh(v_i);
sum = Σv_i·h(v_i);
wherein sum represents the sum of the pixel values of all pixel points;
step S242, taking pixel points with pixel values greater than the initial pixel threshold T as background pixel points u_i, and pixel points with pixel values smaller than the initial pixel threshold T as foreground pixel points d_i; with h(u_i) the number of pixel points u_i and h(d_i) the number of pixel points d_i, the background mean A_a and the foreground mean A_b are:
A_a = Σ(u_i·h(u_i)) / Σh(u_i);
A_b = Σ(d_i·h(d_i)) / Σh(d_i);
step S243, setting a new threshold T_n based on the background mean A_a and the foreground mean A_b:
T_n = (A_a + A_b) / 2;
step S244, judging whether T_n = T; if yes, proceeding to step S245; if not, assigning the value of T_n to T and returning to step S242;
and step S245, setting pixel points of the historical device image with pixel values less than T to 0 and pixel points with pixel values greater than T to 255, completing the binarization processing.
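The iterative thresholding of claim 6 is an ISODATA-style scheme. A NumPy sketch; the convergence test uses a small tolerance instead of the claim's exact equality T_n = T, since the class means are floating-point:

```python
import numpy as np

def iterative_threshold(gray):
    """Iterative mean-midpoint binarization from claim 6.

    T starts at the global mean (S241); pixels at/above T (background) and
    below T (foreground) are averaged separately, and T is replaced by the
    midpoint of the two means (S243) until it stops changing (S244).
    Ties at exactly T are assigned to the background, an assumption.
    """
    g = gray.astype(np.float64)
    t = g.mean()                                  # S241: global-mean start
    while True:
        fg = g[g < t]                             # foreground pixels
        bg = g[g >= t]                            # background pixels
        if fg.size == 0 or bg.size == 0:
            break                                 # degenerate split: stop
        t_new = (fg.mean() + bg.mean()) / 2.0     # S243
        converged = abs(t_new - t) < 0.5          # S244, float-safe tolerance
        t = t_new
        if converged:
            break
    return np.where(g > t, 255, 0).astype(np.uint8)   # S245

gray = np.array([[10, 20, 30], [200, 210, 220]], dtype=np.uint8)
binary = iterative_threshold(gray)
```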
7. The computer vision-based device information statistical method according to claim 3, characterized in that: the step S26 specifically includes:
step S261, adjusting the brightness of the morphologically processed image of the historical device using a brightness adjustment formula:
dst_i = src_i·w + srcb_i·(1 − w) + k;
wherein dst_i represents the pixel channel value after brightness adjustment; src_i represents the pixel channel value of the historical device image before brightness adjustment; srcb_i represents the pixel channel value of a black image with all pixel values 0, generated at the size of the historical device image; w represents a weight coefficient; k represents a correction coefficient;
step S262, applying a first-order derivative (gradient) operator to the brightness-adjusted historical device image to complete sharpening;
step S263, the contrast of the sharpened historical device image is adjusted through a gamma conversion formula:
gs = c·r^γ;
wherein gs represents the pixel value of the historical device image after gamma transformation; c is a constant with value 255; r represents the grayscale image pixel value; γ represents the gamma transformation coefficient.
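The enhancement formulas of claim 7 can be sketched directly. Since srcb is all zeros, the blend reduces to src·w + k; for the gamma step, r is taken as the pixel value normalized to [0, 1], an assumption the claim leaves implicit (with c = 255 the output only stays in range for normalized r):

```python
import numpy as np

def adjust_brightness(src, w, k):
    """Claim 7 blend dst = src*w + srcb*(1-w) + k; srcb is all zeros,
    so the black image contributes nothing and the blend reduces to src*w + k."""
    srcb = np.zeros_like(src, dtype=np.float64)
    dst = src.astype(np.float64) * w + srcb * (1.0 - w) + k
    return np.clip(dst, 0, 255).astype(np.uint8)

def gamma_transform(gray, gamma, c=255.0):
    """Claim 7 gamma step gs = c * r**gamma, taking r as the pixel value
    normalized to [0, 1] (an assumption; the claim leaves the scale implicit)."""
    r = gray.astype(np.float64) / 255.0
    return np.clip(c * np.power(r, gamma), 0, 255).astype(np.uint8)

img = np.array([[64, 128]], dtype=np.uint8)
bright = adjust_brightness(img, w=1.2, k=10)    # brighten and lift blacks
dark = gamma_transform(img, gamma=2.0)          # gamma > 1 darkens midtones
```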
8. The computer vision-based device information statistical method according to claim 1, wherein: the step S30 specifically includes:
step S31, creating a bar code recognition model based on the YOLOV5 algorithm, and setting the training parameter group, an accuracy threshold and a training-count threshold of the bar code recognition model;
step S32, dividing the device image set into a training set, a verification set and a test set based on a preset proportion;
step S33, training the bar code recognition model by using the training set, and recording the training times;
step S34, verifying the trained bar code recognition model with the verification set, and judging whether the recognition accuracy exceeds the accuracy threshold or whether the number of training rounds exceeds the training-count threshold; if yes, proceeding to step S35; if not, adjusting the training parameter group and returning to step S33;
and step S35, testing the bar code recognition model with the highest recognition accuracy by using the test set to obtain a test result, and storing the bar code recognition model, the training parameter group and the test result.
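Training itself runs through YOLOv5's own scripts, but the dataset split of step S32 is plain bookkeeping. A sketch assuming an 8:1:1 train/verification/test proportion and a seeded shuffle (the claim says only "a preset proportion"):

```python
import random

def split_dataset(image_paths, ratios=(0.8, 0.1, 0.1), seed=0):
    """Split the device image set into train/verification/test sets (claim 8).

    The 8:1:1 default and the seeded shuffle are assumptions made for
    reproducibility; the claim only requires some preset proportion.
    """
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)           # deterministic shuffle
    n = len(paths)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = paths[:n_train]
    val = paths[n_train:n_train + n_val]
    test = paths[n_train + n_val:]               # remainder goes to test
    return train, val, test

train, val, test = split_dataset([f"img_{i:03d}.jpg" for i in range(100)])
```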
9. The computer vision-based device information statistical method according to claim 1, wherein: the step S40 specifically includes:
after preprocessing the new equipment image to be counted, including renaming, rotation, cropping, grayscale processing, binarization processing, morphological processing and image enhancement processing, inputting it into the trained bar code recognition model to obtain a bar code image.
10. The computer vision-based device information statistical method according to claim 1, wherein: the step S50 specifically includes:
step S51, identifying the equipment information and the number of bar code digits carried by the bar code image through the zbar library, and obtaining the corresponding equipment area, equipment category and shooting time from the file name of the new equipment image;
step S52, verifying the equipment category by using the number of bar code digits;
and step S53, storing the equipment information, equipment area, equipment category and shooting time as a CSV file, and automatically compiling equipment information statistics based on the CSV file.
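The bookkeeping of claim 10 can be sketched with the standard csv module; the decoded text itself would come from a zbar binding such as pyzbar, and the digit-count-per-category mapping below is hypothetical (the claims do not give one):

```python
import csv
import io

# Hypothetical digit counts per equipment category for the step S52 check;
# the real mapping is not specified in the claims.
EXPECTED_DIGITS = {"router": 13, "switch": 8}

def record_device(barcode_text, image_name):
    """Claim 10 bookkeeping: parse area/category/time from the file name
    (claim 2 order 'area + type + time', underscore separator assumed),
    verify the category against the decoded digit count, and emit a CSV row."""
    stem = image_name.rsplit(".", 1)[0]
    area, category, shot_time = stem.split("_")
    if EXPECTED_DIGITS.get(category) != len(barcode_text):   # step S52 check
        raise ValueError(f"digit count {len(barcode_text)} does not match {category}")
    buf = io.StringIO()
    csv.writer(buf).writerow([barcode_text, area, category, shot_time])
    return buf.getvalue()

row = record_device("6901234567892", "substationA_router_20211221093000.jpg")
```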
CN202111574075.7A 2021-12-21 2021-12-21 Equipment information statistical method based on computer vision Pending CN114445446A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111574075.7A CN114445446A (en) 2021-12-21 2021-12-21 Equipment information statistical method based on computer vision


Publications (1)

Publication Number Publication Date
CN114445446A true CN114445446A (en) 2022-05-06

Family

ID=81364644


Country Status (1)

Country Link
CN (1) CN114445446A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113099116A (en) * 2021-04-07 2021-07-09 京东数科海益信息科技有限公司 Equipment information collection method and device, robot and computer equipment
US20210264215A1 (en) * 2020-02-24 2021-08-26 Zebra Technologies Corporation Object recognition scanning systems and methods for implementing artificial based item determination
CN113591508A (en) * 2021-09-29 2021-11-02 广州思林杰科技股份有限公司 Bar code decoding method and device based on artificial intelligence target recognition and storage medium
CN113705749A (en) * 2021-08-31 2021-11-26 平安银行股份有限公司 Two-dimensional code identification method, device and equipment based on deep learning and storage medium
CN113780087A (en) * 2021-08-11 2021-12-10 同济大学 Postal parcel text detection method and equipment based on deep learning


Similar Documents

Publication Publication Date Title
CN109443480B (en) Water level scale positioning and water level measuring method based on image processing
CN101246549B (en) Method and apparatus for recognizing boundary line in an image information
CN103093225B (en) The binarization method of image in 2 D code
CN110210477B (en) Digital instrument reading identification method
CN111476109A (en) Bill processing method, bill processing apparatus, and computer-readable storage medium
CN109543753B (en) License plate recognition method based on self-adaptive fuzzy repair mechanism
CN117173661B (en) Asphalt road quality detection method based on computer vision
CN113083804A (en) Laser intelligent derusting method and system and readable medium
CN110598581B (en) Optical music score recognition method based on convolutional neural network
CN107679437A (en) Bar code image recognizer based on Zbar
CN113537037A (en) Pavement disease identification method, system, electronic device and storage medium
CN111652117B (en) Method and medium for segmenting multiple document images
CN114004858A (en) Method and device for identifying aviation cable surface code based on machine vision
CN106897997B (en) The method of detection ring bobbin tail yarn based on Computer Image Processing and pattern-recognition
CN115841633A (en) Power tower and power line associated correction power tower and power line detection method
CN112836541B (en) Automatic acquisition and identification method and device for 32-bit bar code of cigarette
CN114445446A (en) Equipment information statistical method based on computer vision
CN117115614B (en) Object identification method, device, equipment and storage medium for outdoor image
CN114067347A (en) Automatic verification method of power distribution station design drawing, operation control device and electronic equipment
CN110135425B (en) Sample labeling method and computer storage medium
CN115620329A (en) Stamp deviation intelligent identification method based on artificial intelligence
CN116363655A (en) Financial bill identification method and system
CN115063603A (en) Wood annual ring line accurate extraction and restoration method based on edge information
CN115049684A (en) Water gauge identification method and system based on water area segmentation
CN112950620A (en) Power transmission line damper deformation defect detection method based on cascade R-CNN algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination