CN109543487B - Automatic induction triggering method and system based on bar code edge segmentation - Google Patents

Automatic induction triggering method and system based on bar code edge segmentation

Info

Publication number
CN109543487B
CN109543487B
Authority
CN
China
Prior art keywords
value
image
pixel
edge detection
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811399970.8A
Other languages
Chinese (zh)
Other versions
CN109543487A (en)
Inventor
宋少龙
上官文娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou Totinfo Information Technology Co ltd
Original Assignee
Fuzhou Totinfo Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou Totinfo Information Technology Co ltd
Priority to CN201811399970.8A
Publication of CN109543487A
Application granted
Publication of CN109543487B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06K - GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00 - Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10 - Sensing by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/14 - Sensing using light without selection of wavelength, e.g. sensing reflected white light
    • G06K 7/1404 - Methods for optical code recognition
    • G06K 7/1439 - Including a method step for retrieval of the optical code
    • G06K 7/1452 - Detecting bar code edges
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/12 - Edge-based segmentation
    • G06T 7/13 - Edge detection
    • G06T 7/136 - Segmentation; Edge detection involving thresholding
    • G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an automatic induction triggering method and system based on bar code edge segmentation. A bar code image in the YUV (luma and chroma) coding mode is obtained, and the difference between its average gray value and that of the previously decoded frame is compared with a first threshold T1. If the difference does not exceed T1, decoding is not triggered. Otherwise, edge segmentation is further applied to the current image, and the absolute difference between the white-pixel counts of the current image and of the decoded image after bar code edge segmentation is compared with a third threshold T3. If it does not exceed T3, automatic induction is not triggered; otherwise, the decoding operation is triggered.

Description

Automatic induction triggering method and system based on bar code edge segmentation
Technical Field
The invention relates to an automatic induction triggering method and system of an image, in particular to an automatic induction triggering method and system based on bar code edge segmentation.
Background
A bar code is a graphic identifier that expresses a group of numeric or alphabetic information through bars and spaces of varying width and reflectivity, encoded according to a certain coding rule (code system). Reading the information represented by a bar code requires a bar code recognition system, composed of a bar code scanner, an amplifying and shaping circuit, a decoding interface circuit and a computer system. A bar code reader (the bar code scanner is also called a bar code scanning gun) scans the code to obtain a group of reflected light signals, which are converted by photoelectric conversion into a group of electronic signals corresponding to the bars and spaces; after decoding, these signals are restored to the corresponding characters and numbers and transmitted to the computer. Bar code technology has the advantages of fast input, high reliability, large information acquisition capacity, flexibility and practicality.
Before a bar code is identified by a device, bar code image preprocessing is usually added to prevent repeated triggering of automatic induction. Traditional preprocessing generally calculates the average gray value of the whole image; when this average changes, the image is taken to differ from the last identified image, and the automatic induction device is triggered to decode the current image. The great defect of this approach is that when the bar code image stays static but the ambient illumination changes to a certain extent, the average gray value of the image changes noticeably, so the average gray value lacks robustness to the illumination environment as an automatic induction parameter.
Disclosure of Invention
Therefore, the technical problem to be solved by the present invention is that prior-art preprocessing based solely on the average gray value cannot suppress interference from changes in external ambient light.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
An automatic induction triggering method based on bar code edge segmentation comprises the following steps:
S1: Acquire a frame of barcode image and convert it into the YUV color coding mode, wherein the value of the Y channel is taken as the gray value of the barcode image.
S2: Calculate the average of the Y-channel values of all pixels as the average gray value g of the frame of barcode image.
S3: Calculate the difference between the average gray value g of the frame and the average gray value g' of the previously decoded barcode image, and compare it with a first threshold T1. When the difference does not exceed T1, decoding is not triggered; when the difference exceeds T1, proceed to the next step. The first threshold T1 is in the range of 2 to 5.
S4: Perform edge detection on the frame of barcode image with the Sobel edge detection operator to obtain an edge detection image.
S5: Calculate the proportion pi of pixels with gray value i to the total number of pixels in the edge detection image, i.e. the gray-level distribution of the edge detection image, where i takes each value from 0 to 255.
S6: Calculate the cumulative sum P1(k) of pi, the cumulative pixel mean m(k), the global gray mean mG and the inter-class variance mB(k), where k takes each value from 0 to 255.
S7: Compare the inter-class variances mB(k) obtained in step S6 and take the pixel value k that maximizes mB(k) as the second threshold T2.
S8: When the gray value of a pixel of the edge detection image is greater than T2, set it to 255; otherwise set it to 0. This yields a binary image.
S9: Divide the binary image evenly into 10 by 10 blocks, count the number of white pixels in each block, and arrange the counts in order to obtain an array w[] of length 100.
S10: Compare w[] of the current frame with w0[] of the previously decoded frame element by element. If the difference at a position is smaller than a third threshold T3, mark that position as 1; otherwise mark it as 0, and record the number S of positions marked 0 with a counter. The third threshold is positively correlated with the resolution of the device used.
S11: When S is less than 10, decoding is not triggered. When S is not less than 10, decoding is triggered, and the average gray value g and the array w of this frame are stored for the next comparison.
In step S3, the first threshold T1 is positively correlated with the resolution of the image pickup apparatus used: the higher the resolution, the higher the value of T1.
In step S4, the filter templates adopted by the sobel edge detection operator are represented as the following two filter templates:
{-1,-2,-1;0,0,0;1,2,1},{-1,0,1;-2,0,2;-1,0,1}
In step S6, the formula for calculating the inter-class variance is:
mB(k) = [mG × P1(k) - m(k)]² ÷ {P1(k) × [1 - P1(k)]}.
In step S10, the third threshold T3 is calculated as:
T3 = m × n ÷ 10000,
where m and n are the numbers of pixels along the long and short sides, respectively, at the resolution of the image pickup device used.
An auto-induction triggering system based on barcode edge segmentation comprises a camera, a memory and a processor, wherein the camera is used for shooting images, the memory stores instructions, and the instructions are suitable for being loaded by the processor and executing the following steps:
The camera acquires a frame of barcode image and converts it into the YUV color coding mode, wherein the value of the Y channel is taken as the gray value of the barcode image.
The average of the Y-channel values of all pixels is calculated as the average gray value g of the frame of barcode image.
The difference between the average gray value g of the frame and the average gray value g' of the previously decoded barcode image is calculated and compared with a first threshold T1. When the difference does not exceed T1, decoding is not triggered; when the difference exceeds T1, the next step is carried out. The first threshold T1 is in the range of 2 to 5.
Edge detection is performed on the frame of barcode image with the Sobel edge detection operator to obtain an edge detection image.
The proportion pi of pixels with gray value i to the total number of pixels in the edge detection image is calculated, i.e. the gray-level distribution of the edge detection image, where i takes each value from 0 to 255.
The cumulative sum P1(k) of pi, the cumulative pixel mean m(k), the global gray mean mG and the inter-class variance mB(k) are calculated, where k takes each value from 0 to 255.
The inter-class variances mB(k) are compared, and the pixel value k that maximizes mB(k) is taken as the second threshold T2.
When the gray value of a pixel of the edge detection image is greater than T2, it is set to 255; otherwise it is set to 0. This yields a binary image.
The binary image is divided evenly into 10 by 10 blocks, the number of white pixels in each block is counted, and the counts are arranged in order to obtain an array w[] of length 100.
w[] of the current frame is compared with w0[] of the previously decoded frame element by element. If the difference at a position is smaller than a third threshold T3, that position is marked as 1; otherwise it is marked as 0, and the number S of positions marked 0 is recorded with a counter. The third threshold is positively correlated with the resolution of the device used.
When S is less than 10, decoding is not triggered. When S is not less than 10, decoding is triggered, and the average gray value g and the array w of this frame are stored for the next comparison.
The first threshold T1 is positively correlated with the resolution of the image pickup apparatus used: the higher the resolution, the higher the value of T1.
The filter template adopted by the sobel edge detection operator is represented by the following two filter templates:
{-1,-2,-1;0,0,0;1,2,1},{-1,0,1;-2,0,2;-1,0,1}。
The formula for calculating the inter-class variance is:
mB(k) = [mG × P1(k) - m(k)]² ÷ {P1(k) × [1 - P1(k)]}.
The calculation formula of the third threshold T3 is:
T3 = m × n ÷ 10000,
where m and n are the numbers of pixels along the long and short sides, respectively, at the resolution of the image pickup device used.
The invention has the following beneficial effects:
1. The automatic induction triggering method and system based on bar code edge segmentation convert the bar code image to the YUV color coding mode and use the luma (Y channel) gray value as the basis of computation, which represents ambient brightness more accurately than traditional color coding modes.
2. A bar code edge segmentation algorithm is added to the bar code image preprocessing, reducing the influence of illumination changes on the device's automatic induction. When the bar code image stays static while the ambient illumination changes to some extent, the average gray value of the image changes noticeably, but the per-block white-pixel counts after edge segmentation remain similar before and after the change, so the device judges the frames to be the same image and the automatic induction device is not falsely triggered.
3. Computing with the Sobel edge detection operator and the maximum inter-class variance method gives high processing efficiency and high resolution accuracy.
4. The comparison threshold is preset according to the image resolution of each device, so processing accuracy is preserved when devices are replaced.
Drawings
Fig. 1 is a flow chart of an auto-induction triggering method in the prior art.
Fig. 2 is a flowchart of an auto-induction triggering method based on barcode edge segmentation according to the present invention.
FIG. 3 is an original image of one embodiment of the present invention.
FIG. 4 is an edge detection image according to one embodiment of the invention.
FIG. 5 is a binarized image according to one embodiment of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and the specific embodiments.
Example one
Referring to fig. 2, an auto-induction triggering method based on barcode edge segmentation includes the following steps:
s1: a frame of barcode image is obtained and converted into a YUV color coding mode, as shown in fig. 3, wherein the value of the Y channel is taken as the gray value of the barcode image.
YUV is a color coding method, often used in various video processing components. YUV allows for reduced bandwidth of chrominance in view of human perception when encoding photos or videos. "Y" represents brightness (Luminince, Luma), and "U" and "V" represent Chroma and concentration (Chroma). The image shot by the common camera equipment is in an RGB mode, and the RGB to YUV mode can be directly converted through a common algorithm template.
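As a minimal sketch of step S1, the Y (luma) channel can be computed from an RGB frame using the standard BT.601 luma weights; the function name and pure-NumPy approach are illustrative, not taken from the patent:

```python
import numpy as np

def rgb_to_gray_y(rgb):
    """Convert an H x W x 3 RGB image to its YUV Y (luma) channel,
    used here as the gray value of the barcode image (BT.601 weights)."""
    rgb = np.asarray(rgb, dtype=np.float64)
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return np.clip(np.round(y), 0, 255).astype(np.uint8)
```

A production pipeline would more likely obtain the Y plane directly from the camera's YUV output instead of converting per frame.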
S2: and calculating the average value of the numerical values of all the Y channels of the pixels as the average gray value g of the one-frame bar code image.
S3: Calculate the difference between the average gray value g of the frame and the average gray value g' of the previously decoded barcode image, and compare it with the first threshold T1. When the difference does not exceed T1, decoding is not triggered; when the difference exceeds T1, proceed to the next step. The first threshold T1 is in the range of 2 to 5.
The system stores the average gray value g' of the previously decoded frame for use in each judgment and replaces it automatically whenever a new decoded image is generated. In this embodiment, since the resolution of the image pickup device is 640×480, the first threshold T1 is set to 3.
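Steps S2 and S3 reduce to a mean and a threshold comparison; a minimal sketch (the function name and the default T1 = 3 for the 640×480 embodiment are assumptions):

```python
import numpy as np

def passes_gray_check(gray, prev_mean_g, t1=3.0):
    """Steps S2/S3: compare the mean Y value g of the current frame with
    the stored mean g' of the last decoded frame. Returns (proceed, g):
    proceed is True only when |g - g'| exceeds the first threshold T1."""
    g = float(np.mean(gray))
    return abs(g - prev_mean_g) > t1, g
```

The caller would store the returned g whenever a frame is eventually decoded, so it becomes the g' of the next comparison.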
S4: and carrying out edge detection processing on the one-frame bar code image by using a sobel edge detection operator to obtain an edge detection image.
Image edge detection greatly reduces the data volume, eliminates information that can be considered irrelevant, and retains the important structural attributes of the image. Many edge detection methods exist, and most can be divided into two categories: search-based and zero-crossing-based. Search-based methods detect boundaries by finding maxima and minima in the first derivative of the image, usually by locating the boundary in the direction of the largest gradient. Zero-crossing-based methods find boundaries by locating zero crossings of the image's second derivative, usually Laplacian zero crossings or zero crossings expressed by a nonlinear difference. This method uses the Sobel edge detection operator, a search-based method, to perform edge detection on the image. The Sobel algorithm is simple and practical, and in practical applications it is more efficient than other edge detection methods.
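A plain-NumPy sketch of the Sobel step S4, using the two 3×3 filter templates given later in the text; a real implementation would more likely call an optimized library routine such as OpenCV's Sobel filter:

```python
import numpy as np

# The two Sobel templates from the patent: vertical-gradient and
# horizontal-gradient 3x3 kernels.
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.float64)
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)

def sobel_edges(gray):
    """Sobel edge magnitude over a 2-D gray image (a sketch; border
    pixels are left at 0 and magnitudes are clipped to 255)."""
    gray = np.asarray(gray, dtype=np.float64)
    h, w = gray.shape
    out = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = gray[i - 1:i + 2, j - 1:j + 2]
            gx = np.sum(win * SOBEL_X)   # horizontal gradient
            gy = np.sum(win * SOBEL_Y)   # vertical gradient
            out[i, j] = min(255.0, np.hypot(gx, gy))
    return out.astype(np.uint8)
```

The double loop is for clarity only; a vectorized convolution gives identical results far faster.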
S5: and calculating the proportion pi of the number of pixels with the gray scale value i in the edge detection image to the total number of pixels in the edge detection image, namely calculating the gray scale distribution of the edge detection image, wherein i is a value of 0-255.
S6: calculating the cumulative sum P1(k) of pi, the cumulative pixel mean m (k), the global gray mean mG, and the inter-class variance mB (k), wherein the k value is 0 to 255, i.e. calculating the pixel value k as each item value of 0-255.
This method uses the maximum inter-class variance method to divide the image into two parts, background and target, according to its gray-level characteristics. The larger the inter-class variance between background and target, the larger the difference between the two parts making up the image; misclassifying part of the target as background, or part of the background as target, reduces this difference. A segmentation that maximizes the inter-class variance therefore minimizes the probability of misclassification.
S7: the magnitude of the inter-class variance mb (k) obtained in step S6 is compared to obtain the pixel value k that maximizes the inter-class variance mb (k) as the second threshold T2.
S8: when the gray value of the pixel point of the edge detection image is greater than T2, setting the pixel value to 255, otherwise setting the pixel value to 0, and obtaining a binary image as shown in fig. 5.
S9: and equally dividing the binary image into 10 by 10 blocks, counting the number of white pixel points in each block, and sequentially sequencing to obtain an array w [ ] with the length of 100. In this example, the array w [ ] is [10,23,11,23 … 0,2,1 ].
S10: and sequentially comparing the difference between w [ ] of the one frame of barcode image and w0[ ] of the last frame of decoded barcode image, if the difference between the same position values of w [ ] and w0[ ] is smaller than a third threshold value T3, marking the position as 1, otherwise marking the position as 0, and recording the times S marked as 0 by a counter, wherein the third threshold value is positively correlated with the adopted equipment pixel level. In this example, T3 is 30.72, and the array w [ ] [10,23,11,23 … 0,2,1] is compared with w0[ ] [21,22,31,4 … 0,3,33], and S ═ 19.
The system stores the array w0[] of the previously decoded frame for use in each decision and replaces it automatically whenever a new decoded image is generated.
S11: when S is less than 10, decoding is not triggered; and when the S is not less than 10, triggering decoding, storing the average gray value g and the number group w of the frame of barcode image, and using the average gray value g and the number group w as next comparison. In this embodiment, when S19 is greater than 10, decoding is triggered.
In step S3, the first threshold T1 is positively correlated with the resolution of the image pickup apparatus used: the higher the resolution, the higher the value of T1. Based on results from a number of prior tests, T1 takes a value between 2 and 5.
In step S4, the filter templates used by the Sobel edge detection operator are the following two: {-1,-2,-1; 0,0,0; 1,2,1} and {-1,0,1; -2,0,2; -1,0,1}.
The Sobel operator is one of the most important operators in pixel-level image edge detection. It is a discrete first-order difference operator that computes an approximation of the first-order gradient of the image brightness function. Applying this operator at any point of the image yields the corresponding gradient vector or its normal vector.
In step S6, the formula for calculating the inter-class variance is: mB(k) = [mG × P1(k) - m(k)]² ÷ {P1(k) × [1 - P1(k)]}.
In step S10, the third threshold T3 is calculated as T3 = m × n ÷ 10000, where m and n are the numbers of pixels along the long and short sides, respectively, at the resolution of the photographing device used.
In this embodiment, the resolution of the image pickup device is 640 × 480, so T3 = 640 × 480 ÷ 10000 = 30.72. After the image is divided evenly into 10 × 10 blocks, each block contains 3072 pixels; if the difference between elements at the same position of w[] and w0[] is less than 30.72, the block is judged to be similar to the block at the same position in the previously decoded image.
The automatic induction triggering method and system based on bar code edge segmentation convert the bar code image to the YUV color coding mode and use the luma (Y channel) gray value as the basis of computation, which represents ambient brightness more accurately than traditional color coding modes. A bar code edge segmentation algorithm is added to the bar code image preprocessing, reducing the influence of illumination changes on the device's automatic induction: when the bar code image stays static while the ambient illumination changes to some extent, the average gray value of the image changes noticeably, but the per-block white-pixel counts after edge segmentation remain similar before and after the change, so the device judges the frames to be the same image and the automatic induction device is not falsely triggered.
Example two
An auto-induction triggering system based on barcode edge segmentation comprises a camera, a memory and a processor, wherein the camera is used for shooting images, the memory stores instructions, and the instructions are suitable for being loaded by the processor and executing the following steps:
The camera acquires a frame of barcode image and converts it into the YUV color coding mode, wherein the value of the Y channel is taken as the gray value of the barcode image.
The average of the Y-channel values of all pixels is calculated as the average gray value g of the frame of barcode image.
The difference between the average gray value g of the frame and the average gray value g' of the previously decoded barcode image is calculated and compared with a first threshold T1. When the difference does not exceed T1, decoding is not triggered; when the difference exceeds T1, the next step is carried out. The first threshold T1 is in the range of 2 to 5.
Edge detection is performed on the frame of barcode image with the Sobel edge detection operator to obtain an edge detection image.
The proportion pi of pixels with gray value i to the total number of pixels in the edge detection image is calculated, i.e. the gray-level distribution of the edge detection image, where i takes each value from 0 to 255.
The cumulative sum P1(k) of pi, the cumulative pixel mean m(k), the global gray mean mG and the inter-class variance mB(k) are calculated, where k takes each value from 0 to 255.
The inter-class variances mB(k) are compared, and the pixel value k that maximizes mB(k) is taken as the second threshold T2.
When the gray value of a pixel of the edge detection image is greater than T2, it is set to 255; otherwise it is set to 0. This yields a binary image.
The binary image is divided evenly into 10 by 10 blocks, the number of white pixels in each block is counted, and the counts are arranged in order to obtain an array w[] of length 100.
w[] of the current frame is compared with w0[] of the previously decoded frame element by element. If the difference at a position is smaller than a third threshold T3, that position is marked as 1; otherwise it is marked as 0, and the number S of positions marked 0 is recorded with a counter. The third threshold is positively correlated with the resolution of the device used.
When S is less than 10, decoding is not triggered. When S is not less than 10, decoding is triggered, and the average gray value g and the array w of this frame are stored for the next comparison.
The first threshold T1 is positively correlated with the resolution of the image pickup apparatus used: the higher the resolution, the higher the value of T1.
The filter template adopted by the sobel edge detection operator is represented by the following two filter templates:
{-1,-2,-1;0,0,0;1,2,1},{-1,0,1;-2,0,2;-1,0,1}。
The formula for calculating the inter-class variance is:
mB(k) = [mG × P1(k) - m(k)]² ÷ {P1(k) × [1 - P1(k)]}.
The calculation formula of the third threshold T3 is:
T3 = m × n ÷ 10000,
where m and n are the numbers of pixels along the long and short sides, respectively, at the resolution of the image pickup device used.
The method converts the bar code image to the YUV color coding mode and uses the luma (Y channel) gray value as the basis of computation, which represents ambient brightness more accurately than traditional color coding modes. A bar code edge segmentation algorithm is added to the bar code image preprocessing, reducing the influence of illumination changes on the device's automatic induction: when the bar code image stays static while the ambient illumination changes to some extent, the average gray value of the image changes noticeably, but the per-block white-pixel counts after edge segmentation remain similar before and after the change, so the device judges the frames to be the same image and the automatic induction device is not falsely triggered. Computing with the Sobel edge detection operator and the maximum inter-class variance method gives high processing efficiency and high resolution accuracy. The comparison threshold is preset according to the image resolution of each device, so processing accuracy is preserved when devices are replaced.
It should be understood that the above examples are only for clear illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaust all embodiments here, and obvious variations or modifications derived from them remain within the scope of the invention.

Claims (10)

1. An automatic induction triggering method based on bar code edge segmentation is characterized by comprising the following steps:
s1: acquiring a frame of barcode image and converting the frame of barcode image into a YUV color coding mode, wherein the numerical value of a Y channel is taken as the gray value of the barcode image;
s2: calculating the average value of the numerical values of all the pixel Y channels as the average gray value g of the one-frame bar code image;
s3: calculating the difference value between the average gray value g of the one frame of barcode image and the average gray value g' of the decoded barcode image of the previous frame, comparing the difference value with a first threshold value T1, and when the difference value of the average gray values does not exceed the first threshold value T1, not triggering decoding; when the difference value of the average gray values exceeds a preset threshold value T1, the next step is carried out; the first threshold value T1 is in a range of 2 to 5;
s4: performing edge detection processing on the one-frame barcode image by using a sobel edge detection operator to obtain an edge detection image;
s5: calculating the proportion pi of the number of pixels with the gray scale value i in the edge detection image to the total number of pixels in the edge detection image, namely calculating the gray scale distribution of the edge detection image, wherein i is each numerical value of 0-255;
s6: calculating an accumulation sum P1(k) of pi, an accumulation pixel mean m (k), a global gray mean mG and an inter-class variance mB (k), wherein the k value is 0 to 255, namely calculating the pixel value k as each numerical value of 0-255;
s7: comparing the magnitude of the inter-class variance mb (k) obtained in step S6 to obtain a pixel value k which maximizes the inter-class variance mb (k), and using the pixel value k as a second threshold T2;
s8: when the gray value of the pixel point of the edge detection image is larger than T2, setting the gray value of the pixel to be 255, otherwise, setting the gray value to be 0, and obtaining a binary image;
s9: equally dividing the binary image into 10 by 10 blocks, counting the number of white pixel points in each block, and sequentially sequencing to obtain an array w [ ] with the length of 100;
s10: sequentially comparing the difference between w [ ] of the one frame of barcode image and w0[ ] of the previous frame of decoded barcode image, if the difference between the same position values of w [ ] and w0[ ] is smaller than a third threshold value T3, marking the position as 1, otherwise marking the position as 0, and recording the times S marked as 0 by a counter, wherein the third threshold value is positively correlated with the adopted equipment pixel level;
s11: when S is less than 10, decoding is not triggered; and when the S is not less than 10, triggering decoding, storing the average gray value g and the number group w of the frame of barcode image, and using the average gray value g and the number group w as next comparison.
2. The method of claim 1, wherein in the step S3, the first threshold T1 is positively correlated with the pixel level of the image capturing device employed; the higher the pixel level, the higher the value of T1.
3. The method according to claim 2, wherein in the step S4, the sobel edge detection operator uses the following two filter templates:
{-1,-2,-1;0,0,0;1,2,1},{-1,0,1;-2,0,2;-1,0,1}。
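As a quick sanity check (not part of the patent text), correlating each template with a 3x3 patch containing a vertical intensity step shows that the second template responds strongly while the first, which detects horizontal edges, gives zero; the patch values are illustrative:

```python
# The two templates from claim 3, applied as a correlation to a 3x3 patch
# containing a vertical intensity step (10 | 200).
ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # first template: horizontal edges
kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # second template: vertical edges
patch = [[10, 10, 200], [10, 10, 200], [10, 10, 200]]

corr = lambda k, p: sum(k[i][j] * p[i][j] for i in range(3) for j in range(3))
gx = corr(kx, patch)  # strong response to the vertical step
gy = corr(ky, patch)  # zero response: no horizontal edge in the patch
```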
4. The method of claim 3, wherein in the step S6, the formula for calculating the inter-class variance is:
mB(k)=[mG×P1(k)-m(k)]²÷P1(k)÷[1-P1(k)].
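The inter-class variance above is the standard Otsu objective; a compact NumPy sketch of steps S5 to S7 starting from a gray-level histogram (function name and sample histogram are illustrative):

```python
import numpy as np

def otsu_from_histogram(hist):
    """Steps S5-S7: from the gray-level distribution pi, form the cumulative
    sum P1(k), cumulative mean m(k) and global mean mG, then return the k
    maximizing mB(k) = [mG*P1(k) - m(k)]**2 / (P1(k) * (1 - P1(k)))."""
    p = hist / hist.sum()                       # pi for i = 0..255
    P1 = np.cumsum(p)
    m = np.cumsum(np.arange(len(p)) * p)
    mG = m[-1]
    denom = P1 * (1.0 - P1)
    mB = np.zeros_like(P1)
    ok = denom > 0                              # avoid division by zero
    mB[ok] = (mG * P1[ok] - m[ok]) ** 2 / denom[ok]
    return int(np.argmax(mB))                   # the second threshold T2

# A bimodal histogram: 8 pixels at gray level 50 and 8 at gray level 200.
hist = np.zeros(256)
hist[50] = 8
hist[200] = 8
t2 = otsu_from_histogram(hist)                  # falls between the two modes
```

Binarizing with "gray value > T2 maps to 255, otherwise 0" (step S8) then sends the dark mode to 0 and the bright mode to 255.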
5. The method of claim 4, wherein in the step S10, the third threshold T3 is calculated by the formula:
T3=m×n÷10000,
wherein m and n are respectively the numbers of pixels along the long side and the short side of the resolution of the image capture device employed.
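Because the binary image is split into a 10 by 10 grid, each block holds m×n/100 pixels, so T3 equals one percent of a block's pixel count; a small sketch (the 1920x1080 resolution is an assumed example):

```python
def third_threshold(m, n):
    """T3 = m*n/10000, i.e. 1% of the m*n/100 pixels in each of the 100 blocks."""
    return m * n / 10000

# For an assumed 1920x1080 sensor, each block holds 20736 pixels and T3 is
# about 207, so roughly 1% of a block must flip before a position counts as changed.
t3 = third_threshold(1920, 1080)
```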
6. An auto-induction triggering system based on barcode edge segmentation, characterized by comprising a camera, a memory and a processor, wherein the camera is used for capturing images, and the memory stores instructions adapted to be loaded by the processor to execute the following steps:
the camera acquires a frame of barcode image and converts the frame of barcode image into a YUV color coding mode, wherein the numerical value of a Y channel is taken as the gray value of the barcode image;
calculating the average value of the numerical values of all the pixel Y channels as the average gray value g of the one-frame bar code image;
calculating the difference between the average gray value g of the one frame of barcode image and the average gray value g' of the previously decoded barcode image, and comparing the difference with a first threshold T1; when the difference of the average gray values does not exceed the first threshold T1, not triggering decoding; when the difference exceeds the first threshold T1, proceeding to the next step; the first threshold T1 is in the range of 2 to 5;
performing edge detection processing on the one-frame barcode image by using a sobel edge detection operator to obtain an edge detection image;
calculating the proportion pi of the number of pixels with the gray scale value i in the edge detection image to the total number of pixels in the edge detection image, namely calculating the gray scale distribution of the edge detection image, wherein i is each numerical value of 0-255;
calculating the cumulative sum P1(k) of pi, the cumulative pixel mean m(k), the global gray mean mG and the inter-class variance mB(k) for each pixel value k from 0 to 255;
comparing the values of all the inter-class variances mB(k) to obtain the pixel value k that maximizes mB(k), and using that pixel value k as a second threshold T2;
when the gray value of the pixel point of the edge detection image is larger than T2, setting the gray value of the pixel to be 255, otherwise, setting the gray value to be 0, and obtaining a binary image;
equally dividing the binary image into 10 by 10 blocks, counting the number of white pixel points in each block, and arranging the counts in order to obtain an array w[ ] of length 100;
sequentially comparing the array w[ ] of the current frame image with the array w0[ ] of the previously decoded frame; if the difference between the values at the same position of w[ ] and w0[ ] is smaller than a third threshold T3, marking the position as 1, otherwise marking it as 0, and recording with a counter the number S of positions marked 0, wherein the third threshold is positively correlated with the pixel level of the device employed;
when S is less than 10, not triggering decoding; when S is not less than 10, triggering decoding, and storing the average gray value g and the array w[ ] of the one frame of barcode image for the next comparison.
7. The system of claim 6, wherein the first threshold T1 is positively correlated with the pixel level of the camera; the higher the pixel level, the higher the value of T1.
8. The system of claim 7, wherein the sobel edge detection operator uses the following two filter templates:
{-1,-2,-1;0,0,0;1,2,1},{-1,0,1;-2,0,2;-1,0,1}。
9. The system of claim 7, wherein the formula for calculating the between-class variance is:
mB(k)=[mG×P1(k)-m(k)]²÷P1(k)÷[1-P1(k)].
10. The system of claim 7, wherein the third threshold T3 is calculated by the following formula:
T3=m×n÷10000,
wherein m and n are respectively the numbers of pixels along the long side and the short side of the resolution of the image capture device employed.
CN201811399970.8A 2018-11-22 2018-11-22 Automatic induction triggering method and system based on bar code edge segmentation Active CN109543487B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811399970.8A CN109543487B (en) 2018-11-22 2018-11-22 Automatic induction triggering method and system based on bar code edge segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811399970.8A CN109543487B (en) 2018-11-22 2018-11-22 Automatic induction triggering method and system based on bar code edge segmentation

Publications (2)

Publication Number Publication Date
CN109543487A CN109543487A (en) 2019-03-29
CN109543487B true CN109543487B (en) 2022-04-01

Family

ID=65849352

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811399970.8A Active CN109543487B (en) 2018-11-22 2018-11-22 Automatic induction triggering method and system based on bar code edge segmentation

Country Status (1)

Country Link
CN (1) CN109543487B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI726525B (en) * 2019-12-09 2021-05-01 新唐科技股份有限公司 Image binarization method and electronic device
CN111523341B (en) * 2020-04-03 2023-07-11 青岛进化者小胖机器人科技有限公司 Binarization method and device for two-dimensional code image
CN111415363B (en) * 2020-04-20 2023-04-18 电子科技大学中山学院 Image edge identification method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101170641A (en) * 2007-12-05 2008-04-30 北京航空航天大学 A method for image edge detection based on threshold sectioning
CN103927526A (en) * 2014-04-30 2014-07-16 长安大学 Vehicle detecting method based on Gauss difference multi-scale edge fusion
CN107610144A (en) * 2017-07-21 2018-01-19 哈尔滨工程大学 A kind of improved IR image segmentation method based on maximum variance between clusters

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AUPO801897A0 (en) * 1997-07-15 1997-08-07 Silverbrook Research Pty Ltd Image processing method and apparatus (ART24)
US20030035586A1 (en) * 2001-05-18 2003-02-20 Jim Chou Decoding compressed image data
KR102260805B1 (en) * 2014-08-06 2021-06-07 삼성전자주식회사 Image searching device and method thereof
CN106228138A (en) * 2016-07-26 2016-12-14 国网重庆市电力公司电力科学研究院 A kind of Road Detection algorithm of integration region and marginal information
CN108022233A (en) * 2016-10-28 2018-05-11 沈阳高精数控智能技术股份有限公司 A kind of edge of work extracting method based on modified Canny operators

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101170641A (en) * 2007-12-05 2008-04-30 北京航空航天大学 A method for image edge detection based on threshold sectioning
CN103927526A (en) * 2014-04-30 2014-07-16 长安大学 Vehicle detecting method based on Gauss difference multi-scale edge fusion
CN107610144A (en) * 2017-07-21 2018-01-19 哈尔滨工程大学 A kind of improved IR image segmentation method based on maximum variance between clusters

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Qi Jia et al., "Adaptive-threshold image edge detection based on fusion and morphology", Digital Video (数字视频), vol. 38, no. 13, pp. 36-38, 2 July 2014 *

Also Published As

Publication number Publication date
CN109543487A (en) 2019-03-29

Similar Documents

Publication Publication Date Title
CN109344676B (en) Automatic induction triggering method and system based on Hash algorithm
CN109325954B (en) Image segmentation method and device and electronic equipment
CN108241645B (en) Image processing method and device
US20190130169A1 (en) Image processing method and device, readable storage medium and electronic device
EP3021575B1 (en) Image processing device and image processing method
EP3082065A1 (en) Duplicate reduction for face detection
CN110580428A (en) image processing method, image processing device, computer-readable storage medium and electronic equipment
CN109543487B (en) Automatic induction triggering method and system based on bar code edge segmentation
KR101747216B1 (en) Apparatus and method for extracting target, and the recording media storing the program for performing the said method
EP3644599B1 (en) Video processing method and apparatus, electronic device, and storage medium
US9235779B2 (en) Method and apparatus for recognizing a character based on a photographed image
US11049256B2 (en) Image processing apparatus, image processing method, and storage medium
KR100422709B1 (en) Face detecting method depend on image
CN103946866A (en) Text detection using multi-layer connected components with histograms
US20150279054A1 (en) Image retrieval apparatus and image retrieval method
CN110991310B (en) Portrait detection method, device, electronic equipment and computer readable medium
CN111079613B (en) Gesture recognition method and device, electronic equipment and storage medium
US20230127009A1 (en) Joint objects image signal processing in temporal domain
US20110085026A1 (en) Detection method and detection system of moving object
CN110210467B (en) Formula positioning method of text image, image processing device and storage medium
CN111932462B (en) Training method and device for image degradation model, electronic equipment and storage medium
CN111080683B (en) Image processing method, device, storage medium and electronic equipment
CN111160340A (en) Moving target detection method and device, storage medium and terminal equipment
CN108769521B (en) Photographing method, mobile terminal and computer readable storage medium
CN116363753A (en) Tumble detection method and device based on motion history image and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant