CN116110053A - Container surface information detection method based on image recognition - Google Patents

Container surface information detection method based on image recognition Download PDF

Info

Publication number
CN116110053A
CN116110053A (Application CN202310390737.8A)
Authority
CN
China
Prior art keywords
image block
image
importance degree
character area
neighborhood
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310390737.8A
Other languages
Chinese (zh)
Other versions
CN116110053B (en)
Inventor
张秋荣
岳增才
卢顺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Shuyue Vehicle Co ltd
Jining Energy Development Group Co ltd
Original Assignee
Shandong Shuyue Vehicle Co ltd
Jining Energy Development Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Shuyue Vehicle Co ltd, Jining Energy Development Group Co ltd filed Critical Shandong Shuyue Vehicle Co ltd
Priority to CN202310390737.8A priority Critical patent/CN116110053B/en
Publication of CN116110053A publication Critical patent/CN116110053A/en
Application granted granted Critical
Publication of CN116110053B publication Critical patent/CN116110053B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/148Segmentation of character regions
    • G06V30/153Segmentation of character regions using recognition of characters or words
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/54Extraction of image or video features relating to texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/148Segmentation of character regions
    • G06V30/15Cutting or merging image elements, e.g. region growing, watershed or clustering-based techniques
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Character Input (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image data processing, in particular to a container surface information detection method based on image recognition. The method comprises the following steps: acquiring a gray image of a container, acquiring an initial character area, and dividing the initial character area into at least two image blocks; determining the importance degree of the image block, determining the image block to be processed from the image block according to the importance degree of other image blocks in the neighborhood of the image block, and determining the importance degree adjustment coefficient of the image block to be processed; obtaining self-adaptive transmissivity of an image block to be processed, and performing image processing on the image block to be processed to obtain a target image block; and processing the initial character area based on the original transmissivity to obtain a target character area, replacing the target character area at the corresponding position by using the target image block to obtain a character area to be detected, and detecting character information in the character area to be detected as container surface information. The invention can effectively improve the reliability of the detection of the surface information of the container.

Description

Container surface information detection method based on image recognition
Technical Field
The invention relates to the technical field of image data processing, in particular to a container surface information detection method based on image recognition.
Background
In marine transportation, the surface information of cargo containers needs to be acquired and detected. Container surface information is mostly character information sprayed on the container surface. Because the detection scene is mostly outdoors, factors that frequently occur at sea, such as foggy and rainy weather and uneven illumination, blur the acquired images, so the acquired images need to be deblurred.
In the related art, a preset deep neural network model is generally used for the deblurring, and the container surface information is then obtained through detection. However, because containers are exposed at sea all year round, the salinity of sea water and the high-humidity environment cause the characters on the container surface to fade and to develop irregular rust fading, so the texture of the faded parts is complex. Owing to the complex edge curve texture, directly using a deep neural network model cannot effectively identify the various characters in such texture-complex areas, the accuracy of character recognition is low, and the reliability of container surface information detection is insufficient.
Disclosure of Invention
In order to solve the technical problem of insufficient reliability of container surface information detection, the invention provides a container surface information detection method based on image recognition, which adopts the following technical scheme:
the invention provides a container surface information detection method based on image recognition, which comprises the following steps:
acquiring a container gray level image, detecting the container gray level image to obtain an initial character area, and dividing the initial character area into at least two image blocks according to the width of characters in the initial character area;
determining a gradient value average value and gradient direction difference of edge pixel points in the image block, determining importance degree of the image block according to the gradient value average value and the gradient direction difference, determining an image block to be processed from the image block according to the importance degree of other image blocks in the image block neighborhood, and determining an importance degree adjustment coefficient of the image block to be processed;
acquiring the original transmissivity of the gray level image of the container, acquiring the self-adaptive transmissivity of the image block to be processed according to the importance degree, the importance degree adjustment coefficient and the original transmissivity, and performing image processing on the image block to be processed according to the self-adaptive transmissivity to obtain a target image block;
and carrying out image processing on the initial character area based on the original transmissivity to obtain a target character area, using the target image block to replace the target character area at a corresponding position to obtain a character area to be detected, detecting character information in the character area to be detected, and taking the character information as container surface information.
Further, the determining the gradient value mean value and the gradient direction difference of the edge pixel points in the image block includes:
performing edge detection processing on the image block to obtain edge pixel points, and calculating gradient amplitude values and gradient directions of the edge pixel points;
calculating the average value of the gradient magnitudes of all the edge pixel points in the image block as the average value of the gradient values;
and determining other two edge pixel points closest to any edge pixel point as adjacent pixel points, calculating the average value of gradient directions of the adjacent pixel points as an adjacent direction average value, and calculating the difference between the gradient directions of any edge pixel point and the adjacent direction average value as the gradient direction difference.
Further, the determining the importance degree of the image block according to the gradient value mean value and the gradient direction difference includes:
normalizing the gradient value mean value to obtain an amplitude normalized value;
calculating gradient direction differences and values of all the edge pixel points in the image block, and carrying out normalization processing on the gradient direction differences and values to obtain direction normalization values;
and calculating the product of the amplitude normalization value and the direction normalization value as the importance degree.
Further, the determining the image block to be processed from the image blocks according to the importance degrees of other image blocks in the image block neighborhood includes:
determining other image blocks in the eight adjacent areas of the image block as adjacent area image blocks, and respectively obtaining the importance degree of the adjacent area image blocks as adjacent area importance degree;
and calculating the number of the neighborhood image blocks with the neighborhood importance degree larger than a preset importance degree threshold as a target number, and taking the image blocks with the target number larger than the preset number threshold as the image blocks to be processed.
Further, the determining the importance degree adjustment coefficient of the image block to be processed includes:
selecting a block from the neighborhood image blocks as a reference image block, wherein the importance degree of the reference image block is a reference importance degree;
determining a neighborhood importance degree with the smallest difference from the reference importance degree as a smallest neighborhood importance degree, and calculating the difference between the reference importance degree and the smallest neighborhood importance degree as a reference importance degree difference of the reference image block;
traversing all the neighborhood image blocks, and calculating the sum of the reference importance degree differences of all the neighborhood image blocks as neighborhood importance degree differences;
calculating the product of the target quantity and the neighborhood importance degree difference as an adjustment coefficient influence factor, and carrying out normalization processing on the adjustment coefficient influence factor to obtain the importance degree adjustment coefficient.
Further, the obtaining the adaptive transmittance of the image block to be processed according to the importance degree, the importance degree adjustment coefficient and the original transmittance includes:
the adaptive transmittance is obtained by using an adaptive transmittance formula, and the corresponding formula is as follows:
(The formula is given only as an image in the original publication.) In the formula, the symbols denote, respectively: the adaptive transmittance, the original transmittance, the importance degree, the importance degree adjustment coefficient, and the normalization operation.
Further, the image processing is performed on the image block to be processed according to the adaptive transmittance, so as to obtain a target image block, including:
and carrying out dark channel prior processing on the image block to be processed according to the self-adaptive transmissivity to obtain the target image block.
Further, the detecting the gray image of the container to obtain an initial character area includes:
and carrying out semantic segmentation processing on the container gray level image by using a neural network model, determining character pixel points, and taking the area to which the character pixel points belong as the initial character area.
Further, the image processing is performed on the initial character area based on the original transmissivity to obtain a target character area, including:
and carrying out dark channel prior processing on the initial character area based on the original transmissivity to obtain the target character area.
The invention has the following beneficial effects:
in summary, since the image recognition is directly performed on the area with complex texture, the recognition accuracy is not high, and the reliability of the detection of the surface information of the container is insufficient, in order to effectively improve the recognition accuracy of the texture complex position, the image block to be processed is recognized, and the adaptive transmissivity is set, so that the image block to be processed is subjected to the adaptive processing, and the reliability of the detection of the surface information of the container is effectively enhanced. The initial character area is divided into at least two image blocks according to the width of the characters in the initial character area, so that the initial character area can be divided according to the set size of the image blocks, and the recognition rationality of the image blocks to be processed subsequently is ensured. The importance degree of the image block is determined through the gradient value mean value and gradient direction difference of the edge pixel points in the image block, and the image block to be processed is determined, so that the image features in the neighborhood of the image block can be effectively combined, the image block to be processed is determined according to the image features, and the accurate identification of the image block to be processed is ensured. The self-adaptive transmissivity of the image block to be processed is obtained through the importance degree, the importance degree adjusting coefficient and the original transmissivity, the image block to be processed is processed according to the self-adaptive transmissivity, the self-adaptive transmissivity of the image block to be processed can be accurately determined, the blurring effect caused by factors such as rain and fog, uneven illumination and the like is effectively eliminated according to the self-adaptive transmissivity, and the processing accuracy of the image block to be processed is guaranteed. The target image block is used for replacing the target character area at the corresponding position to obtain the character area to be detected, character information in the character area to be detected is detected to be container surface information, the self-adaptive transmissivity can be used for processing the texture complex area, the accuracy of the character information in the character area to be detected is ensured, and the accuracy and the reliability of the container surface information are further ensured. In summary, the invention can effectively improve the reliability of the detection of the surface information of the container.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for detecting surface information of a container based on image recognition according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an initial character area according to one embodiment of the present invention;
FIG. 3 is a schematic diagram of a left-right boundary spacing of a character according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an eight neighborhood provided in one embodiment of the present invention;
fig. 5 is a schematic diagram of a faded character according to an embodiment of the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following detailed description refers to specific implementation, structure, characteristics and effects of the container surface information detection method based on image recognition according to the invention by combining the accompanying drawings and the preferred embodiment. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the container surface information detection method based on image recognition.
Referring to fig. 1, a flowchart of a method for detecting surface information of a container based on image recognition according to an embodiment of the present invention is shown, where the method includes:
s101: and acquiring a container gray level image, detecting the container gray level image to obtain an initial character area, and dividing the initial character area into at least two image blocks according to the width of characters in the initial character area.
In the embodiment of the invention, the image acquisition device can be arranged in the port area to acquire the original image of the container surface, and the original image of the surface is subjected to preprocessing such as image denoising, image graying and the like to obtain the gray image of the container, wherein the image denoising can be specifically such as mean value filtering denoising, the image graying can be specifically such as weighted average graying, and of course, the image can be preprocessed by using various other arbitrary possible realization modes, wherein the image preprocessing is a technology well known in the art, and the details are not repeated.
Further, in the embodiment of the present invention, detecting the gray image of the container to obtain the initial character area includes: and carrying out semantic segmentation processing on the gray level image of the container by using a neural network model, determining character pixel points, and taking the area where the character pixel points belong as an initial character area.
The initial character area is an area of the container surface including characters, as shown in fig. 2, and fig. 2 is a schematic diagram of the initial character area provided by an embodiment of the present invention, it may be understood that the container surface includes a background area and a character area, and a preset neural network model may be used to perform semantic segmentation processing on a gray image of the container, where the neural network model may be, for example, a full convolution (Fully Convolutional Network, FCN) network model, or may also be, for example, a deep neural (Deep Neural Networks, DNN) network model, which is not limited thereto.
In the embodiment of the invention, the gray level image of the container is input into a pre-trained neural network model, semantic segmentation processing is carried out through the neural network model, and character pixel points are output.
The character pixel points are the pixel points representing characters obtained through the semantic segmentation processing, and together they form the initial character area. It should be understood that, owing to factors such as rainy and foggy weather and uneven illumination, the initial character area obtained through semantic segmentation can only serve as a rough character area: within it, characters with simple strokes are still easy to recognize, whereas the recognizability at positions with complex strokes is low, so the character area at complex-stroke positions needs further processing.
In the embodiment of the invention, after the initial character area is determined, it can be divided into at least two image blocks according to the left-right boundary spacing of the characters in the initial character area. The left-right boundary spacing of a character is the stroke thickness of the character; because the apparent thickness differs with shooting distance and angle, a distance transform can be applied to the initial character area and the left-right boundary spacing determined from the distance values obtained by the distance transform. As shown in fig. 3, which is a schematic diagram of the left-right boundary spacing of a character provided by an embodiment of the present invention, the thickness of the character "9", that is, the white straight-line portion in fig. 3, is used as the width of the character. Since the distance transform is a technology well known in the art, the details are not repeated here.
In the embodiment of the invention, the initial character area can be divided into a plurality of image blocks according to the width of the character, and the square area with the left and right boundary distance of the character as the side length is taken as the image block.
Of course, the present invention also supports the use of a width of a preset size as a side length of an image block, that is, any value size may be preset as a side length of an image block, and the initial character area is divided into a plurality of image blocks according to the side length, so as to execute a subsequent analysis process for pixel points in the image blocks, which is not limited.
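For illustration only, and not as part of the patented embodiment, the following Python (OpenCV) sketch shows one possible implementation of this block division: the stroke width is approximated from a distance transform of a binarized character mask, and the initial character area is tiled into square image blocks of that side length. The Otsu binarization, the factor of two applied to the distance-transform maximum, and the name split_into_blocks are assumptions of the sketch.

import cv2
import numpy as np

def split_into_blocks(gray_region):
    # Binarize so that character strokes become foreground (white); assumes dark strokes.
    _, mask = cv2.threshold(gray_region, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Distance of every stroke pixel to the nearest background pixel;
    # twice the largest distance value approximates the left-right boundary spacing.
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    side = max(2, int(round(2.0 * dist.max())))
    # Tile the initial character area into square blocks of that side length.
    h, w = gray_region.shape
    blocks = {}
    for row, y in enumerate(range(0, h - side + 1, side)):
        for col, x in enumerate(range(0, w - side + 1, side)):
            blocks[(row, col)] = gray_region[y:y + side, x:x + side]
    return side, blocks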
S102: determining the gradient value mean value and gradient direction difference of edge pixel points in an image block, determining the importance degree of the image block according to the gradient value mean value and gradient direction difference, determining the image block to be processed from the image block according to the importance degree of other image blocks in the neighborhood of the image block, and determining the importance degree adjustment coefficient of the image block to be processed.
Further, in an embodiment of the present invention, determining a gradient value mean value and a gradient direction difference of edge pixel points in an image block includes: performing edge detection processing on the image block to obtain edge pixel points, and calculating gradient amplitude and gradient direction of the edge pixel points; calculating the average value of the gradient amplitude values of all edge pixel points in the image block as the average value of the gradient values; and determining the other two edge pixel points closest to any edge pixel point as adjacent pixel points, calculating the average value of the gradient directions of the adjacent pixel points as the average value of the adjacent directions, and calculating the difference between the gradient directions of any edge pixel point and the average value of the adjacent directions as the gradient direction difference.
In the embodiment of the invention, an edge detection operator Sobel operator can be used for carrying out edge processing on the image block to obtain the edge pixel point, and the gradient amplitude and the gradient direction of the edge pixel point are calculated.
In the embodiment of the invention, the average value of the gradient magnitudes of all edge pixel points in the image block is calculated as the average value of the gradient values, that is, the average value of the gradient magnitudes of the edge pixel points is counted as the average value of the gradient values.
In the embodiment of the invention, two other edge pixel points closest to any edge pixel point can be determined as adjacent pixel points, then the average value of the gradient directions of the adjacent pixel points is taken as the average value of the adjacent directions, and the difference between the gradient directions of the edge pixel points and the average value of the adjacent directions is calculated as the gradient direction difference of the edge pixel points.
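For illustration only, a possible Python sketch of these per-block edge statistics follows. Edge pixel points are taken here as pixels whose Sobel gradient magnitude exceeds a simple threshold, and the gradient direction difference is taken as an absolute difference; both choices are assumptions of the sketch, not requirements of the patent.

import cv2
import numpy as np

def edge_statistics(block):
    # Sobel gradients of the image block.
    gx = cv2.Sobel(block, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(block, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)
    # Edge pixel points: here simply pixels with a large gradient magnitude.
    threshold = 0.5 * magnitude.max()
    ys, xs = np.nonzero(magnitude > threshold)
    if len(ys) < 3:
        return 0.0, np.zeros(0)          # too few edge pixels to analyse
    gradient_value_mean = magnitude[ys, xs].mean()
    pts = np.stack([ys, xs], axis=1).astype(np.float64)
    dirs = direction[ys, xs]
    diffs = np.empty(len(pts))
    for i in range(len(pts)):
        # The two other edge pixel points closest to edge pixel i.
        d = np.linalg.norm(pts - pts[i], axis=1)
        d[i] = np.inf
        nearest = np.argsort(d)[:2]
        adjacent_direction_mean = dirs[nearest].mean()
        diffs[i] = abs(dirs[i] - adjacent_direction_mean)
    return gradient_value_mean, diffs    # gradient value mean and per-pixel direction differences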
In the embodiment of the invention, determining the importance degree of the image block according to the gradient value mean value and the gradient direction difference comprises the following steps: normalizing the gradient value mean value to obtain an amplitude normalized value; calculating gradient direction differences and values of all edge pixel points in the image block, and carrying out normalization processing on the gradient direction differences and values to obtain a direction normalization value; and calculating the product of the amplitude normalization value and the direction normalization value as the importance degree.
In the embodiment of the invention, after the gradient value mean value and the gradient direction difference are obtained, the importance degree of the corresponding image block can be determined according to the gradient value mean value and the gradient direction difference, and an importance degree calculation formula can be used for obtaining the importance degree, wherein the corresponding calculation formula is as follows:
Z_i = Norm(G_i) × Norm( Σ_j D_(i,j) )
where (with notation reconstructed here, since the published formula is given only as an image) Z_i denotes the importance degree of the i-th image block, i is the index of the image block, G_i is the gradient value mean of the i-th image block, j = 1, ..., N_i indexes the edge pixel points in the i-th image block, N_i is the total number of edge pixel points in the i-th image block, D_(i,j) is the gradient direction difference of the j-th edge pixel point, namely the difference between its gradient direction and the adjacent direction mean of its adjacent pixel points, Σ_j D_(i,j) is the gradient direction difference sum, and Norm denotes normalization.
As can be seen from the importance degree calculation formula, the larger the gradient value mean, the higher the probability that the corresponding image block lies on a character edge, and the greater the importance degree; the larger the gradient direction difference sum, the higher the degree of texture bending in the corresponding image block, and the greater the importance degree. It will be appreciated that, since character edges are mostly curved, the higher the degree of texture bending in an image block, the more character edge texture the block contains, and the greater its importance degree. By normalizing the gradient value mean and the gradient direction difference sum respectively, both are mapped into the [0, 1] range, and the importance degree is obtained as their final product, so that regions with more edge texture and a greater bending degree receive a greater importance degree; the importance degree of each image block is thereby effectively determined.
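As an illustration, the importance degree described above could be computed per image block as follows; min-max normalization over all image blocks is an assumption of the sketch, the publication only stating that a normalization is applied.

import numpy as np

def min_max_norm(values):
    # Min-max normalization into [0, 1]; returns zeros for a constant input.
    span = values.max() - values.min()
    return np.zeros_like(values, dtype=float) if span == 0 else (values - values.min()) / span

def importance_degrees(gradient_means, direction_diff_sums):
    # Product of the normalized gradient value mean and the normalized
    # gradient direction difference sum, one value per image block.
    gradient_means = np.asarray(gradient_means, dtype=float)
    direction_diff_sums = np.asarray(direction_diff_sums, dtype=float)
    return min_max_norm(gradient_means) * min_max_norm(direction_diff_sums)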
In the embodiment of the invention, determining the image block to be processed from the image blocks according to the importance degree of other image blocks in the neighborhood of the image block comprises the following steps: determining other image blocks in eight adjacent areas of the image block as adjacent area image blocks, and respectively obtaining the importance degree of the adjacent area image blocks as adjacent area importance degree; calculating the number of neighborhood image blocks with the neighborhood importance degree larger than a preset importance degree threshold as a target number, and taking the image blocks with the target number larger than the preset number threshold as image blocks to be processed.
Fig. 4 is a schematic diagram of an eight neighborhood provided by an embodiment of the present invention, in which the black area is the image block and the surrounding white areas are the neighborhood image blocks in its eight neighborhood. The importance degree of each neighborhood image block is taken as its neighborhood importance degree; when a neighborhood importance degree is greater than a preset importance degree threshold, the neighborhood image block is marked, and the number of marked neighborhood image blocks is counted as the target number. Preferably, the preset importance degree threshold may be, for example, 0.7, that is, the number of neighborhood image blocks whose neighborhood importance degree is greater than 0.7 is counted as the target number.
The image blocks to be processed are image blocks with complex texture or containing the characteristics of faded positions. It can be understood that, because the texture is complex or the faded-position characteristics are obvious, such blocks contain more edge texture with a greater bending degree, that is, a greater importance degree.
Since the fading region has irregular shape and complex texture, the edge textures of the fading region have large difference, as shown in fig. 5, fig. 5 is a schematic diagram of the fading character provided by an embodiment of the present invention, and as can be seen from fig. 5, the edge textures of the image block to be processed show irregular distribution.
In the embodiment of the invention, image blocks whose target number is greater than the preset number threshold are taken as the image blocks to be processed. Preferably, the preset number threshold may be 5, that is, an image block is taken as an image block to be processed when the target number of its eight neighborhood exceeds 5. It can be understood that faded regions and the regions corresponding to characters with complex texture tend to be distributed in concentrated clusters; therefore, by setting the importance degree threshold and the number threshold, an image block is taken as an image block to be processed only when many of its eight neighborhood image blocks have a large importance degree, so that the image blocks to be processed are effectively distinguished and can conveniently be subjected to subsequent image processing.
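A possible Python sketch of this selection step is given below for illustration; it assumes the per-block importance degrees have been arranged on the block grid and uses the embodiment's example thresholds of 0.7 and 5.

import numpy as np

def blocks_to_process(importance_grid, importance_threshold=0.7, count_threshold=5):
    # importance_grid: 2-D array of per-block importance degrees laid out on the block grid.
    rows, cols = importance_grid.shape
    selected = np.zeros_like(importance_grid, dtype=bool)
    target_number = np.zeros_like(importance_grid, dtype=int)
    for r in range(rows):
        for c in range(cols):
            count = 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == 0 and dc == 0:
                        continue
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols \
                            and importance_grid[rr, cc] > importance_threshold:
                        count += 1
            target_number[r, c] = count
            selected[r, c] = count > count_threshold
    return selected, target_number   # boolean mask of blocks to be processed and per-block target number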
In the embodiment of the invention, determining the importance degree adjustment coefficient of the image block to be processed comprises the following steps: selecting a block from the neighborhood image blocks as a reference image block, wherein the importance degree of the reference image block is a reference importance degree; determining the neighborhood importance degree with the smallest difference from the reference importance degree as the smallest neighborhood importance degree, and calculating the difference between the reference importance degree and the smallest neighborhood importance degree as the reference importance degree difference of the reference image block; traversing all the neighborhood image blocks, and calculating the sum of the reference importance degree differences of all the neighborhood image blocks as the neighborhood importance degree difference; and calculating the product of the target number and the neighborhood importance degree difference as an adjustment coefficient influence factor, and carrying out normalization processing on the adjustment coefficient influence factor to obtain an importance degree adjustment coefficient.
In the embodiment of the invention, the neighborhood importance degree with the smallest difference from the reference importance degree is determined as the minimum neighborhood importance degree. For example, if the reference importance degree is 0.6 and, among the neighborhood importance degrees of the remaining eight-neighborhood image blocks, the value closest to 0.6 is 0.8, then the minimum neighborhood importance degree is 0.8 and the reference importance degree difference is 0.2.
All the neighborhood image blocks are then traversed, the reference importance degree difference of each neighborhood image block is calculated in the same way, and the sum of these reference importance degree differences is taken as the neighborhood importance degree difference; in this example the neighborhood importance degree difference is 0.3.
In the embodiment of the invention, the product of the target number and the neighborhood importance degree difference is taken as the adjustment coefficient influence factor, and the adjustment coefficient influence factor is normalized to obtain the importance degree adjustment coefficient; that is, the importance degree adjustment coefficient can be obtained with a calculation of the following form:
α = Norm( M × Σ_k |c_k − c_k*| )
where (with notation reconstructed here, since the published formula is given only as an image) α denotes the importance degree adjustment coefficient, M the target number, k = 1, ..., 8 indexes the neighborhood image blocks in the eight neighborhood, c_k is the reference importance degree of the k-th neighborhood image block, c_k* the corresponding minimum neighborhood importance degree, |c_k − c_k*| the reference importance degree difference of the k-th neighborhood image block, the sum over k the neighborhood importance degree difference of the image block to be processed, and Norm denotes normalization.
According to the calculation formula of the importance degree adjustment coefficient, the larger the target number, the larger the adjustment coefficient, and the larger the neighborhood importance degree difference, the larger the adjustment coefficient. A large neighborhood importance degree difference indicates that the character edge texture varies strongly in the region corresponding to the image block to be processed, so the importance degree adjustment coefficient can serve as a measure of the complexity of the character texture in the image block to be processed: the more complex the texture, the larger the coefficient. In the subsequent image processing, the image block to be processed can then be processed effectively according to its importance degree adjustment coefficient, which ensures the accuracy and reliability of the subsequent image processing.
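For illustration, the influence factor and importance degree adjustment coefficient described above could be computed as follows; min-max normalization over the image blocks to be processed is an assumption of the sketch.

import numpy as np

def adjustment_influence_factor(neighbourhood_importances, target_number):
    # neighbourhood_importances: the eight neighbourhood importance degrees of one
    # image block to be processed; target_number: its target number from above.
    values = np.asarray(neighbourhood_importances, dtype=float)
    total_difference = 0.0
    for k, reference in enumerate(values):
        others = np.delete(values, k)
        closest = others[np.argmin(np.abs(others - reference))]   # minimum neighbourhood importance
        total_difference += abs(reference - closest)              # reference importance difference
    return target_number * total_difference                       # influence factor before normalization

def adjustment_coefficients(influence_factors):
    # Normalize the influence factors of all image blocks to be processed.
    factors = np.asarray(influence_factors, dtype=float)
    span = factors.max() - factors.min()
    return np.zeros_like(factors) if span == 0 else (factors - factors.min()) / span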
S103: the method comprises the steps of obtaining original transmissivity of a gray level image of a container, obtaining self-adaptive transmissivity of an image block to be processed according to importance degree, importance degree adjustment coefficient and original transmissivity, and carrying out image processing on the image block to be processed according to the self-adaptive transmissivity to obtain a target image block.
In the embodiment of the invention, the method for acquiring the original transmissivity of the gray level image of the container comprises the following steps: and acquiring the original transmissivity of the gray image of the container based on a dark channel prior algorithm.
In the embodiment of the invention, the original transmissivity of the gray image of the container can be obtained based on the dark channel prior algorithm, wherein the original transmissivity is the transmissivity of the whole surface of the container which is directly obtained, and the dark channel prior algorithm is a well-known algorithm in the art and is not repeated here.
Of course, in other embodiments of the present invention, the preset transmittance may be used as the original transmittance, for example, the original transmittance at the current moment calculated according to the weather information, or the original transmittance of the gray image of the container may be obtained by using any of a plurality of other possible implementations, which is not limited.
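For illustration, the following sketch computes an original transmittance with the standard dark channel prior; the patch size, the retention factor omega and the atmospheric-light estimate are conventional choices and are not taken from the patent.

import cv2
import numpy as np

def dark_channel(img, patch=15):
    # Minimum over colour channels (if any) followed by a local minimum filter.
    min_channel = img.min(axis=2) if img.ndim == 3 else img
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_channel, kernel)

def original_transmittance(img, omega=0.95, patch=15):
    img = img.astype(np.float64) / 255.0
    dark = dark_channel(img, patch)
    # Atmospheric light: the brightest intensity among the top 0.1% dark-channel pixels.
    flat = dark.ravel()
    idx = np.argsort(flat)[-max(1, flat.size // 1000):]
    atmos = max(float(img.reshape(img.shape[0] * img.shape[1], -1)[idx].max()), 1e-3)
    # Transmittance estimate of the dark channel prior.
    t = 1.0 - omega * dark_channel(img / atmos, patch)
    return np.clip(t, 0.1, 1.0), atmos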
It can be understood that, owing to rain and fog, uneven illumination and other factors, the texture-complex regions are more blurred, and different texture-complex regions may be affected to different degrees. If a uniform transmittance were used for processing, the processing effect would differ from region to region and the reliability of the subsequent container surface information identification would be insufficient. The invention therefore adjusts the transmittance adaptively for the texture-complex regions, ensuring their processing accuracy.
In the embodiment of the invention, the obtaining of the self-adaptive transmissivity of the image block to be processed according to the importance degree, the importance degree adjustment coefficient and the original transmissivity comprises the following steps: obtaining the adaptive transmittance by using an adaptive transmittance formula, wherein the corresponding formula is as follows:
(The formula is given only as an image in the original publication.) In the formula, the symbols denote, respectively: the adaptive transmittance, the original transmittance, the importance degree, the importance degree adjustment coefficient, and the normalization operation.
According to the self-adaptive transmissivity formula, when the importance degree is larger, the self-adaptive transmissivity is smaller, and when the importance degree adjustment coefficient is larger, the self-adaptive transmissivity is smaller, and it can be understood that the greater the importance degree is, the more edge textures in the image block to be processed are indicated, the bending degree is higher, and the greater the importance degree adjustment coefficient is, the character edge texture change in the image block to be processed is indicated to be larger, so that the image block to be processed needs to be processed more finely, and the self-adaptive transmissivity of the image block to be processed is obtained according to the importance degree, the importance degree adjustment coefficient and the original transmissivity, so that the transmissivity of the image block to be processed can be effectively represented by the self-adaptive transmissivity, and the reliability of the subsequent image processing step is further ensured.
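Because the adaptive transmittance formula is published only as an image, the sketch below is an illustrative stand-in that merely reproduces the stated monotonic behaviour (a larger importance degree or adjustment coefficient gives a smaller adaptive transmittance, and the result stays in a usable range); it is not the patent's exact formula.

import numpy as np

def adaptive_transmittance(t_original, importance, adjustment_coefficient):
    # Illustrative damping: the transmittance shrinks as the importance degree and
    # the adjustment coefficient grow, then is clipped to a usable range.
    scale = 1.0 / (1.0 + importance * adjustment_coefficient)
    return np.clip(t_original * scale, 0.1, 1.0)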
In the embodiment of the invention, image processing is carried out on an image block to be processed according to the self-adaptive transmissivity, and a target image block is obtained, which comprises the following steps: and carrying out dark channel prior processing on the image block to be processed according to the self-adaptive transmissivity to obtain a target image block.
According to prior knowledge, a greater transmittance indicates thinner rain and fog in the corresponding region, that is, a better imaging effect and a smaller change before and after dark channel prior processing; a smaller transmittance indicates thicker rain and fog in the corresponding region, that is, a greater change before and after dark channel prior processing.
That is, the adaptive transmittance is used as the transmittance corresponding to the image block to be processed, and the image block to be processed is subjected to dark channel prior processing to generate the target image block.
Dark channel prior processing is a deblurring technique commonly used in scenes such as defogging; performing dark channel prior processing on the image block to be processed according to its adaptive transmittance ensures that the block is processed at a suitable adaptive transmittance and effectively improves the accuracy of the obtained target image block.
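For illustration, a target image block could then be produced by applying the dark channel prior radiance recovery with the block's adaptive transmittance, as in the following sketch; atmos is the atmospheric light estimated together with the original transmittance above.

import numpy as np

def dehaze_block(block, transmittance, atmos):
    # Scene radiance recovery of the dark channel prior: J = (I - A) / t + A.
    block = block.astype(np.float64) / 255.0
    t = np.maximum(transmittance, 0.1)              # avoid division by values near zero
    radiance = (block - atmos) / t + atmos
    return np.clip(radiance * 255.0, 0, 255).astype(np.uint8)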
S104: and processing the initial character area based on the original transmissivity to obtain a target character area, replacing the target character area at the corresponding position by using the target image block to obtain a character area to be detected, detecting character information in the character area to be detected, and taking the character information as container surface information.
Further, image processing is performed on the initial character region based on the original transmittance to obtain a target character region, including: and carrying out dark channel prior processing on the initial character area based on the original transmissivity to obtain a target character area.
In the embodiment of the invention, the original character area can be processed based on the original transmissivity by using a dark channel prior processing mode to obtain the target character area.
In the embodiment of the invention, the position corresponding to the target image block in the target character area can be determined, the target character area at the corresponding position is replaced by the processed target image block to obtain the character area to be tested, and it can be understood that the target image block is used for replacing the target character area at the corresponding position, namely, the adaptive transmissivity corresponding to the image block to be processed is used for carrying out dark channel priori processing, and other areas except the image block to be processed in the initial character area are used for carrying out dark channel priori processing by using the original transmissivity to obtain the character area to be tested, so that the overall processing effect of the character area to be tested is effectively improved, and the reliability of the character area to be tested is enhanced.
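For illustration, assembling the character area to be detected can be sketched as follows, pasting each target image block back at its grid position in the target character area; the dictionary layout of processed_blocks matches the earlier sketches and is an assumption.

import numpy as np

def assemble_test_region(target_character_area, processed_blocks, side):
    # target_character_area: initial character area processed with the original
    # transmittance; processed_blocks: {(row, col): target image block} for the
    # blocks to be processed; side: the block side length from split_into_blocks.
    result = target_character_area.copy()
    for (row, col), block in processed_blocks.items():
        y, x = row * side, col * side
        result[y:y + side, x:x + side] = block
    return result   # the character area to be detected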
In the embodiment of the invention, the character information in the character area to be detected can be detected by using a preset neural network or using a template matching algorithm and other modes, and the character information is used as the container surface information, so that the method is not limited.
In summary, since the image recognition is directly performed on the area with complex texture, the recognition accuracy is not high, and the reliability of the detection of the surface information of the container is insufficient, in order to effectively improve the recognition accuracy of the texture complex position, the image block to be processed is recognized, and the adaptive transmissivity is set, so that the image block to be processed is subjected to the adaptive processing, and the reliability of the detection of the surface information of the container is effectively enhanced. The initial character area is divided into at least two image blocks according to the width of the characters in the initial character area, so that the initial character area can be divided according to the set size of the image blocks, and the recognition rationality of the image blocks to be processed subsequently is ensured. The importance degree of the image block is determined through the gradient value mean value and gradient direction difference of the edge pixel points in the image block, and the image block to be processed is determined, so that the image features in the neighborhood of the image block can be effectively combined, the image block to be processed is determined according to the image features, and the accurate identification of the image block to be processed is ensured. The self-adaptive transmissivity of the image block to be processed is obtained through the importance degree, the importance degree adjusting coefficient and the original transmissivity, the image block to be processed is processed according to the self-adaptive transmissivity, the self-adaptive transmissivity of the image block to be processed can be accurately determined, the blurring effect caused by factors such as rain and fog, uneven illumination and the like is effectively eliminated according to the self-adaptive transmissivity, and the processing accuracy of the image block to be processed is guaranteed. The target image block is used for replacing the target character area at the corresponding position to obtain the character area to be detected, character information in the character area to be detected is detected to be container surface information, the self-adaptive transmissivity can be used for processing the texture complex area, the accuracy of the character information in the character area to be detected is ensured, and the accuracy and the reliability of the container surface information are further ensured. The invention can effectively improve the reliability of the detection of the surface information of the container.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.

Claims (9)

1. A method for detecting surface information of a container based on image recognition, the method comprising:
acquiring a container gray level image, detecting the container gray level image to obtain an initial character area, and dividing the initial character area into at least two image blocks according to the width of characters in the initial character area;
determining a gradient value average value and gradient direction difference of edge pixel points in the image block, determining importance degree of the image block according to the gradient value average value and the gradient direction difference, determining an image block to be processed from the image block according to the importance degree of other image blocks in the image block neighborhood, and determining an importance degree adjustment coefficient of the image block to be processed;
acquiring the original transmissivity of the gray level image of the container, acquiring the self-adaptive transmissivity of the image block to be processed according to the importance degree, the importance degree adjustment coefficient and the original transmissivity, and performing image processing on the image block to be processed according to the self-adaptive transmissivity to obtain a target image block;
and carrying out image processing on the initial character area based on the original transmissivity to obtain a target character area, using the target image block to replace the target character area at a corresponding position to obtain a character area to be detected, detecting character information in the character area to be detected, and taking the character information as container surface information.
2. The method of claim 1, wherein determining the gradient value mean and gradient direction difference for edge pixels in the image block comprises:
performing edge detection processing on the image block to obtain edge pixel points, and calculating gradient amplitude values and gradient directions of the edge pixel points;
calculating the average value of the gradient magnitudes of all the edge pixel points in the image block as the average value of the gradient values;
and determining other two edge pixel points closest to any edge pixel point as adjacent pixel points, calculating the average value of gradient directions of the adjacent pixel points as an adjacent direction average value, and calculating the difference between the gradient directions of any edge pixel point and the adjacent direction average value as the gradient direction difference.
3. The method of claim 1, wherein said determining the importance of the image block based on the gradient value mean and the gradient direction difference comprises:
normalizing the gradient value mean value to obtain an amplitude normalized value;
calculating gradient direction differences and values of all the edge pixel points in the image block, and carrying out normalization processing on the gradient direction differences and values to obtain direction normalization values;
and calculating the product of the amplitude normalization value and the direction normalization value as the importance degree.
4. The method of claim 1, wherein said determining an image block to be processed from said image blocks based on said importance levels of other image blocks within said image block neighborhood comprises:
determining other image blocks in the eight adjacent areas of the image block as adjacent area image blocks, and respectively obtaining the importance degree of the adjacent area image blocks as adjacent area importance degree;
and calculating the number of the neighborhood image blocks with the neighborhood importance degree larger than a preset importance degree threshold as a target number, and taking the image blocks with the target number larger than the preset number threshold as the image blocks to be processed.
5. The method of claim 4, wherein determining the importance adjustment factor for the image block to be processed comprises:
selecting a block from the neighborhood image blocks as a reference image block, wherein the importance degree of the reference image block is a reference importance degree;
determining a neighborhood importance degree with the smallest difference from the reference importance degree as a smallest neighborhood importance degree, and calculating the difference between the reference importance degree and the smallest neighborhood importance degree as a reference importance degree difference of the reference image block;
traversing all the neighborhood image blocks, and calculating the sum of the reference importance degree differences of all the neighborhood image blocks as neighborhood importance degree differences;
calculating the product of the target quantity and the neighborhood importance degree difference as an adjustment coefficient influence factor, and carrying out normalization processing on the adjustment coefficient influence factor to obtain the importance degree adjustment coefficient.
6. The method of claim 1, wherein the obtaining the adaptive transmittance of the image block to be processed according to the importance level, the importance level adjustment coefficient, and the original transmittance comprises:
the adaptive transmittance is obtained by using an adaptive transmittance formula, and the corresponding formula is as follows:
(The formula is given only as an image in the original publication.) In the formula, the symbols denote, respectively: the adaptive transmittance, the original transmittance, the importance degree, the importance degree adjustment coefficient, and the normalization operation.
7. The method of claim 1, wherein the performing image processing on the image block to be processed according to the adaptive transmittance to obtain a target image block comprises:
and carrying out dark channel prior processing on the image block to be processed according to the self-adaptive transmissivity to obtain the target image block.
8. The method of claim 1, wherein the detecting the container gray scale image to obtain an initial character region comprises:
and carrying out semantic segmentation processing on the container gray level image by using a neural network model, determining character pixel points, and taking the area to which the character pixel points belong as the initial character area.
9. The method of claim 1, wherein the image processing the initial character area based on the original transmittance to obtain a target character area comprises:
and carrying out dark channel prior processing on the initial character area based on the original transmissivity to obtain the target character area.
CN202310390737.8A 2023-04-13 2023-04-13 Container surface information detection method based on image recognition Active CN116110053B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310390737.8A CN116110053B (en) 2023-04-13 2023-04-13 Container surface information detection method based on image recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310390737.8A CN116110053B (en) 2023-04-13 2023-04-13 Container surface information detection method based on image recognition

Publications (2)

Publication Number Publication Date
CN116110053A (en) 2023-05-12
CN116110053B (en) 2023-07-21

Family

ID=86264163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310390737.8A Active CN116110053B (en) 2023-04-13 2023-04-13 Container surface information detection method based on image recognition

Country Status (1)

Country Link
CN (1) CN116110053B (en)

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170116709A1 (en) * 2007-12-07 2017-04-27 Sony Corporation Image processing apparatus, moving image reproducing apparatus, and processing method and program therefor
CN103279931A (en) * 2013-06-03 2013-09-04 中国人民解放军国防科学技术大学 Defogged image denoising method based on transmissivity
CN106203237A (en) * 2015-05-04 2016-12-07 杭州海康威视数字技术股份有限公司 The recognition methods of container-trailer numbering and device
CN107067375A (en) * 2016-12-23 2017-08-18 四川大学 A kind of image defogging method based on dark channel prior and marginal information
CN107330433A (en) * 2017-05-17 2017-11-07 北京捷通华声科技股份有限公司 Image processing method and device
CN108416745A (en) * 2018-02-02 2018-08-17 中国科学院西安光学精密机械研究所 A kind of image adaptive defogging Enhancement Method with color constancy
CN108932700A (en) * 2018-05-17 2018-12-04 常州工学院 Self-adaption gradient gain underwater picture Enhancement Method based on target imaging model
CN109523480A (en) * 2018-11-12 2019-03-26 上海海事大学 A kind of defogging method, device, computer storage medium and the terminal of sea fog image
CN109949247A (en) * 2019-03-26 2019-06-28 常州工学院 A kind of gradient field adaptive gain underwater picture Enhancement Method based on YIQ space optics imaging model
CN110717869A (en) * 2019-09-11 2020-01-21 哈尔滨工程大学 Underwater turbid image sharpening method
CN111192213A (en) * 2019-12-27 2020-05-22 杭州雄迈集成电路技术股份有限公司 Image defogging adaptive parameter calculation method, image defogging method and system
CN111639542A (en) * 2020-05-06 2020-09-08 中移雄安信息通信科技有限公司 License plate recognition method, device, equipment and medium
CN115147823A (en) * 2021-03-30 2022-10-04 锦航能源科技(天津)有限公司 Efficient and accurate license plate recognition method
WO2022237811A1 (en) * 2021-05-11 2022-11-17 北京字跳网络技术有限公司 Image processing method and apparatus, and device
CN114219732A (en) * 2021-12-15 2022-03-22 大连海事大学 Image defogging method and system based on sky region segmentation and transmissivity refinement
CN114373147A (en) * 2021-12-24 2022-04-19 辽宁工程技术大学 Detection method for low-texture video license plate
CN114155173A (en) * 2022-02-10 2022-03-08 山东信通电子股份有限公司 Image defogging method and device and nonvolatile storage medium
CN115830585A (en) * 2022-12-05 2023-03-21 浙江海洋大学 Port container number identification method based on image enhancement
CN115861996A (en) * 2023-02-16 2023-03-28 青岛新比特电子科技有限公司 Data acquisition method and system based on Internet of things perception and AI neural network
CN115908428A (en) * 2023-03-03 2023-04-04 山东大学齐鲁医院 Image processing method and system for adjusting finger retractor

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JIANBO XU et al.: "A fast video haze removal algorithm via mixed transmissivity optimisation", 《INTERNATIONAL JOURNAL OF EMBEDDED SYSTEMS》, vol. 11, no. 1, pages 84-93 *
YI-HSUAN LAI et al.: "Single-Image Dehazing via Optimal Transmission Map Under Scene Priors", 《IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY》, vol. 25, no. 1, pages 1-14, XP011569358, DOI: 10.1109/TCSVT.2014.2329381 *
张溯: "Design and Implementation of a License Plate Recognition System for Hazy Weather" (针对雾霾天气的车牌识别系统的设计与实现), 《中国优秀硕士学位论文全文数据库 工程科技II辑》 (China Masters' Theses Full-text Database, Engineering Science and Technology II), vol. 2023, no. 2, pages 034-1413 *
项胤: "Research on Detection and Recognition of Traffic Signs at Night in Foggy Weather Based on Image Processing" (基于图像处理的夜间雾天交通路标牌检测识别技术研究), 《中国优秀硕士学位论文全文数据库 工程科技II辑》 (China Masters' Theses Full-text Database, Engineering Science and Technology II), vol. 2023, no. 1, pages 035-971 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116429790A (en) * 2023-06-14 2023-07-14 山东力乐新材料研究院有限公司 Wooden packing box production line intelligent management and control system based on data analysis
CN116429790B (en) * 2023-06-14 2023-08-15 山东力乐新材料研究院有限公司 Wooden packing box production line intelligent management and control system based on data analysis
CN116452467A (en) * 2023-06-16 2023-07-18 山东曙岳车辆有限公司 Container real-time positioning method based on laser data
CN116452467B (en) * 2023-06-16 2023-09-22 山东曙岳车辆有限公司 Container real-time positioning method based on laser data
CN116596921A (en) * 2023-07-14 2023-08-15 济宁市质量计量检验检测研究院(济宁半导体及显示产品质量监督检验中心、济宁市纤维质量监测中心) Method and system for sorting incinerator slag
CN116596921B (en) * 2023-07-14 2023-10-20 济宁市质量计量检验检测研究院(济宁半导体及显示产品质量监督检验中心、济宁市纤维质量监测中心) Method and system for sorting incinerator slag
CN116612126A (en) * 2023-07-21 2023-08-18 青岛国际旅行卫生保健中心(青岛海关口岸门诊部) Container disease vector biological detection early warning method based on artificial intelligence
CN116612126B (en) * 2023-07-21 2023-09-19 青岛国际旅行卫生保健中心(青岛海关口岸门诊部) Container disease vector biological detection early warning method based on artificial intelligence
CN116703899B (en) * 2023-08-03 2023-10-24 青岛义龙包装机械有限公司 Bag type packaging machine product quality detection method based on image data

Also Published As

Publication number Publication date
CN116110053B (en) 2023-07-21

Similar Documents

Publication Publication Date Title
CN116110053B (en) Container surface information detection method based on image recognition
CN116168026B (en) Water quality detection method and system based on computer vision
CN116310360B (en) Reactor surface defect detection method
CN107680054B (en) Multi-source image fusion method in haze environment
CN115829883B (en) Surface image denoising method for special-shaped metal structural member
CN114494210B (en) Plastic film production defect detection method and system based on image processing
CN113034452B (en) Weldment contour detection method
CN116664559B (en) Machine vision-based memory bank damage rapid detection method
CN114219805B (en) Intelligent detection method for glass defects
CN111027446B (en) Coastline automatic extraction method of high-resolution image
CN114118144A (en) Anti-interference accurate aerial remote sensing image shadow detection method
CN110009653A (en) Increase limb recognition point sharp picture based on gray level threshold segmentation method and knows method for distinguishing
CN116310845B (en) Intelligent monitoring system for sewage treatment
CN116993731B (en) Shield tunneling machine tool bit defect detection method based on image
CN116152115B (en) Garbage image denoising processing method based on computer vision
CN116630813A (en) Highway road surface construction quality intelligent detection system
CN114764801A (en) Weak and small ship target fusion detection method and device based on multi-vision significant features
CN109829858A (en) A kind of shipborne radar image spilled oil monitoring method based on local auto-adaptive threshold value
CN109101985A (en) It is a kind of based on adaptive neighborhood test image mismatch point to elimination method
CN113705501B (en) Marine target detection method and system based on image recognition technology
CN113673385A (en) Sea surface ship detection method based on infrared image
CN114742849B (en) Leveling instrument distance measuring method based on image enhancement
CN112329674B (en) Icing lake detection method and device based on multi-texture feature fusion
CN115187788A (en) Crop seed automatic counting method based on machine vision
Li et al. Adaptive image enhancement and dynamic-template-matching-based edge extraction method for diamond roller on-machine profile measurement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant