CN115272301A - Automatic cheese defect detection method based on robot - Google Patents

Automatic cheese defect detection method based on robot Download PDF

Info

Publication number
CN115272301A
CN115272301A CN202211144530.4A CN202211144530A CN115272301A CN 115272301 A CN115272301 A CN 115272301A CN 202211144530 A CN202211144530 A CN 202211144530A CN 115272301 A CN115272301 A CN 115272301A
Authority
CN
China
Prior art keywords
texture
reconstruction
pixel point
feature map
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211144530.4A
Other languages
Chinese (zh)
Other versions
CN115272301B (en
Inventor
吴永惠
林静君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Xinshijia Home Textile High Tech Co ltd
Original Assignee
Jiangsu Xinshijia Home Textile High Tech Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Xinshijia Home Textile High Tech Co ltd filed Critical Jiangsu Xinshijia Home Textile High Tech Co ltd
Priority to CN202211144530.4A priority Critical patent/CN115272301B/en
Publication of CN115272301A publication Critical patent/CN115272301A/en
Application granted granted Critical
Publication of CN115272301B publication Critical patent/CN115272301B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467Encoded features or binary features, e.g. local binary patterns [LBP]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/54Extraction of image or video features relating to texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30124Fabrics; Textile; Paper

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a robot-based automatic detection method for cheese defects, and belongs to the technical field of automatic detection of cheese defects. The method comprises the following steps: obtaining the information loss amount of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic map according to each reconstruction key parameter in the reconstruction characteristic sequence and the value of each reconstruction key parameter; obtaining the effective degree of a reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the information loss amount; according to the effective degree, obtaining the corresponding priority of each first texture reconstruction characteristic graph; obtaining a target first texture reconstruction characteristic map corresponding to the target surface image according to the priority; and obtaining the yarn disorder degree of the cheese surface corresponding to the target surface image according to the coordinates of each pixel point in the target first texture reconstruction characteristic diagram and the effective degree of the reconstruction characteristic sequence corresponding to each pixel point. The invention can improve the accuracy of detecting the defects of the cheese.

Description

Automatic cheese defect detection method based on robot
Technical Field
The invention relates to the technical field of automatic detection of cheese defects, in particular to a method for automatically detecting cheese defects based on a robot.
Background
Cheese is a thread wound on a bobbin, including but not limited to yarn, thread tape or the like; slight slippage can occur between yarn layers in the high-speed winding process of the cheese, so that the phenomenon of yarn disorder occurs between the yarn layers; since the cheese usually needs to be unwound for further use, when the yarn layers of the cheese have yarn disorder, the yarn or silk on the cheese may be broken when the cheese is unwound for use, and the normal use of the cheese is influenced.
The existing method for detecting the defects of the cheese is generally based on a manual detection mode, the detection mode has strong subjectivity, can waste a large amount of manual resources, can have the phenomena of missing detection or error detection and the like, and cannot detect the defects of the cheese accurately.
Disclosure of Invention
The invention provides a cheese defect automatic detection method based on a robot, which is used for solving the problem of low accuracy of the existing cheese defect detection, and adopts the following technical scheme:
in a first aspect, an embodiment of the present invention provides a method for automatically detecting defects of cheese based on a robot, including the following steps:
acquiring a target surface image in the unwinding process of the cheese;
obtaining each first texture feature map corresponding to the target surface image and a pixel value of each pixel point in each corresponding first texture feature map according to the neighborhood pixel points with different set numbers corresponding to each pixel point in the target surface image; according to the first texture feature map, obtaining a second texture feature map corresponding to the target surface image and a pixel value of each pixel point in the corresponding second texture feature map;
obtaining an original characteristic sequence corresponding to each pixel point in each first texture characteristic image according to the pixel value of each pixel point in each first texture characteristic image; obtaining a reference characteristic sequence corresponding to each pixel point in the second texture characteristic image according to the pixel value of each pixel point in the second texture characteristic image; obtaining a reconstruction characteristic sequence corresponding to each pixel point in each first texture characteristic image according to the original characteristic sequence and the reference characteristic sequence; obtaining a first texture reconstruction feature map corresponding to each first texture feature map according to the reconstruction feature sequence;
obtaining each reference key parameter and the value of each reference key parameter in the reference characteristic sequence corresponding to each pixel point in the second texture characteristic diagram according to the parameters in the reference characteristic sequence;
obtaining each reconstruction key parameter and each value of the reconstruction key parameter in the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map according to each reference key parameter and each value of the reference key parameter;
obtaining the information loss amount of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to each reference key parameter in each reference characteristic sequence, the value of each reference key parameter, each reconstruction key parameter in the reconstruction characteristic sequence and the value of each reconstruction key parameter;
obtaining the effective degree of a reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the information loss amount; according to the effective degree, obtaining the corresponding priority of each first texture reconstruction characteristic graph;
obtaining a target first texture reconstruction characteristic map corresponding to the target surface image according to the priority; and obtaining the yarn disorder degree of the cheese surface corresponding to the target surface image according to the coordinates of each pixel point in the target first texture reconstruction characteristic diagram and the effective degree of the reconstruction characteristic sequence corresponding to each pixel point.
Has the advantages that: according to the method, each reference key parameter and each value of the reference key parameter in each reference characteristic sequence and each reconstruction key parameter and each value of the reconstruction key parameter in each reconstruction characteristic sequence are used as a basis for obtaining the information loss amount corresponding to the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram; taking the information loss amount corresponding to the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram as a basis for obtaining the effective degree of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram; taking the effective degree as a basis for obtaining the priority corresponding to each first texture reconstruction feature map; taking the priority as a basis for obtaining a target first texture reconstruction feature map corresponding to the target surface image; taking the coordinates of each pixel point in the target first texture reconstruction characteristic diagram and the effective degree corresponding to each pixel point as a basis for obtaining the yarn disorder degree of the cheese surface corresponding to the target surface image; the method for detecting the defects of the cheese is an automatic processing method, and compared with a manual detection mode, the method can improve the accuracy of detecting the defects of the cheese.
Preferably, according to the neighborhood pixel points with different set numbers corresponding to each pixel point in the target surface image, obtaining each first texture feature map corresponding to the target surface image and the pixel value of each pixel point in each corresponding first texture feature map; according to the first texture feature map, a method for obtaining a second texture feature map corresponding to the target surface image comprises the following steps:
obtaining each first texture feature map corresponding to the target surface image under the neighborhood pixel points with different set numbers in the same size neighborhood of each pixel point in the target surface image and the LBP value corresponding to each pixel point on each corresponding first texture feature map by using an LBP algorithm; recording the LBP value as the pixel value of each pixel point on each first texture feature map;
and marking a first texture feature map corresponding to the target surface image under the neighborhood pixel points with the maximum set number in the same size neighborhood of each pixel point in the target surface image as a second texture feature map corresponding to the target surface image.
Preferably, according to the pixel value of each pixel point in each first textural feature map, an original feature sequence corresponding to each pixel point in each first textural feature map is obtained; obtaining a reference characteristic sequence corresponding to each pixel point in the second texture characteristic image according to the pixel value of each pixel point in the second texture characteristic image; obtaining a reconstruction characteristic sequence corresponding to each pixel point in each first texture characteristic image according to the original characteristic sequence and the reference characteristic sequence; the method for obtaining the first texture reconstruction feature map corresponding to each first texture feature map according to the reconstruction feature sequence comprises the following steps:
binary conversion is carried out on the pixel value of each pixel point in each first texture feature map, and binary numbers corresponding to each pixel point in each first texture feature map are obtained;
obtaining an original characteristic sequence corresponding to each pixel point in each first texture characteristic image according to the binary number of each pixel point in each first texture characteristic image;
binary conversion is carried out on the pixel value of each pixel point in the second texture feature map, and binary numbers corresponding to each pixel point in the second texture feature map are obtained;
obtaining a reference characteristic sequence corresponding to each pixel point in the second texture characteristic graph according to the binary number corresponding to each pixel point in the second texture characteristic graph;
interpolating the original characteristic sequence by a polynomial interpolation method; recording the original characteristic sequence after the interpolation processing as a target characteristic sequence corresponding to each pixel point in each first texture characteristic image; the length of the target characteristic sequence is the same as that of the reference characteristic sequence;
adjusting each parameter in the target feature sequence according to the difference between the value of each parameter in the target feature sequence and a preset parameter threshold value, and recording the adjusted target feature sequence as a reconstruction feature sequence corresponding to each pixel point in each first texture feature map;
and obtaining a first texture reconstruction feature map corresponding to each first texture feature map according to the reconstruction feature sequence.
Preferably, the method for obtaining, according to the parameters in the reference feature sequence, the reference key parameters and the values of the reference key parameters in the reference feature sequence corresponding to each pixel point in the second texture feature map includes:
recording the value of each parameter in the reference characteristic sequence as a longitudinal coordinate value corresponding to each parameter in the reference characteristic sequence, and recording the sequence of each parameter in the reference characteristic sequence as an abscissa value corresponding to each parameter in the reference characteristic sequence;
obtaining derivative values corresponding to the parameters in the reference characteristic sequence according to the abscissa values and the ordinate values corresponding to the parameters in the reference characteristic sequence;
recording a parameter with a derivative of 0 in the reference characteristic sequence as a reference key parameter in the reference characteristic sequence, and recording a value of the parameter with a derivative of 0 in the reference characteristic sequence as a value of the reference key parameter in the reference characteristic sequence; and the derivative value corresponding to each reference key parameter in the reference characteristic sequence is 0.
Preferably, the method for obtaining the information loss amount of the reconstructed feature sequence corresponding to each pixel point in each first texture reconstructed feature map according to each reference key parameter in each reference feature sequence, the value of each reference key parameter, each reconstructed key parameter in the reconstructed feature sequence, and the value of each reconstructed key parameter in the reconstructed feature sequence, includes:
counting the number of the reference key parameters in the reference characteristic sequence;
recording the value of each parameter in the reconstruction feature sequence as the ordinate value corresponding to each parameter in the reconstruction feature sequence, and recording the sequence of each parameter in the reconstruction feature sequence as the abscissa value corresponding to each parameter in the reconstruction feature sequence;
obtaining a derivative value corresponding to each reconstruction key parameter in the reconstruction feature sequence according to the longitudinal coordinate value and the corresponding abscissa value corresponding to each parameter in the reconstruction feature sequence;
and obtaining the information loss amount of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to each reference key parameter, the value of each reference key parameter, the derivative value corresponding to each reference key parameter, the number of the reference key parameters, each reconstruction key parameter, the value of each reconstruction key parameter and the derivative value corresponding to each reconstruction key parameter in each reconstruction characteristic sequence.
Preferably, the information loss amount of the reconstructed feature sequence corresponding to each pixel point in each first texture reconstructed feature map is calculated according to the following formula:
Figure 668179DEST_PATH_IMAGE001
wherein, the first and the second end of the pipe are connected with each other,
Figure 54161DEST_PATH_IMAGE002
is a first
Figure 568319DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 557003DEST_PATH_IMAGE004
The information loss amount of the reconstruction characteristic sequence corresponding to each pixel point,
Figure 105796DEST_PATH_IMAGE005
is the first texture feature map
Figure 662680DEST_PATH_IMAGE004
The number of reference key parameters in the reference characteristic sequence corresponding to each pixel point,
Figure 24653DEST_PATH_IMAGE006
is as follows
Figure 957974DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 220328DEST_PATH_IMAGE004
The first in the reconstruction characteristic sequence corresponding to each pixel point
Figure 948113DEST_PATH_IMAGE007
The values of the individual reconstruction key parameters are,
Figure 171284DEST_PATH_IMAGE008
is as follows
Figure 767350DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 25156DEST_PATH_IMAGE004
The first in the reconstruction characteristic sequence corresponding to each pixel point
Figure 189421DEST_PATH_IMAGE007
The derivative value corresponding to each reconstructed key parameter,
Figure 24522DEST_PATH_IMAGE009
is the first texture feature map
Figure 34066DEST_PATH_IMAGE004
In the reference feature sequence corresponding to each pixel point
Figure 411958DEST_PATH_IMAGE007
The value of each of the reference key parameters,
Figure 839135DEST_PATH_IMAGE010
is the first texture feature map
Figure 36898DEST_PATH_IMAGE004
In the reference feature sequence corresponding to each pixel point
Figure 850133DEST_PATH_IMAGE007
And the derivative value corresponding to each reference key parameter.
Preferably, the method for obtaining the validity degree of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map according to the information loss amount includes:
normalizing the reference characteristic sequence and the reconstruction characteristic sequence by using a DTW algorithm to obtain the corresponding relation between each parameter in the reconstruction characteristic sequence and each parameter in the reference characteristic sequence;
according to the corresponding relation, grouping each parameter in the reconstruction feature sequence and each parameter in the reference feature sequence to obtain the number of parameters corresponding to each group in the reference feature sequence and the number of reference key parameters in each group, and the number of parameters corresponding to each group in the reconstruction feature sequence and the number of reconstruction key parameters in each group;
obtaining the position offset of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the number of the parameters corresponding to each group in the reference characteristic sequence, the number of the reference key parameters in each group, the number of the parameters corresponding to each group in the reconstruction characteristic sequence and the number of the reconstruction key parameters in each group;
obtaining the error degree of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the position offset and the information loss;
and obtaining the effective degree of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the error degree.
Preferably, the method for obtaining the priority corresponding to each first texture reconstruction feature map according to the validity degree includes:
obtaining a pixel value corresponding to each pixel point in the first texture reconstruction characteristic map corresponding to each first texture characteristic map according to the reconstruction characteristic sequence;
selecting the maximum pixel value in each first texture reconstruction characteristic image and the maximum pixel value in each first texture characteristic image;
calculating the entropy value of the original characteristic sequence corresponding to each pixel point in each first texture characteristic diagram; calculating the entropy value of a reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram;
obtaining the credibility degree of the reconstruction characteristic sequence corresponding to each pixel point in the first texture reconstruction characteristic diagram corresponding to each first texture characteristic diagram according to the pixel value corresponding to each pixel point in each first texture characteristic diagram, the entropy value of the original characteristic sequence corresponding to each pixel point, the maximum pixel value in each first texture characteristic diagram, the pixel value corresponding to each pixel point in the first texture reconstruction characteristic diagram corresponding to each first texture characteristic diagram, the entropy value of the reconstruction characteristic sequence corresponding to each pixel point and the maximum pixel value in each first texture reconstruction characteristic diagram;
and obtaining the corresponding priority of each first texture reconstruction characteristic graph according to the effective degree and the credibility.
Preferably, the confidence level of the reconstruction feature sequence corresponding to each pixel point in the first texture reconstruction feature map corresponding to each first texture feature map is calculated according to the following formula:
Figure 207165DEST_PATH_IMAGE011
wherein the content of the first and second substances,
Figure 713233DEST_PATH_IMAGE012
is as follows
Figure 132713DEST_PATH_IMAGE003
The first texture reconstruction feature map corresponding to the first texture feature map
Figure 325140DEST_PATH_IMAGE004
The confidence level of the reconstructed feature sequence corresponding to each pixel point,
Figure 412044DEST_PATH_IMAGE013
is as follows
Figure 823434DEST_PATH_IMAGE003
First texture feature map
Figure 120423DEST_PATH_IMAGE004
The pixel value corresponding to each pixel point,
Figure 603357DEST_PATH_IMAGE014
is as follows
Figure 544768DEST_PATH_IMAGE003
The maximum pixel value in the first texture feature map,
Figure 487579DEST_PATH_IMAGE015
is a first
Figure 943968DEST_PATH_IMAGE003
The maximum pixel value in the first texture reconstruction feature map corresponding to the first texture feature map,
Figure 761751DEST_PATH_IMAGE016
is as follows
Figure 557669DEST_PATH_IMAGE003
The first texture reconstruction feature map corresponding to the first texture feature map
Figure 904337DEST_PATH_IMAGE004
The pixel value corresponding to each pixel point,
Figure 408874DEST_PATH_IMAGE017
is as follows
Figure 171293DEST_PATH_IMAGE003
A first texture feature map
Figure 87297DEST_PATH_IMAGE004
Entropy values of the original feature sequence corresponding to the individual pixel points,
Figure 11391DEST_PATH_IMAGE018
is as follows
Figure 770268DEST_PATH_IMAGE003
The first texture reconstruction feature map corresponding to the first texture feature map
Figure 70799DEST_PATH_IMAGE004
Entropy values of the reconstruction feature sequences corresponding to the pixel points.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions and advantages of the prior art, the following description will be made
While the drawings necessary for the embodiment or prior art description are briefly described, it should be apparent that the drawings in the following description are merely examples of the invention and that other drawings may be derived from those drawings by those of ordinary skill in the art without inventive step.
FIG. 1 is a flow chart of a method for automatically detecting defects of cheese based on a robot according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, rather than all embodiments, and all other embodiments obtained by those skilled in the art based on the embodiments of the present invention belong to the protection scope of the embodiments of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The embodiment provides a cheese defect automatic detection method based on a robot, which is described in detail as follows:
as shown in figure 1, the automatic detection method for defects of the cheese based on the robot comprises the following steps:
and S001, acquiring a target surface image in the unwinding process of the cheese.
In the embodiment, a camera at the tail end of a robot mechanical arm is used for acquiring the surface image of the cheese in the unwinding process, the cheese in the unwinding machine is enabled to rotate towards the same direction at a constant speed during image acquisition, the camera is kept still, the optical axis of the camera is perpendicular to the rotating shaft of the cheese, and the camera is enabled to look at the side face of the cheese; the method comprises the steps of obtaining a video frame of a cheese rotating for one circle, splicing surface images of all the cheeses in the collected video frame, carrying out graying processing on the spliced images, and recording the images subjected to graying processing after splicing as target surface images in the unwinding process of the cheeses.
In the embodiment, the rotating angle of the cheese and the frame rate of the image collected by the camera are set according to actual conditions; as a further embodiment, the cheese in the unwinding machine can also be stationary, the camera moving synchronously with the thread guide opening.
In this embodiment, the LBP operator is subsequently used to analyze the target surface image, and when the LBP operator is used to generate each texture feature map corresponding to the target surface image, if the gray values of each pixel point in the target surface image are equal, one pixel point in the neighborhood of each pixel point in the target surface image is used to reflect the texture feature of the target surface image; if the target surface image is a binary stripe image, the texture characteristics of the target surface image can be effectively embodied only by using at least two pixel points in the neighborhood of each pixel point in the target surface image; if the target surface image is a sine stripe image, the texture characteristics of the target surface image can be effectively embodied only by at least three pixel points in the neighborhood of each pixel point in the target surface image according to the Nyquist sampling theorem; therefore, the more complex texture needs more neighborhood pixels in the same size neighborhood, that is, the more complex texture needs more sampling points in the same size neighborhood, and the meanings of the neighborhood pixels and the sampling points are the same; in this embodiment, because the complexity of the texture of the target surface image is not determined, when generating each texture feature map corresponding to the target surface image by using the LBP operator, the number of sampling points in the same-size neighborhood corresponding to each pixel in the target surface image needs to be determined, that is, the number of neighborhood pixel points in the same-size neighborhood corresponding to each pixel in the target surface image is determined, and the number of sampling points in the same-size neighborhood corresponding to each pixel in the determined target surface image is different, so that the generated texture features in the texture feature maps are also different; the excessive sampling points have unnecessary calculation amount when generating the corresponding texture feature map of the target surface image, and the insufficient extraction of the texture features when generating the corresponding texture feature map has the phenomenon of inaccurate reflection of the texture features of the target surface image due to the insufficient number of the excessive sampling points; therefore, in this embodiment, each texture feature map corresponding to the target surface image needs to be analyzed to obtain a texture feature map which has a proper number of sampling points and can accurately reflect the texture features of the target surface image, and the texture feature map which has a proper number of sampling points and can accurately reflect the texture features of the target surface image is analyzed to obtain the random yarn degree in the unwinding process of the cheese corresponding to the target surface image.
Step S002, obtaining each first texture feature map corresponding to the target surface image and the pixel value of each pixel point in each corresponding first texture feature map according to the neighborhood pixel points with different set numbers corresponding to each pixel point in the target surface image; and obtaining a second texture feature map corresponding to the target surface image and a pixel value of each pixel point in the corresponding second texture feature map according to the first texture feature map.
In the embodiment, each first texture feature map corresponding to the target surface image is obtained by analyzing neighborhood pixel points with different set numbers corresponding to each pixel point in the target surface image; analyzing each first texture feature map to obtain a second texture feature map corresponding to the target surface image; and taking the obtained first texture feature map and the second texture feature map as a basis for subsequently obtaining an original feature sequence corresponding to each pixel point in each first texture feature map and a reference feature sequence corresponding to each pixel point in the second texture feature map.
In this embodiment, an LBP algorithm is used to obtain a first texture feature map corresponding to a set number of neighborhood pixels in the same size neighborhood of each pixel point in the target surface image, the positions of the neighborhood pixels corresponding to each pixel point used in the same size neighborhood of each pixel point in the target surface image are consistent, and an LBP value corresponding to each pixel point in the corresponding first texture feature map is obtained, and the LBP value corresponding to each pixel point in the first texture feature map is recorded as the pixel value corresponding to each pixel point in the corresponding first texture feature map; therefore, through the process, the first texture feature maps corresponding to the neighborhood pixel points of different set numbers in the same size neighborhood of each pixel point in the target surface image and the pixel values corresponding to the pixel points in the first texture feature maps can be obtained; for example, a first texture feature map corresponding to the neighborhood pixels with the same size in the target surface image is obtained by using an LBP algorithm, a first texture feature map corresponding to the neighborhood pixels with the same size in the target surface image is obtained by using the LBP algorithm, and a first texture feature map corresponding to the neighborhood pixels with the same size in the target surface image is obtained by using the LBP algorithm, wherein the neighborhood pixels with the same size in the target surface image have the number of 2.
In this embodiment, the method for extracting the texture feature map by using the LBP algorithm is the prior art, so this embodiment does not have any toolsDescription of the drawings. In this embodiment, the number of the neighborhood pixels which are most set in the same size neighborhood of each pixel in the target surface image is set asN,NValue of (d) is 64; as other implementation manners, the number of the neighborhood pixels which are set in the neighborhood with the same size at most in each pixel in the target surface image may also be set according to actual conditions.
In this embodiment, the pixel values of the pixel points in each first texture feature map corresponding to the target surface image and each corresponding first texture feature map can be obtained through the above process; recording first texture feature maps corresponding to neighborhood pixel points with the maximum set number of pixel points in the same-size neighborhood in the target surface image as second texture feature maps corresponding to the target surface image; and recording the pixel value of each pixel point in the first texture characteristic graph corresponding to the neighborhood pixel points with the maximum set number in the same size neighborhood of each pixel point in the target surface image as the pixel value of each pixel point in the second texture characteristic graph corresponding to the target surface image.
Step S003, obtaining an original characteristic sequence corresponding to each pixel point in each first textural feature map according to the pixel value of each pixel point in each first textural feature map; obtaining a reference characteristic sequence corresponding to each pixel point in the second texture characteristic image according to the pixel value of each pixel point in the second texture characteristic image; obtaining a reconstruction characteristic sequence corresponding to each pixel point in each first texture characteristic image according to the original characteristic sequence and the reference characteristic sequence; and obtaining a first texture reconstruction feature map corresponding to each first texture feature map according to the reconstruction feature sequence.
In this embodiment, an original feature sequence corresponding to each pixel point in each first texture feature map and a reference feature sequence corresponding to each pixel point in the second texture feature map are obtained by analyzing the pixel value of each pixel point in each first texture feature map corresponding to the target surface image and the pixel value of each pixel point in the second texture feature map corresponding to the target surface image; obtaining a reconstruction characteristic sequence corresponding to each pixel point in each first texture characteristic image according to the original characteristic sequence and the reference characteristic sequence; and obtaining a first texture reconstruction feature map corresponding to each first texture feature map according to the reconstruction feature sequence corresponding to each pixel point in each first texture feature map. And taking the obtained reconstruction characteristic sequence corresponding to each pixel point in each first texture characteristic graph and the first texture reconstruction characteristic graph corresponding to each first texture characteristic graph as a basis for subsequently calculating the information loss amount and the offset amount corresponding to the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic graph.
(a) Obtaining an original characteristic sequence corresponding to each pixel point in each first texture characteristic image according to the pixel value of each pixel point in each first texture characteristic image; the specific process of obtaining the reference feature sequence corresponding to each pixel point in the second texture feature map according to the pixel value of each pixel point in the second texture feature map is as follows:
in this embodiment, because the LBP value corresponding to each pixel point on each first texture feature map is decimal, the pixel value corresponding to each pixel point in each first texture feature map corresponding to the obtained target surface image is converted into a binary number, and each first texture feature map corresponding to the target surface image does not include the first texture feature map corresponding to the second texture feature map, that is, each first texture feature map corresponding to the target surface image does not include the first texture feature map corresponding to the target surface image under the neighborhood pixel points of which the number is at most set in the neighborhood of the same size of each pixel point in the target surface image; obtaining an original characteristic sequence corresponding to each pixel point in each first texture characteristic image according to the binary number corresponding to each pixel point in each first texture characteristic image; for example, if a pixel value corresponding to a certain pixel point in a certain first texture feature map is 5,5 is converted into a binary value of 101, the value of the 1 st parameter in the original feature sequence corresponding to the pixel point in the first texture feature map is 1, the value of the 2 nd parameter is 0, and the value of the 3 rd parameter is 1. Then, the pixel value of each pixel point in a second texture feature map corresponding to the target surface image is converted into a binary number through the method, a reference feature sequence corresponding to each pixel point in the second texture feature map is obtained according to the binary number corresponding to each pixel point in the second texture feature map, and the length of the reference feature sequence is equal to that of the reference feature sequenceN
In this embodiment, through the above process, it can be obtained that each pixel point in the target surface image corresponds to one reference feature sequence and N-1 original feature sequences.
(b) Obtaining a reconstruction characteristic sequence corresponding to each pixel point in each first texture characteristic diagram according to the original characteristic sequence corresponding to each pixel point in each first texture characteristic diagram and the reference characteristic sequence corresponding to each pixel point in the second texture characteristic diagram; the specific process of obtaining the first texture reconstruction feature map corresponding to each first texture feature map according to the reconstruction feature sequence corresponding to each pixel point in each first texture feature map is as follows:
in this embodiment, a polynomial interpolation method is used to perform interpolation processing on the original feature sequences corresponding to each pixel point in each first texture feature map, so that the length of the original feature sequence corresponding to each pixel point in each first texture feature map after interpolation processing is consistent with the length of the reference feature sequence; recording an original characteristic sequence corresponding to each pixel point in each first texture characteristic image after interpolation processing as a target characteristic sequence corresponding to each pixel point in each first texture characteristic image; then, according to the difference between the value of each parameter in the target feature sequence corresponding to each pixel point in each first textural feature map and a preset parameter threshold value, adjusting each parameter in the target feature sequence corresponding to each pixel point in each first textural feature map, and recording the target feature sequence corresponding to each pixel point in each adjusted first textural feature map as a reconstruction feature sequence corresponding to each pixel point in each first textural feature map; the specific process of adjusting each parameter in the target feature sequence is as follows: and judging whether the value of each parameter in the target feature sequence corresponding to each pixel point in each first texture feature map is smaller than a preset parameter threshold value, if so, adjusting the corresponding parameter to be 0, and otherwise, adjusting the corresponding parameter to be 1.
In this embodiment, the polynomial interpolation method is the prior art, and therefore, this embodiment is not described in detail; in this embodiment, the value of the preset parameter threshold is set to 0.5; as another embodiment, the preset parameter threshold may also be set according to actual conditions.
In this embodiment, the first texture reconstruction feature maps corresponding to the first texture feature maps are formed by the first reconstruction feature sequences corresponding to the pixel points in the first texture feature maps obtained in the above process, and each pixel point in the first texture reconstruction feature maps corresponds to one reconstruction feature sequence. In this embodiment, through the above process, it can be obtained that each pixel point on the target surface image corresponds to one reference feature sequence and N-1 reconstruction feature sequences.
And step S004, obtaining each reference key parameter and each value of each reference key parameter in the reference characteristic sequence corresponding to each pixel point in the second texture characteristic diagram according to the parameters in the reference characteristic sequence.
In this embodiment, each reference key parameter, a value of each reference key parameter, and a number of reference key parameters in the reference feature sequence corresponding to each pixel point in the second texture feature map are obtained by analyzing the reference feature sequence; and taking each reference key parameter, the value of each reference key parameter and the number of the reference key parameters in the obtained reference feature sequence as a basis for subsequent analysis and calculation of each reconstruction key parameter, the value of each reconstruction key parameter and the number of the reconstruction key parameters in the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map.
In this embodiment, the value of each parameter in the reference feature sequence corresponding to each pixel point in the second texture feature map is recorded as the ordinate value corresponding to each parameter in the reference feature sequence corresponding to each pixel point in the second texture feature map, and the sequence of each parameter in the reference feature sequence corresponding to each pixel point in the second texture feature map is recorded as the abscissa value corresponding to each parameter in the reference feature sequence corresponding to each pixel point in the second texture feature map; obtaining derivative values corresponding to the parameters in the reference characteristic sequence corresponding to the pixel points in the second texture characteristic diagram according to the abscissa value and the ordinate value corresponding to the parameters in the reference characteristic sequence corresponding to the pixel points in the second texture characteristic diagram; and recording the parameter with the derivative of 0 in the reference characteristic sequence corresponding to each pixel point as the reference key parameter in the reference characteristic sequence corresponding to each pixel point in the second texture characteristic diagram, recording the value of the parameter with the derivative of 0 in the reference characteristic sequence corresponding to each pixel point as the value of the reference key parameter in the reference characteristic sequence corresponding to each pixel point in the second texture characteristic diagram, and counting the number of the reference key parameters in the reference characteristic sequence corresponding to each pixel point in the second texture characteristic diagram.
In this embodiment, through the above process, each reference key parameter, a value of each reference key parameter, and a number of reference key parameters in the reference feature sequence corresponding to each pixel point in the second texture feature map may be obtained.
And step S005, obtaining each reconstruction key parameter and each value of the reconstruction key parameter in the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map according to each reference key parameter and each value of the reference key parameter.
In this embodiment, each reference key parameter, each value of the reference key parameter, and the number of the reference key parameters in the reference feature sequence are analyzed to determine each reconstruction key parameter, each value of the reconstruction key parameter, and the number of the reconstruction key parameters in the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map; and taking the reconstruction key parameters, the values of the reconstruction key parameters and the quantity of the reconstruction key parameters in the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map as a basis for subsequently analyzing and calculating the information loss quantity of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map.
In this embodiment, each reference key parameter, the value of each reference key parameter, and the number of reference key parameters in the reference feature sequence corresponding to each pixel point in the second texture feature map may be obtained according to the above process; the positions of all the pixel points in the second texture characteristic image and the positions of all the pixel points in the second texture characteristic image are consistent with the positions of all the pixel points in the target surface image. Therefore, in this embodiment, each reconstruction key parameter, each value of the reconstruction key parameter, and the number of the reconstruction key parameters in the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map are determined according to each reference key parameter, each value of the reference key parameter, and the number of the reference key parameters in the reference feature sequence corresponding to each pixel point in the second texture feature map. For example, the reference feature sequence corresponding to the 1 st pixel point in the second texture feature map is [00011100], the reconstructed feature sequence corresponding to the 1 st pixel point in a certain first texture feature map is [00011000], since the derivative of the 3 rd and 6 th parameters in the reference feature sequence is 0, the 3 rd and 6 th parameters in the reference feature sequence are reference key parameters, the 3 rd and 6 th parameters in the reference feature sequence are recorded as the 1 st and 2 nd reference key parameters in the reference feature sequence, the value of the 1 st reference key parameter is 0, and the value of the 2 nd reference key parameter is 1; the number of the reference key parameters in the reference characteristic sequence is 2; therefore, the 3 rd and 6 th parameters in the reconstruction feature sequence are reconstruction key parameters, the 3 rd and 6 th parameters in the reconstruction feature sequence are recorded as the 1 st and 2 nd reconstruction key parameters in the reconstruction feature sequence, the value of the 1 st reconstruction key parameter is 0, and the value of the 2 nd reconstruction key parameter is 0; the number of reconstruction key parameters in the reconstruction signature sequence is 2.
Step S006 is to obtain an information loss amount of the reconstructed feature sequence corresponding to each pixel point in each first texture reconstructed feature map according to each reference key parameter in each reference feature sequence, a value of each reference key parameter, and each reconstructed key parameter and a value of each reconstructed key parameter in the reconstructed feature sequence.
In this embodiment, the information loss amount of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map is obtained by analyzing each reference key parameter, the value of each reference key parameter, and the number of the reference key parameters in the reference feature sequence corresponding to each pixel point, and each reconstruction key parameter, the value of each reconstruction key parameter, and the number of the reconstruction key parameters corresponding to the reconstruction feature sequence corresponding to each pixel point; and taking the information loss amount as a basis for calculating the effective degree of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram through subsequent analysis.
In this embodiment, the value of each parameter in the reconstructed feature sequence corresponding to each pixel point in each first texture feature map is recorded as a longitudinal coordinate value corresponding to each parameter in the reconstructed feature sequence corresponding to each pixel point in the first texture feature map, and the sequence of each parameter in the reconstructed feature sequence corresponding to each pixel point in each first texture feature map is recorded as an abscissa value corresponding to each parameter in the reconstructed feature sequence corresponding to each pixel point in the first texture feature map; and obtaining derivative values corresponding to all parameters in the reconstruction characteristic sequence corresponding to all pixel points in each first texture characteristic diagram according to the longitudinal coordinate values and the corresponding transverse coordinate values of all parameters in the reconstruction characteristic sequence corresponding to all pixel points in each first texture characteristic diagram, obtaining derivative values corresponding to all reconstruction key parameters in the reconstruction characteristic sequence corresponding to all pixel points in each first texture characteristic diagram, and counting the quantity of the reconstruction key parameters in the reconstruction characteristic sequence corresponding to all pixel points in each first texture characteristic diagram.
In this embodiment, the information loss amount of the reconstructed feature sequence corresponding to each pixel point in each first texture reconstructed feature map is obtained according to each reference key parameter, the value of each reference key parameter, the derivative value and the number of the reference key parameters corresponding to each pixel point in the second texture feature map, each reconstructed key parameter, the value of each reconstructed key parameter, and the derivative value corresponding to each reconstructed key parameter in the reconstructed feature sequence corresponding to each pixel point in each first texture feature map; and calculating the information loss amount of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the following formula:
Figure 841309DEST_PATH_IMAGE001
wherein the content of the first and second substances,
Figure 60938DEST_PATH_IMAGE002
is as follows
Figure 182478DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 21121DEST_PATH_IMAGE004
Reconstructed feature sequence corresponding to each pixel pointThe amount of information lost of (a) is,
Figure 770771DEST_PATH_IMAGE005
is the first texture feature map
Figure 36667DEST_PATH_IMAGE004
The number of reference key parameters in the reference feature sequence corresponding to each pixel point, i.e.
Figure 379924DEST_PATH_IMAGE005
Is also the first
Figure 913936DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 393459DEST_PATH_IMAGE004
The number of the reconstruction key parameters in the reconstruction characteristic sequence corresponding to each pixel point,
Figure 95835DEST_PATH_IMAGE006
is as follows
Figure 51022DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 497047DEST_PATH_IMAGE004
The first in the reconstruction characteristic sequence corresponding to each pixel point
Figure 96655DEST_PATH_IMAGE007
The values of the individual reconstruction key parameters are,
Figure 969933DEST_PATH_IMAGE008
is a first
Figure 146837DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 130973DEST_PATH_IMAGE004
The first in the reconstruction characteristic sequence corresponding to each pixel point
Figure 850668DEST_PATH_IMAGE007
The derivative value corresponding to each reconstructed key parameter,
Figure 19481DEST_PATH_IMAGE009
is the first texture feature map
Figure 824626DEST_PATH_IMAGE004
In the reference feature sequence corresponding to each pixel point
Figure 612453DEST_PATH_IMAGE007
The value of each of the reference key parameters,
Figure 550103DEST_PATH_IMAGE010
is the first texture feature map
Figure 765184DEST_PATH_IMAGE004
In the reference feature sequence corresponding to each pixel point
Figure 792046DEST_PATH_IMAGE007
And the derivative value corresponding to each reference key parameter.
In the present embodiment, the first and second electrodes are,
Figure 242619DEST_PATH_IMAGE019
the larger the value of (A) is, the first
Figure 671326DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 57308DEST_PATH_IMAGE004
The greater the amount of information lost from the reconstructed signature sequence corresponding to each pixel point,
Figure 837045DEST_PATH_IMAGE020
the larger the value of (A) is, the first
Figure 91309DEST_PATH_IMAGE003
A first oneTexture reconstruction of feature map
Figure 108943DEST_PATH_IMAGE004
The larger the information loss amount corresponding to the reconstruction characteristic sequence corresponding to each pixel point.
In this embodiment, the information loss amount of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map can be obtained through the above calculation process.
Step S007, obtaining the effective degree of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the information loss amount; and obtaining the priority corresponding to each first texture reconstruction characteristic graph according to the effective degree.
In this embodiment, the effectiveness of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map is obtained by analyzing the information loss amount and the position offset of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map; obtaining the priority corresponding to each first texture reconstruction feature map according to the effective degree of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map; and taking the priority corresponding to each obtained first texture reconstruction characteristic graph as a basis for subsequently obtaining a target first texture reconstruction characteristic graph corresponding to the target surface image.
(a) The specific process of obtaining the effective degree of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map according to the information loss amount and the position offset of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map is as follows:
in this embodiment, the specific process of obtaining the position offset of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map by analyzing each reconstruction feature sequence corresponding to each pixel point and the reference feature sequence corresponding to each pixel point is as follows: utilizing a DTW algorithm to normalize the reference characteristic sequence corresponding to each pixel point and each reconstruction characteristic sequence corresponding to each pixel point to obtain the corresponding relation between each parameter in each reconstruction characteristic sequence corresponding to each pixel point and each parameter in the reference characteristic sequence corresponding to each pixel point; grouping each parameter in each reconstruction characteristic sequence corresponding to each pixel point and each parameter in the reference characteristic sequence corresponding to each pixel point according to the corresponding relationship between each parameter in each reconstruction characteristic sequence corresponding to each pixel point and each parameter in the reference characteristic sequence corresponding to each pixel point; according to the grouping result, the number of parameters corresponding to each group in the reference characteristic sequence corresponding to each pixel point and the number of reference key parameters in each group are obtained, and the number of parameters corresponding to each group in each reconstruction characteristic sequence corresponding to each pixel point and the number of reconstruction key parameters in each group are obtained; for example, the reference feature sequence corresponding to the 1 st pixel point in the second texture feature map is [00011100], the reconstructed feature sequence corresponding to the 1 st pixel point in a certain first texture feature map is [00011000], then the 1 st, 2 nd, and 3 rd parameters in the reference feature sequence are the 1 st group corresponding to the reference feature sequence, the 4 th, 5 th, and 6 th parameters in the reference feature sequence are the 2 nd group corresponding to the reference feature sequence, and the 7 th and 8 th parameters in the reference feature sequence are the 3 rd group corresponding to the reference feature sequence; the number of the parameters in the 1 st group corresponding to the reference feature sequence is 3, the number of the reference key parameters is 1, the number of the parameters in the 2 nd group corresponding to the reference feature sequence is 3, the number of the reference key parameters is 1, the number of the parameters in the 3 rd group corresponding to the reference feature sequence is 2, and the number of the reference key parameters is 0; then the 1 st, 2 nd and 3 rd parameters in the reconstructed feature sequence are the 1 st group corresponding to the reconstructed feature sequence, the 4 th and 5 th parameters in the reconstructed feature sequence are the 2 nd group corresponding to the reconstructed feature sequence, and the 6 th, 7 th and 8 th parameters in the reconstructed feature sequence are the 3 rd group corresponding to the reconstructed feature sequence; the number of the parameters in the 1 st group corresponding to the reconstruction feature sequence is 3, the number of the reconstruction key parameters is 1, the number of the parameters in the 2 nd group corresponding to the reconstruction feature sequence is 2, the number of the reconstruction key parameters is 0, the number of the parameters in the 3 rd group corresponding to the reconstruction feature sequence is 3, and the number of the reconstruction key parameters is 1.
In this embodiment, the position offset of the reconstructed feature sequence corresponding to each pixel point in each first texture reconstructed feature map is obtained according to the quantity of the parameters corresponding to each group in the reference feature sequence corresponding to each pixel point, the quantity of the reference key parameters in each group, the quantity of the parameters corresponding to each group in each reconstructed feature sequence corresponding to each pixel point, and the quantity of the reconstruction key parameters in each group; calculating the position offset of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic graph according to the following formula:
Figure 931406DEST_PATH_IMAGE021
wherein the content of the first and second substances,
Figure 57494DEST_PATH_IMAGE022
is a first
Figure 990815DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 862956DEST_PATH_IMAGE004
The position offset of the reconstruction characteristic sequence corresponding to each pixel point,
Figure 482418DEST_PATH_IMAGE023
is the first texture feature map
Figure 971168DEST_PATH_IMAGE004
The number of groups in the reference feature sequence corresponding to each pixel point,
Figure 442601DEST_PATH_IMAGE024
is the first texture feature map
Figure 700407DEST_PATH_IMAGE004
In the reference feature sequence corresponding to each pixel point
Figure 989306DEST_PATH_IMAGE025
The number of parameters in the set is,
Figure 699773DEST_PATH_IMAGE026
in the second texture feature map
Figure 974896DEST_PATH_IMAGE004
The first in the reference characteristic sequence corresponding to each pixel point
Figure 352788DEST_PATH_IMAGE025
The number of reference key parameters in the set,
Figure 547009DEST_PATH_IMAGE027
is a first
Figure 807089DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 102548DEST_PATH_IMAGE004
The first in the reconstruction characteristic sequence corresponding to each pixel point
Figure 334946DEST_PATH_IMAGE025
The number of parameters in the set is,
Figure 309855DEST_PATH_IMAGE028
is as follows
Figure 385128DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 2054DEST_PATH_IMAGE004
The first in the reconstruction characteristic sequence corresponding to each pixel point
Figure 88958DEST_PATH_IMAGE025
The number of reconstruction critical parameters in the set.
In the present embodiment, the first and second electrodes are,
Figure 765927DEST_PATH_IMAGE029
greater value of (A) indicates the first
Figure 564381DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 453840DEST_PATH_IMAGE004
The greater the amount of positional offset of the reconstructed feature sequence corresponding to a pixel point,
Figure 660830DEST_PATH_IMAGE030
greater values of (A) indicate (B)
Figure 367755DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 27407DEST_PATH_IMAGE004
The larger the position offset corresponding to the reconstruction feature sequence corresponding to each pixel point is.
In this embodiment, the position offset of the reconstructed feature sequence corresponding to each pixel point in each first texture reconstructed feature map can be obtained through the above process; in this embodiment, the error degree of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map is obtained according to the position offset and the information loss amount of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map; calculating the error degree of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the following formula:
Figure 720556DEST_PATH_IMAGE031
wherein the content of the first and second substances,
Figure 906687DEST_PATH_IMAGE032
is a first
Figure 925459DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 72406DEST_PATH_IMAGE004
The degree of error of the reconstructed feature sequence corresponding to each pixel point,
Figure 569247DEST_PATH_IMAGE022
is as follows
Figure 609884DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 799557DEST_PATH_IMAGE004
The position offset of the reconstructed feature sequence corresponding to each pixel point,
Figure 433800DEST_PATH_IMAGE002
first, the
Figure 734332DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 145589DEST_PATH_IMAGE004
And (4) information loss amount of the reconstruction characteristic sequence corresponding to each pixel point. In the present embodiment, the first and second electrodes are,
Figure 506163DEST_PATH_IMAGE033
the larger the value of (A) is, the first
Figure 627703DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 466346DEST_PATH_IMAGE004
The higher the degree of error of the reconstructed feature sequence corresponding to each pixel point.
In this embodiment, because each pixel point on the target surface image corresponds to N-1 error degrees, an error degree curve corresponding to each pixel point on the target surface image is obtained by taking the number of the first texture reconstruction feature maps as the horizontal axis and the error degree corresponding to each pixel point in each first texture reconstruction feature map as the vertical axis; removing noise on an error degree curve corresponding to each pixel point in the target surface image by using median filtering, and recording each error degree corresponding to each pixel point in the target surface image after the noise is removed as each target error degree corresponding to each pixel point in the target surface image; obtaining the target error degree of a reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic map according to each target error degree corresponding to each pixel point in the target surface image; obtaining the effective degree of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic map according to the target error degree of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic map; calculating the effective degree of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the following formula:
Figure 481575DEST_PATH_IMAGE034
wherein, the first and the second end of the pipe are connected with each other,
Figure 13050DEST_PATH_IMAGE035
is as follows
Figure 356307DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 264220DEST_PATH_IMAGE004
The significance of the reconstructed feature sequence corresponding to each pixel point,
Figure 133956DEST_PATH_IMAGE036
is as follows
Figure 836333DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 666886DEST_PATH_IMAGE004
The target error degree of the reconstruction characteristic sequence corresponding to each pixel point;
Figure 847331DEST_PATH_IMAGE036
the larger the value of (A) is, the first
Figure 837153DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 710431DEST_PATH_IMAGE004
The less effective the reconstructed feature sequence corresponding to a pixel point.
(b) The specific process of obtaining the priority corresponding to each first texture reconstruction feature map according to the effective degree and the credibility of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map is as follows:
in this embodiment, the specific process of analyzing each first texture feature map and the first texture reconstruction feature map corresponding to each first texture feature map to obtain the credibility of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map is as follows: obtaining a pixel value corresponding to each pixel point in the first texture reconstruction characteristic map corresponding to each first texture characteristic map according to the reconstruction characteristic sequence corresponding to each pixel point in the first texture reconstruction characteristic map corresponding to each first texture characteristic map; selecting a maximum pixel value in each first texture reconstruction characteristic image and a maximum pixel value in each first texture characteristic image; then calculating the entropy of the original characteristic sequence corresponding to each pixel point in each first texture characteristic diagram and calculating the entropy of the reconstruction characteristic sequence corresponding to each pixel point in the first texture reconstruction characteristic diagram corresponding to each first texture characteristic diagram; obtaining the credibility of the corresponding reconstruction feature sequence of each pixel in the first texture reconstruction feature map corresponding to each first texture feature map according to the pixel value corresponding to each pixel in each first texture feature map, the entropy of the corresponding original feature sequence of each pixel, the maximum pixel value in each first texture feature map, the pixel value corresponding to each pixel in the first texture reconstruction feature map corresponding to each first texture feature map, the entropy of the corresponding reconstruction feature sequence of each pixel and the maximum pixel value in each first texture reconstruction feature map:
Figure 28280DEST_PATH_IMAGE037
wherein the content of the first and second substances,
Figure 12416DEST_PATH_IMAGE012
is as follows
Figure 358209DEST_PATH_IMAGE003
The first texture reconstruction feature map corresponding to the first texture feature map
Figure 402389DEST_PATH_IMAGE004
The confidence level of the reconstructed feature sequence corresponding to each pixel point,
Figure 207534DEST_PATH_IMAGE013
is a first
Figure 995361DEST_PATH_IMAGE003
A first texture feature map
Figure 694196DEST_PATH_IMAGE004
The pixel value corresponding to each pixel point,
Figure 909276DEST_PATH_IMAGE014
is a first
Figure 936138DEST_PATH_IMAGE003
The maximum pixel value in the first texture feature map,
Figure 527657DEST_PATH_IMAGE015
is as follows
Figure 80998DEST_PATH_IMAGE003
The maximum pixel value in the first texture reconstruction feature map corresponding to the first texture feature map,
Figure 466980DEST_PATH_IMAGE016
is as follows
Figure 246717DEST_PATH_IMAGE003
The first texture reconstruction feature map corresponding to the first texture feature map
Figure 376347DEST_PATH_IMAGE004
The pixel value corresponding to each pixel point is calculated,
Figure 784194DEST_PATH_IMAGE017
is as follows
Figure 606657DEST_PATH_IMAGE003
A first texture feature map
Figure 608111DEST_PATH_IMAGE004
Entropy values of the original feature sequence corresponding to the individual pixel points,
Figure 275853DEST_PATH_IMAGE018
is a first
Figure 302321DEST_PATH_IMAGE003
The first texture reconstruction feature map corresponding to the first texture feature map
Figure 295685DEST_PATH_IMAGE004
And entropy values of the reconstruction characteristic sequences corresponding to the pixel points.
In the present embodiment, the first and second electrodes are,
Figure 518856DEST_PATH_IMAGE038
the larger the value of (A) is, the first
Figure 255868DEST_PATH_IMAGE003
The first texture reconstruction feature map corresponding to the first texture feature map
Figure 638308DEST_PATH_IMAGE004
The higher the credibility of the reconstruction characteristic sequence corresponding to each pixel point is;
Figure 536993DEST_PATH_IMAGE039
the smaller the value of (A) is, the first
Figure 513040DEST_PATH_IMAGE003
The first texture reconstruction feature map corresponding to the first texture feature map
Figure 788163DEST_PATH_IMAGE004
The higher the confidence level of the reconstructed feature sequence corresponding to each pixel point.
In this embodiment, the validity degree and the credibility degree of the reconstruction feature sequence corresponding to each pixel point on each first texture reconstruction feature map can be obtained according to the above process; obtaining the priority corresponding to each first texture reconstruction feature map according to the effective degree and the credibility of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map; calculating the corresponding priority of each first texture reconstruction feature map according to the following formula:
Figure 25109DEST_PATH_IMAGE040
wherein the content of the first and second substances,
Figure 360276DEST_PATH_IMAGE041
is a first
Figure 558039DEST_PATH_IMAGE003
The priority corresponding to the first texture reconstruction feature map,
Figure 636853DEST_PATH_IMAGE042
is as follows
Figure 993885DEST_PATH_IMAGE003
The number of pixel points in the first texture reconstruction feature map, i.e.
Figure 234374DEST_PATH_IMAGE042
And the number of pixel points in each first texture reconstruction feature map,
Figure 185012DEST_PATH_IMAGE035
is as follows
Figure 801939DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 514942DEST_PATH_IMAGE004
Reconstruction characteristics corresponding to each pixel pointThe degree of validity of the signature sequence,
Figure 926332DEST_PATH_IMAGE012
is as follows
Figure 98687DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 519304DEST_PATH_IMAGE004
Credibility of the reconstruction characteristic sequence corresponding to each pixel point;
Figure 585349DEST_PATH_IMAGE043
the larger the value of (A), the more
Figure 433219DEST_PATH_IMAGE041
The greater the value of (a) is,
Figure 92871DEST_PATH_IMAGE041
greater values of (A) indicate (B)
Figure 51600DEST_PATH_IMAGE003
The first texture reconstruction characteristic graph can clearly reflect the texture information of the target surface image under the condition that the set number of neighborhood pixel points corresponding to the target surface image is proper.
Step S008, obtaining a target first texture reconstruction characteristic diagram corresponding to the target surface image according to the priority; and obtaining the yarn disorder degree of the cheese surface corresponding to the target surface image according to the coordinates of each pixel point in the target first texture reconstruction characteristic diagram and the effective degree of the reconstruction characteristic sequence corresponding to each pixel point.
In this embodiment, a target first texture reconstruction feature map corresponding to the target surface image is obtained by analyzing the priority corresponding to each first texture reconstruction feature map; and analyzing the coordinates of each pixel point in the target first texture reconstruction characteristic diagram and the effectiveness corresponding to each pixel point to obtain the yarn disorder degree of the surface of the cheese corresponding to the target surface image.
This implementationIn the example, the first texture reconstruction feature map corresponding to the maximum priority is taken as a target first texture reconstruction feature map corresponding to the target surface image; obtaining a feature vector corresponding to each pixel point in a target first texture reconstruction feature map corresponding to the target surface image according to the corresponding coordinates of each pixel point in the target first texture reconstruction feature map corresponding to the target surface image and the effective degree of each pixel point
Figure 237730DEST_PATH_IMAGE044
Wherein
Figure 256502DEST_PATH_IMAGE045
Reconstructing a first texture in a feature map for a target
Figure 137870DEST_PATH_IMAGE004
The feature vector corresponding to each pixel point,
Figure 900290DEST_PATH_IMAGE046
reconstructing a first texture in a feature map for a target
Figure 940927DEST_PATH_IMAGE004
The abscissa of the point corresponding to each pixel,
Figure 130600DEST_PATH_IMAGE047
reconstructing the first texture in the feature map for the target
Figure 764844DEST_PATH_IMAGE004
The ordinate of the point corresponding to each pixel point,
Figure 65375DEST_PATH_IMAGE048
reconstructing the first texture in the feature map for the target
Figure 464913DEST_PATH_IMAGE004
The significance of the point correspondence of each pixel.
In the embodiment, feature vectors corresponding to each pixel point in the target first texture reconstruction feature map are clustered by using mean shift clustering, the number of pixel points which are not divided into any one cluster is obtained, and the number of the pixel points which are not divided into any one cluster is recorded as the number of discrete points in the target first texture reconstruction feature map; obtaining the yarn disorder degree of the surface of the cheese corresponding to the target surface image according to the number of discrete points in the target first texture reconstruction characteristic diagram; calculating the disorder degree of the surface of the cheese corresponding to the target surface image according to the following formula:
Figure 825487DEST_PATH_IMAGE049
wherein the content of the first and second substances,
Figure 681448DEST_PATH_IMAGE050
in order to obtain the yarn disorder degree of the surface of the cheese corresponding to the target surface image,
Figure 785670DEST_PATH_IMAGE051
the number of discrete points in the feature map is reconstructed for the target first texture,
Figure 800899DEST_PATH_IMAGE042
reconstructing the number of pixel points in the feature map for the target first texture;
Figure 332375DEST_PATH_IMAGE051
the larger the value of (A) is, the more serious the yarn disorder degree of the surface of the cheese corresponding to the target surface image is obtained, i.e. the
Figure 675632DEST_PATH_IMAGE050
The larger the value of (A), the more serious the yarn disorder degree of the surface of the cheese corresponding to the target surface image is obtained.
Has the advantages that: in this embodiment, each reference key parameter and each value of the reference key parameter in each reference feature sequence, and each reconstruction key parameter and each value of the reconstruction key parameter in the reconstruction feature sequence are used as a basis for obtaining an information loss amount corresponding to the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map; taking the information loss amount corresponding to the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic map as a basis for obtaining the effective degree of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic map; taking the effective degree as a basis for obtaining the priority corresponding to each first texture reconstruction feature map; taking the priority as a basis for obtaining a target first texture reconstruction feature map corresponding to the target surface image; the coordinates of all pixel points in the target first texture reconstruction characteristic diagram and the effective degree corresponding to all pixel points are used as a basis for obtaining the yarn disorder degree of the cheese surface corresponding to the target surface image; the method for detecting the defects of the cheese is an automatic processing method, and compared with a manual detection mode, the method can improve the accuracy of detecting the defects of the cheese.
It should be noted that the order of the above-mentioned embodiments of the present invention is merely for description and does not represent the merits of the embodiments, and in some cases, actions or steps recited in the claims may be executed in an order different from the order of the embodiments and still achieve desirable results.

Claims (9)

1. A cheese defect automatic detection method based on a robot is characterized by comprising the following steps:
acquiring a target surface image in the unwinding process of the cheese;
obtaining each first texture feature map corresponding to the target surface image and a pixel value of each pixel point in each corresponding first texture feature map according to the neighborhood pixel points with different set numbers corresponding to each pixel point in the target surface image; obtaining a second texture feature map corresponding to the target surface image and a pixel value of each pixel point in the corresponding second texture feature map according to the first texture feature map;
obtaining an original characteristic sequence corresponding to each pixel point in each first textural feature map according to the pixel value of each pixel point in each first textural feature map; obtaining a reference characteristic sequence corresponding to each pixel point in the second texture characteristic diagram according to the pixel value of each pixel point in the second texture characteristic diagram; obtaining a reconstruction characteristic sequence corresponding to each pixel point in each first texture characteristic image according to the original characteristic sequence and the reference characteristic sequence; obtaining a first texture reconstruction feature map corresponding to each first texture feature map according to the reconstruction feature sequence;
obtaining each reference key parameter and the value of each reference key parameter in the reference feature sequence corresponding to each pixel point in the second texture feature map according to the parameters in the reference feature sequence;
obtaining each reconstruction key parameter and each value of the reconstruction key parameter in the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map according to each reference key parameter and each value of the reference key parameter;
obtaining the information loss amount of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to each reference key parameter in each reference characteristic sequence, the value of each reference key parameter, each reconstruction key parameter in the reconstruction characteristic sequence and the value of each reconstruction key parameter;
obtaining the effective degree of a reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the information loss amount; according to the effective degree, obtaining the corresponding priority of each first texture reconstruction characteristic map;
obtaining a target first texture reconstruction feature map corresponding to the target surface image according to the priority; and obtaining the yarn disorder degree of the cheese surface corresponding to the target surface image according to the coordinates of each pixel point in the target first texture reconstruction characteristic diagram and the effective degree of the reconstruction characteristic sequence corresponding to each pixel point.
2. The automatic cheese defect detection method based on the robot according to claim 1, wherein the pixel values of the pixel points in each first texture feature map corresponding to the target surface image and the pixel points in each corresponding first texture feature map are obtained according to the neighborhood pixel points with different set numbers corresponding to the pixel points in the target surface image; according to the first texture feature map, a method for obtaining a second texture feature map corresponding to the target surface image comprises the following steps:
obtaining each first texture feature map corresponding to the target surface image under the neighborhood pixel points with different set numbers in the same size neighborhood of each pixel point in the target surface image and the LBP value corresponding to each pixel point on each corresponding first texture feature map by using an LBP algorithm; recording the LBP value as the pixel value of each pixel point on each first texture feature map;
and recording a first texture feature map corresponding to the target surface image under the neighborhood pixel points with the maximum set number in the same-size neighborhood of each pixel point in the target surface image as a second texture feature map corresponding to the target surface image.
3. The method according to claim 1, wherein an original feature sequence corresponding to each pixel point in each first texture feature map is obtained according to the pixel value of each pixel point in each first texture feature map; obtaining a reference characteristic sequence corresponding to each pixel point in the second texture characteristic diagram according to the pixel value of each pixel point in the second texture characteristic diagram; obtaining a reconstruction characteristic sequence corresponding to each pixel point in each first texture characteristic image according to the original characteristic sequence and the reference characteristic sequence; the method for obtaining the first texture reconstruction feature map corresponding to each first texture feature map according to the reconstruction feature sequence comprises the following steps:
performing binary conversion on the pixel value of each pixel point in each first textural feature map to obtain a binary number corresponding to each pixel point in each first textural feature map;
obtaining an original characteristic sequence corresponding to each pixel point in each first texture characteristic image according to the binary number of each pixel point in each first texture characteristic image;
binary conversion is carried out on the pixel value of each pixel point in the second texture feature map, and binary numbers corresponding to each pixel point in the second texture feature map are obtained;
obtaining a reference characteristic sequence corresponding to each pixel point in the second texture characteristic diagram according to the binary number corresponding to each pixel point in the second texture characteristic diagram;
carrying out interpolation processing on the original characteristic sequence by utilizing a polynomial interpolation method; recording the original characteristic sequence after the interpolation processing as a target characteristic sequence corresponding to each pixel point in each first texture characteristic image; the length of the target characteristic sequence is the same as that of the reference characteristic sequence;
adjusting each parameter in the target feature sequence according to the difference between the value of each parameter in the target feature sequence and a preset parameter threshold value, and recording the adjusted target feature sequence as a reconstructed feature sequence corresponding to each pixel point in each first texture feature map;
and obtaining a first texture reconstruction feature map corresponding to each first texture feature map according to the reconstruction feature sequence.
4. The method for automatically detecting the defects of the cheese based on the robot as claimed in claim 1, wherein the method for obtaining the values of the reference key parameters and the reference key parameters in the reference feature sequence corresponding to the pixel points in the second texture feature map according to the parameters in the reference feature sequence comprises:
recording the value of each parameter in the reference characteristic sequence as a longitudinal coordinate value corresponding to each parameter in the reference characteristic sequence, and recording the sequence of each parameter in the reference characteristic sequence as an abscissa value corresponding to each parameter in the reference characteristic sequence;
obtaining a derivative value corresponding to each parameter in the reference characteristic sequence according to an abscissa value and a corresponding ordinate value corresponding to each parameter in the reference characteristic sequence;
recording a parameter with a derivative of 0 in the reference characteristic sequence as a reference key parameter in the reference characteristic sequence, and recording a value of the parameter with a derivative of 0 in the reference characteristic sequence as a value of the reference key parameter in the reference characteristic sequence; and the derivative value corresponding to each reference key parameter in the reference characteristic sequence is 0.
5. The robot-based automatic cheese defect detection method according to claim 1 or 4, wherein the method for obtaining the information loss amount of the reconstructed feature sequence corresponding to each pixel point in each first texture reconstructed feature map according to each reference key parameter and each reference key parameter value in each reference feature sequence and each reconstructed key parameter value in the reconstructed feature sequence comprises:
counting the number of the reference key parameters in the reference characteristic sequence;
recording the value of each parameter in the reconstruction feature sequence as the ordinate value corresponding to each parameter in the reconstruction feature sequence, and recording the sequence of each parameter in the reconstruction feature sequence as the abscissa value corresponding to each parameter in the reconstruction feature sequence;
obtaining a derivative value corresponding to each reconstruction key parameter in the reconstruction feature sequence according to the ordinate value and the abscissa value corresponding to each parameter in the reconstruction feature sequence;
and obtaining the information loss amount of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to each reference key parameter, the value of each reference key parameter, the derivative value corresponding to each reference key parameter and the number of the reference key parameters in each reference characteristic sequence, each reconstruction key parameter, the value of each reconstruction key parameter and the derivative value corresponding to each reconstruction key parameter in the reconstruction characteristic sequence.
6. The automatic cheese defect detection method based on robot according to claim 5, wherein the information loss amount of the reconstructed feature sequence corresponding to each pixel point in each first texture reconstructed feature map is calculated according to the following formula:
Figure 934307DEST_PATH_IMAGE001
wherein the content of the first and second substances,
Figure 648185DEST_PATH_IMAGE002
is as follows
Figure 162343DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 88711DEST_PATH_IMAGE004
The information loss amount of the reconstruction characteristic sequence corresponding to each pixel point,
Figure 699821DEST_PATH_IMAGE005
is the first texture feature map
Figure 256704DEST_PATH_IMAGE004
The number of reference key parameters in the reference characteristic sequence corresponding to each pixel point,
Figure 71208DEST_PATH_IMAGE006
is as follows
Figure 4529DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 470145DEST_PATH_IMAGE004
The first in the reconstruction characteristic sequence corresponding to each pixel point
Figure 197930DEST_PATH_IMAGE007
The values of the individual reconstruction key parameters are,
Figure 748997DEST_PATH_IMAGE008
is a first
Figure 220429DEST_PATH_IMAGE003
A first texture reconstructs the feature map
Figure 274973DEST_PATH_IMAGE004
The first in the reconstruction characteristic sequence corresponding to each pixel point
Figure 252287DEST_PATH_IMAGE007
The derivative value corresponding to each reconstructed key parameter,
Figure 228334DEST_PATH_IMAGE009
is the first texture feature map
Figure 300195DEST_PATH_IMAGE004
In the reference feature sequence corresponding to each pixel point
Figure 678086DEST_PATH_IMAGE007
The value of each of the reference key parameters,
Figure 809991DEST_PATH_IMAGE010
is the first texture feature map
Figure 273333DEST_PATH_IMAGE004
In the reference feature sequence corresponding to each pixel point
Figure 148885DEST_PATH_IMAGE007
Derivative values corresponding to the individual reference key parameters.
7. The method for automatically detecting the defects of the cheese based on the robot as claimed in claim 1, wherein the method for obtaining the effective degree of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map according to the information loss amount comprises:
normalizing the reference characteristic sequence and the reconstruction characteristic sequence by using a DTW algorithm to obtain the corresponding relation between each parameter in the reconstruction characteristic sequence and each parameter in the reference characteristic sequence;
according to the corresponding relation, grouping each parameter in the reconstruction feature sequence and each parameter in the reference feature sequence to obtain the number of parameters corresponding to each group in the reference feature sequence, the number of reference key parameters in each group, the number of parameters corresponding to each group in the reconstruction feature sequence and the number of reconstruction key parameters in each group;
obtaining the position offset of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the number of the parameters corresponding to each group in the reference characteristic sequence, the number of the reference key parameters in each group, the number of the parameters corresponding to each group in the reconstruction characteristic sequence and the number of the reconstruction key parameters in each group;
obtaining the error degree of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the position offset and the information loss;
and obtaining the effective degree of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the error degree.
8. The method for automatically detecting the defects of the cheese based on the robot as claimed in claim 1, wherein the method for obtaining the corresponding priority of each first texture reconstruction feature map according to the effectiveness degree comprises the following steps:
according to the reconstruction characteristic sequence, obtaining pixel values corresponding to all pixel points in the first texture reconstruction characteristic diagram corresponding to all the first texture characteristic diagrams;
selecting the maximum pixel value in each first texture reconstruction characteristic image and the maximum pixel value in each first texture characteristic image;
calculating the entropy value of the original characteristic sequence corresponding to each pixel point in each first texture characteristic diagram; calculating an entropy value of a reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram;
obtaining the credibility degree of the reconstruction characteristic sequence corresponding to each pixel point in the first texture reconstruction characteristic diagram corresponding to each first texture characteristic diagram according to the pixel value corresponding to each pixel point in each first texture characteristic diagram, the entropy value of the original characteristic sequence corresponding to each pixel point, the maximum pixel value in each first texture characteristic diagram, the pixel value corresponding to each pixel point in the first texture reconstruction characteristic diagram corresponding to each first texture characteristic diagram, the entropy value of the reconstruction characteristic sequence corresponding to each pixel point and the maximum pixel value in each first texture reconstruction characteristic diagram;
and obtaining the corresponding priority of each first texture reconstruction characteristic graph according to the effective degree and the credibility.
9. The method according to claim 8, wherein the confidence level of the reconstructed feature sequence corresponding to each pixel point in the first texture reconstruction feature map corresponding to each first texture feature map is calculated according to the following formula:
Figure 381283DEST_PATH_IMAGE011
wherein the content of the first and second substances,
Figure 697471DEST_PATH_IMAGE012
is a first
Figure 382530DEST_PATH_IMAGE003
The first texture reconstruction feature map corresponding to the first texture feature map
Figure 61773DEST_PATH_IMAGE004
The confidence level of the reconstructed feature sequence corresponding to each pixel point,
Figure 945415DEST_PATH_IMAGE013
is a first
Figure 622384DEST_PATH_IMAGE003
A first texture feature map
Figure 857056DEST_PATH_IMAGE004
The pixel value corresponding to each pixel point is calculated,
Figure 12094DEST_PATH_IMAGE014
is a first
Figure 32134DEST_PATH_IMAGE003
The maximum pixel value in the first texture feature map,
Figure 880004DEST_PATH_IMAGE015
is as follows
Figure 601973DEST_PATH_IMAGE003
The maximum pixel value in the first texture reconstruction feature map corresponding to the first texture feature map,
Figure 357439DEST_PATH_IMAGE016
is as follows
Figure 153357DEST_PATH_IMAGE003
The first texture reconstruction feature map corresponding to the first texture feature map
Figure 234445DEST_PATH_IMAGE004
The pixel value corresponding to each pixel point,
Figure 381393DEST_PATH_IMAGE017
is as follows
Figure 956862DEST_PATH_IMAGE003
A first texture feature map
Figure 872865DEST_PATH_IMAGE004
Entropy values of the original feature sequence corresponding to the individual pixel points,
Figure 859276DEST_PATH_IMAGE018
is as follows
Figure 493519DEST_PATH_IMAGE003
The first texture reconstruction feature map corresponding to the first texture feature map
Figure 856368DEST_PATH_IMAGE004
And entropy values of the reconstruction characteristic sequences corresponding to the pixel points.
CN202211144530.4A 2022-09-20 2022-09-20 Automatic cheese defect detection method based on robot Active CN115272301B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211144530.4A CN115272301B (en) 2022-09-20 2022-09-20 Automatic cheese defect detection method based on robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211144530.4A CN115272301B (en) 2022-09-20 2022-09-20 Automatic cheese defect detection method based on robot

Publications (2)

Publication Number Publication Date
CN115272301A true CN115272301A (en) 2022-11-01
CN115272301B CN115272301B (en) 2022-12-23

Family

ID=83756488

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211144530.4A Active CN115272301B (en) 2022-09-20 2022-09-20 Automatic cheese defect detection method based on robot

Country Status (1)

Country Link
CN (1) CN115272301B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570889A (en) * 2016-11-10 2017-04-19 河海大学 Detecting method for weak target in infrared video
CN113688844A (en) * 2021-08-13 2021-11-23 上海商汤智能科技有限公司 Neural network training method and device, electronic equipment and storage medium
CN113850808A (en) * 2021-12-01 2021-12-28 武汉泰盛包装材料有限公司 Multilayer corrugated paper arrangement defect detection method and device based on image processing
CN114510969A (en) * 2022-01-20 2022-05-17 中国人民解放军战略支援部队信息工程大学 Noise reduction method for coordinate time series
CN114549448A (en) * 2022-02-17 2022-05-27 中国空气动力研究与发展中心超高速空气动力研究所 Complex multi-type defect detection and evaluation method based on infrared thermal imaging data analysis
CN114897894A (en) * 2022-07-11 2022-08-12 海门市芳华纺织有限公司 Method for detecting defects of cheese chrysanthemum core
CN115019159A (en) * 2022-08-09 2022-09-06 济宁安泰矿山设备制造有限公司 Method for quickly identifying pump bearing fault

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570889A (en) * 2016-11-10 2017-04-19 河海大学 Detecting method for weak target in infrared video
CN113688844A (en) * 2021-08-13 2021-11-23 上海商汤智能科技有限公司 Neural network training method and device, electronic equipment and storage medium
CN113850808A (en) * 2021-12-01 2021-12-28 武汉泰盛包装材料有限公司 Multilayer corrugated paper arrangement defect detection method and device based on image processing
CN114510969A (en) * 2022-01-20 2022-05-17 中国人民解放军战略支援部队信息工程大学 Noise reduction method for coordinate time series
CN114549448A (en) * 2022-02-17 2022-05-27 中国空气动力研究与发展中心超高速空气动力研究所 Complex multi-type defect detection and evaluation method based on infrared thermal imaging data analysis
CN114897894A (en) * 2022-07-11 2022-08-12 海门市芳华纺织有限公司 Method for detecting defects of cheese chrysanthemum core
CN115019159A (en) * 2022-08-09 2022-09-06 济宁安泰矿山设备制造有限公司 Method for quickly identifying pump bearing fault

Also Published As

Publication number Publication date
CN115272301B (en) 2022-12-23

Similar Documents

Publication Publication Date Title
CN115641329B (en) Lithium battery diaphragm defect detection method and system
CN114882044B (en) Metal pipe surface quality detection method
CN114279357B (en) Die casting burr size measurement method and system based on machine vision
CN115330628B (en) Video frame-by-frame denoising method based on image processing
CN113610773B (en) Gasket hole quality detection method, system, device and storage medium
CN115841434B (en) Infrared image enhancement method for gas concentration analysis
CN111179233B (en) Self-adaptive deviation rectifying method based on laser cutting of two-dimensional parts
CN115082462A (en) Method and system for detecting appearance quality of fluid conveying pipe
CN111027398A (en) Automobile data recorder video occlusion detection method
CN114494210A (en) Plastic film production defect detection method and system based on image processing
CN113610772B (en) Method, system, device and storage medium for detecting spraying code defect at bottom of pop can bottle
CN109540917B (en) Method for extracting and analyzing yarn appearance characteristic parameters in multi-angle mode
CN108489996A (en) A kind of defect inspection method of insulator, system and terminal device
CN111415339B (en) Image defect detection method for complex texture industrial product
CN116485797B (en) Artificial intelligence-based paint color difference rapid detection method
CN112330598A (en) Method and device for detecting stiff silk defects on chemical fiber surface and storage medium
CN112258444A (en) Elevator steel wire rope detection method
CN111861990A (en) Method, system and storage medium for detecting bad appearance of product
CN113920122A (en) Cable defect detection method and system based on artificial intelligence
CN115239661A (en) Mechanical part burr detection method and system based on image processing
CN115272301B (en) Automatic cheese defect detection method based on robot
CN112529853A (en) Method and device for detecting damage of netting of underwater aquaculture net cage
CN115994870B (en) Image processing method for enhancing denoising
CN107993193A (en) The tunnel-liner image split-joint method of surf algorithms is equalized and improved based on illumination
CN111127450A (en) Bridge crack detection method and system based on image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant