CN115272301A - Automatic cheese defect detection method based on robot - Google Patents
- Publication number
- CN115272301A (application CN202211144530.4A)
- Authority
- CN
- China
- Prior art keywords
- texture
- reconstruction
- pixel point
- feature map
- sequence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/0004—Industrial image inspection (G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06T7/00—Image analysis; G06T7/0002—Inspection of images, e.g. flaw detection)
- G06T3/4007—Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation (G06T3/00—Geometric image transformations in the plane of the image; G06T3/40—Scaling of whole images or parts thereof)
- G06V10/467—Encoded features or binary features, e.g. local binary patterns [LBP] (G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; G06V10/40—Extraction of image or video features; G06V10/46—Descriptors for shape, contour or point-related descriptors)
- G06V10/54—Extraction of image or video features relating to texture
- G06T2207/30108—Industrial image inspection (G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/30—Subject of image; Context of image processing)
- G06T2207/30124—Fabrics; Textile; Paper
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a robot-based automatic detection method for cheese defects, and belongs to the technical field of automatic cheese defect detection. The method comprises the following steps: obtaining the information loss amount of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map according to each reconstruction key parameter in the reconstruction feature sequence and the value of each reconstruction key parameter; obtaining the effective degree of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map according to the information loss amount; obtaining the priority corresponding to each first texture reconstruction feature map according to the effective degree; obtaining a target first texture reconstruction feature map corresponding to the target surface image according to the priority; and obtaining the yarn disorder degree of the cheese surface corresponding to the target surface image according to the coordinates of each pixel point in the target first texture reconstruction feature map and the effective degree of the reconstruction feature sequence corresponding to each pixel point. The invention can improve the accuracy of cheese defect detection.
Description
Technical Field
The invention relates to the technical field of automatic detection of cheese defects, in particular to a method for automatically detecting cheese defects based on a robot.
Background
A cheese is a package of thread wound on a bobbin, the thread including, but not limited to, yarn or thread tape. During high-speed winding of the cheese, slight slippage can occur between yarn layers, producing yarn disorder between the layers. Since a cheese usually needs to be unwound for further use, yarn disorder between the layers may cause the yarn or filament to break during unwinding, affecting normal use of the cheese.
Existing cheese defect detection is generally performed manually. Manual inspection is highly subjective, wastes a large amount of human resources, and is prone to missed or erroneous detections, so it cannot detect cheese defects accurately.
Disclosure of Invention
The invention provides a robot-based automatic cheese defect detection method to solve the problem of low accuracy in existing cheese defect detection, and adopts the following technical scheme:
in a first aspect, an embodiment of the present invention provides a method for automatically detecting defects of cheese based on a robot, including the following steps:
acquiring a target surface image in the unwinding process of the cheese;
obtaining each first texture feature map corresponding to the target surface image, and the pixel value of each pixel point in each first texture feature map, according to the different set numbers of neighborhood pixel points corresponding to each pixel point in the target surface image; obtaining a second texture feature map corresponding to the target surface image, and the pixel value of each pixel point in the second texture feature map, according to the first texture feature maps;
obtaining an original feature sequence corresponding to each pixel point in each first texture feature map according to the pixel value of each pixel point in that map; obtaining a reference feature sequence corresponding to each pixel point in the second texture feature map according to the pixel value of each pixel point in the second texture feature map; obtaining a reconstruction feature sequence corresponding to each pixel point in each first texture feature map according to the original feature sequences and the reference feature sequences; obtaining a first texture reconstruction feature map corresponding to each first texture feature map according to the reconstruction feature sequences;
obtaining each reference key parameter, and the value of each reference key parameter, in the reference feature sequence corresponding to each pixel point in the second texture feature map according to the parameters in the reference feature sequence;
obtaining each reconstruction key parameter, and the value of each reconstruction key parameter, in the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map according to each reference key parameter and the value of each reference key parameter;
obtaining the information loss amount of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map according to each reference key parameter in each reference feature sequence, the value of each reference key parameter, each reconstruction key parameter in the reconstruction feature sequence, and the value of each reconstruction key parameter;
obtaining the effective degree of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map according to the information loss amount; obtaining the priority corresponding to each first texture reconstruction feature map according to the effective degree;
obtaining a target first texture reconstruction feature map corresponding to the target surface image according to the priority; and obtaining the yarn disorder degree of the cheese surface corresponding to the target surface image according to the coordinates of each pixel point in the target first texture reconstruction feature map and the effective degree of the reconstruction feature sequence corresponding to each pixel point.
Beneficial effects: the method takes each reference key parameter and its value in each reference feature sequence, together with each reconstruction key parameter and its value in each reconstruction feature sequence, as the basis for obtaining the information loss amount of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map; takes that information loss amount as the basis for obtaining the effective degree of each such reconstruction feature sequence; takes the effective degree as the basis for obtaining the priority corresponding to each first texture reconstruction feature map; takes the priority as the basis for obtaining the target first texture reconstruction feature map corresponding to the target surface image; and takes the coordinates of each pixel point in the target first texture reconstruction feature map, together with the effective degree corresponding to each pixel point, as the basis for obtaining the yarn disorder degree of the cheese surface corresponding to the target surface image. This is an automatic processing method; compared with manual inspection, it can improve the accuracy of cheese defect detection.
Preferably, the method for obtaining each first texture feature map corresponding to the target surface image, and the pixel value of each pixel point in each first texture feature map, according to the different set numbers of neighborhood pixel points corresponding to each pixel point in the target surface image, and for obtaining the second texture feature map corresponding to the target surface image according to the first texture feature maps, comprises the following steps:
using an LBP algorithm to obtain each first texture feature map corresponding to the target surface image under each set number of neighborhood pixel points within a same-size neighborhood of each pixel point in the target surface image, together with the LBP value corresponding to each pixel point in each first texture feature map; recording the LBP value as the pixel value of each pixel point in each first texture feature map;
and recording the first texture feature map obtained under the largest set number of neighborhood pixel points within the same-size neighborhood of each pixel point in the target surface image as the second texture feature map corresponding to the target surface image.
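The LBP step above can be sketched as follows. This is a minimal illustration rather than the patent's implementation; the image, the neighbour offsets, and the set numbers (2, 4, 8) are hypothetical examples.

```python
import numpy as np

def lbp_map(img, offsets):
    """Basic LBP: each neighbour whose value is >= the centre contributes one bit.

    img     : 2-D grayscale array (the target surface image)
    offsets : list of (dy, dx) neighbour offsets; their count is the
              "set number of neighborhood pixel points".
    """
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    centre = padded[1:h + 1, 1:w + 1]
    code = np.zeros((h, w), dtype=np.int64)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        code |= (neigh >= centre).astype(np.int64) << bit
    return code

# First texture feature maps for 2, 4 and 8 sampling points in a 3x3 neighbourhood;
# the map built with the largest set number plays the role of the second texture feature map.
img = np.arange(25, dtype=float).reshape(5, 5)
offs8 = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
maps = {p: lbp_map(img, offs8[:p]) for p in (2, 4, 8)}
```

Each map keeps the image's size, and an LBP value fits in `p` bits, so maps built with more sampling points can encode finer texture.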
Preferably, the method for obtaining an original feature sequence corresponding to each pixel point in each first texture feature map according to the pixel value of each pixel point in that map; obtaining a reference feature sequence corresponding to each pixel point in the second texture feature map according to the pixel value of each pixel point in the second texture feature map; obtaining a reconstruction feature sequence corresponding to each pixel point in each first texture feature map according to the original feature sequences and the reference feature sequences; and obtaining a first texture reconstruction feature map corresponding to each first texture feature map according to the reconstruction feature sequences, comprises the following steps:
binary conversion is carried out on the pixel value of each pixel point in each first texture feature map, and binary numbers corresponding to each pixel point in each first texture feature map are obtained;
obtaining an original characteristic sequence corresponding to each pixel point in each first texture characteristic image according to the binary number of each pixel point in each first texture characteristic image;
binary conversion is carried out on the pixel value of each pixel point in the second texture feature map, and binary numbers corresponding to each pixel point in the second texture feature map are obtained;
obtaining a reference characteristic sequence corresponding to each pixel point in the second texture characteristic graph according to the binary number corresponding to each pixel point in the second texture characteristic graph;
interpolating the original characteristic sequence by a polynomial interpolation method; recording the original characteristic sequence after the interpolation processing as a target characteristic sequence corresponding to each pixel point in each first texture characteristic image; the length of the target characteristic sequence is the same as that of the reference characteristic sequence;
adjusting each parameter in the target feature sequence according to the difference between the value of each parameter in the target feature sequence and a preset parameter threshold value, and recording the adjusted target feature sequence as a reconstruction feature sequence corresponding to each pixel point in each first texture feature map;
and obtaining a first texture reconstruction feature map corresponding to each first texture feature map according to the reconstruction feature sequence.
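A minimal sketch of the binary conversion, interpolation and threshold adjustment described above. Two assumptions are made: `np.interp` performs piecewise-linear (degree-1 polynomial) interpolation as a stand-in for the polynomial interpolation the method names, and 0.5 is an assumed value for the preset parameter threshold.

```python
import numpy as np

def to_bit_sequence(value, n_bits):
    """Binary conversion of an LBP pixel value into a fixed-width 0/1 feature sequence."""
    return np.array([(value >> i) & 1 for i in range(n_bits - 1, -1, -1)], dtype=float)

def reconstruct_sequence(original, target_len, threshold=0.5):
    """Resample the original feature sequence to the reference length, then snap
    each interpolated parameter back to 0/1 with the preset threshold."""
    x_old = np.linspace(0.0, 1.0, len(original))
    x_new = np.linspace(0.0, 1.0, target_len)
    interpolated = np.interp(x_new, x_old, original)  # degree-1 polynomial pieces
    return (interpolated >= threshold).astype(float)

orig = to_bit_sequence(0b1011, 4)      # original feature sequence of a 4-point LBP value
recon = reconstruct_sequence(orig, 8)  # reconstruction feature sequence at reference length 8
```

The thresholding step keeps every parameter of the reconstruction feature sequence a valid 0/1 bit, matching the binary origin of the reference feature sequence.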
Preferably, the method for obtaining, according to the parameters in the reference feature sequence, the reference key parameters and the values of the reference key parameters in the reference feature sequence corresponding to each pixel point in the second texture feature map includes:
recording the value of each parameter in the reference feature sequence as the ordinate value corresponding to that parameter, and recording the order of each parameter in the reference feature sequence as the abscissa value corresponding to that parameter;
obtaining the derivative value corresponding to each parameter in the reference feature sequence according to the abscissa and ordinate values corresponding to the parameters;
recording each parameter in the reference feature sequence whose derivative value is 0 as a reference key parameter, and recording the value of that parameter as the value of the reference key parameter; the derivative value corresponding to each reference key parameter in the reference feature sequence is therefore 0.
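Treating each sequence as points of the form (order, value), the key parameters are the positions where the derivative vanishes. A discrete sketch, assuming the derivative is approximated by central differences via `np.gradient`:

```python
import numpy as np

def key_parameters(seq):
    """Indices of key parameters: positions where the discrete derivative of the
    (abscissa = order, ordinate = value) curve is zero, i.e. local extrema or flats."""
    deriv = np.gradient(np.asarray(seq, dtype=float))
    idx = np.flatnonzero(np.isclose(deriv, 0.0))
    return idx, deriv

seq = [0, 1, 2, 1, 0, 0, 1]        # a toy reference feature sequence
idx, deriv = key_parameters(seq)   # the peak at position 2 is the only key parameter
```

With central differences, a strict local maximum or minimum of the sequence yields a zero derivative at its index, which matches the derivative-equals-0 criterion above.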
Preferably, the method for obtaining the information loss amount of the reconstructed feature sequence corresponding to each pixel point in each first texture reconstructed feature map according to each reference key parameter in each reference feature sequence, the value of each reference key parameter, each reconstructed key parameter in the reconstructed feature sequence, and the value of each reconstructed key parameter in the reconstructed feature sequence, includes:
counting the number of the reference key parameters in the reference characteristic sequence;
recording the value of each parameter in the reconstruction feature sequence as the ordinate value corresponding to that parameter, and recording the order of each parameter in the reconstruction feature sequence as the abscissa value corresponding to that parameter;
obtaining the derivative value corresponding to each reconstruction key parameter in the reconstruction feature sequence according to the abscissa and ordinate values corresponding to the parameters in the reconstruction feature sequence;
and obtaining the information loss amount of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to each reference key parameter, the value of each reference key parameter, the derivative value corresponding to each reference key parameter, the number of the reference key parameters, each reconstruction key parameter, the value of each reconstruction key parameter and the derivative value corresponding to each reconstruction key parameter in each reconstruction characteristic sequence.
Preferably, the information loss amount of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map is calculated according to the following formula:

$$S_{n,m}=\frac{1}{K_m}\sum_{k=1}^{K_m}\left(\left|a_{n,m,k}-b_{m,k}\right|+\left|a'_{n,m,k}-b'_{m,k}\right|\right)$$

wherein, $S_{n,m}$ is the information loss amount of the reconstruction feature sequence corresponding to the $m$-th pixel point in the $n$-th first texture reconstruction feature map; $K_m$ is the number of reference key parameters in the reference feature sequence corresponding to the $m$-th pixel point in the second texture feature map; $a_{n,m,k}$ is the value of the $k$-th reconstruction key parameter in the reconstruction feature sequence corresponding to the $m$-th pixel point in the $n$-th first texture reconstruction feature map, and $a'_{n,m,k}$ is the derivative value corresponding to that reconstruction key parameter; $b_{m,k}$ is the value of the $k$-th reference key parameter in the reference feature sequence corresponding to the $m$-th pixel point in the second texture feature map, and $b'_{m,k}$ is the derivative value corresponding to that reference key parameter.
Preferably, the method for obtaining the effective degree of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map according to the information loss amount includes:
aligning the reference feature sequence and the reconstruction feature sequence by using a DTW (dynamic time warping) algorithm to obtain the correspondence between each parameter in the reconstruction feature sequence and each parameter in the reference feature sequence;
according to the corresponding relation, grouping each parameter in the reconstruction feature sequence and each parameter in the reference feature sequence to obtain the number of parameters corresponding to each group in the reference feature sequence and the number of reference key parameters in each group, and the number of parameters corresponding to each group in the reconstruction feature sequence and the number of reconstruction key parameters in each group;
obtaining the position offset of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the number of the parameters corresponding to each group in the reference characteristic sequence, the number of the reference key parameters in each group, the number of the parameters corresponding to each group in the reconstruction characteristic sequence and the number of the reconstruction key parameters in each group;
obtaining the error degree of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the position offset and the information loss;
and obtaining the effective degree of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the error degree.
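The DTW alignment that yields the parameter correspondence can be sketched with a standard dynamic-programming implementation; the absolute-difference cost is an assumption, since the method does not fix a cost function.

```python
import numpy as np

def dtw_path(ref, rec):
    """Classic DTW between the reference and reconstruction feature sequences;
    returns the warping path as (ref_index, rec_index) correspondence pairs."""
    n, m = len(ref), len(rec)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(ref[i - 1] - rec[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    path, i, j = [], n, m          # backtrack from the end of both sequences
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

path = dtw_path([0, 1, 1, 0], [0, 1, 0])  # the repeated 1 in ref maps onto one rec parameter
```

Grouping then follows directly from the path: all pairs sharing a reconstruction index (or a reference index) form one group, whose parameter counts feed the position-offset computation.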
Preferably, the method for obtaining the priority corresponding to each first texture reconstruction feature map according to the validity degree includes:
obtaining a pixel value corresponding to each pixel point in the first texture reconstruction characteristic map corresponding to each first texture characteristic map according to the reconstruction characteristic sequence;
selecting the maximum pixel value in each first texture reconstruction characteristic image and the maximum pixel value in each first texture characteristic image;
calculating the entropy value of the original characteristic sequence corresponding to each pixel point in each first texture characteristic diagram; calculating the entropy value of a reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram;
obtaining the credibility degree of the reconstruction characteristic sequence corresponding to each pixel point in the first texture reconstruction characteristic diagram corresponding to each first texture characteristic diagram according to the pixel value corresponding to each pixel point in each first texture characteristic diagram, the entropy value of the original characteristic sequence corresponding to each pixel point, the maximum pixel value in each first texture characteristic diagram, the pixel value corresponding to each pixel point in the first texture reconstruction characteristic diagram corresponding to each first texture characteristic diagram, the entropy value of the reconstruction characteristic sequence corresponding to each pixel point and the maximum pixel value in each first texture reconstruction characteristic diagram;
and obtaining the corresponding priority of each first texture reconstruction characteristic graph according to the effective degree and the credibility.
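The entropy values used in the credibility computation can be taken as Shannon entropies of the symbol frequencies of a sequence; this is an assumption, as the method does not name the entropy definition.

```python
import numpy as np

def sequence_entropy(seq):
    """Shannon entropy (in bits) of a feature sequence, from symbol frequencies."""
    _, counts = np.unique(np.asarray(seq), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

h_mixed = sequence_entropy([0, 1, 0, 1])   # maximally mixed 0/1 sequence: 1 bit
h_const = sequence_entropy([1, 1, 1, 1])   # constant sequence: 0 bits
```

A reconstruction feature sequence whose entropy stays close to that of the original feature sequence has preserved more of its information content, which is the intuition behind using both entropies in the credibility degree.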
Preferably, the credibility degree of the reconstruction feature sequence corresponding to each pixel point in the first texture reconstruction feature map corresponding to each first texture feature map is calculated according to the following formula:

$$C_{n,m}=\exp\left(-\left|\frac{p_{n,m}}{p_{n}^{\max}}-\frac{q_{n,m}}{q_{n}^{\max}}\right|-\left|E_{n,m}-\hat{E}_{n,m}\right|\right)$$

wherein, $C_{n,m}$ is the credibility degree of the reconstruction feature sequence corresponding to the $m$-th pixel point in the first texture reconstruction feature map corresponding to the $n$-th first texture feature map; $p_{n,m}$ is the pixel value of the $m$-th pixel point in the $n$-th first texture feature map, and $p_{n}^{\max}$ is the maximum pixel value in the $n$-th first texture feature map; $q_{n,m}$ is the pixel value of the $m$-th pixel point in the corresponding first texture reconstruction feature map, and $q_{n}^{\max}$ is the maximum pixel value in that first texture reconstruction feature map; $E_{n,m}$ is the entropy value of the original feature sequence corresponding to the $m$-th pixel point in the $n$-th first texture feature map, and $\hat{E}_{n,m}$ is the entropy value of the reconstruction feature sequence corresponding to the $m$-th pixel point in the corresponding first texture reconstruction feature map.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings needed for the description of the embodiments or of the prior art are briefly introduced below. It should be apparent that the drawings in the following description are merely examples of the invention, and that other drawings may be derived from them by those of ordinary skill in the art without inventive effort.
FIG. 1 is a flow chart of a method for automatically detecting defects of cheese based on a robot according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, rather than all embodiments, and all other embodiments obtained by those skilled in the art based on the embodiments of the present invention belong to the protection scope of the embodiments of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The embodiment provides a cheese defect automatic detection method based on a robot, which is described in detail as follows:
As shown in FIG. 1, the robot-based automatic cheese defect detection method comprises the following steps:
and S001, acquiring a target surface image in the unwinding process of the cheese.
In this embodiment, a camera mounted at the end of the robot arm is used to acquire surface images of the cheese during unwinding. During image acquisition, the cheese in the unwinding machine rotates at a constant speed in one direction while the camera is kept still, with its optical axis perpendicular to the rotating shaft of the cheese so that the camera faces the side of the cheese. Video frames covering one full rotation of the cheese are acquired, the cheese surface images in the collected frames are stitched together, the stitched image is converted to grayscale, and the grayscale stitched image is recorded as the target surface image of the cheese during unwinding.
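A simplified sketch of the stitching and graying step; horizontal concatenation of equal-height frames and BT.601 luminance weights are both assumptions, as the embodiment does not fix a stitching or graying method.

```python
import numpy as np

def stitch_and_gray(frames):
    """Stitch side-view frames of one full rotation horizontally, then convert
    the panorama to grayscale with BT.601 weights (assumed RGB channel order)."""
    panorama = np.concatenate(frames, axis=1)  # H x (sum of widths) x 3
    weights = np.array([0.299, 0.587, 0.114])
    return panorama @ weights                  # H x W grayscale target surface image

# Two hypothetical 4x3 RGB frames with constant intensities 10 and 20
frames = [np.full((4, 3, 3), v) for v in (10.0, 20.0)]
target = stitch_and_gray(frames)
```

In practice the frames would come from the camera at a frame rate matched to the rotation speed, so that consecutive frames cover adjacent strips of the package surface.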
In the embodiment, the rotating angle of the cheese and the frame rate of the image collected by the camera are set according to actual conditions; as a further embodiment, the cheese in the unwinding machine can also be stationary, the camera moving synchronously with the thread guide opening.
In this embodiment, the LBP operator is subsequently used to analyze the target surface image. When the LBP operator is used to generate the texture feature maps corresponding to the target surface image, the number of neighborhood pixel points needed depends on the texture: if the gray values of all pixel points in the target surface image are equal, a single pixel point in the neighborhood of each pixel point suffices to reflect the texture features; if the target surface image is a binary stripe image, at least two neighborhood pixel points are needed to embody the texture features effectively; and if the target surface image is a sinusoidal stripe image then, by the Nyquist sampling theorem, at least three neighborhood pixel points are needed. Thus the more complex the texture, the more neighborhood pixel points are required within a neighborhood of the same size; that is, more complex textures need more sampling points, where "neighborhood pixel points" and "sampling points" mean the same thing. Because the complexity of the texture of the target surface image is not known in advance, the number of sampling points within the same-size neighborhood of each pixel point must be determined when generating the texture feature maps with the LBP operator, and different numbers of sampling points produce different texture features in the generated maps. Too many sampling points cause unnecessary computation when generating the texture feature map, while too few sampling points extract the texture features insufficiently and fail to reflect the texture of the target surface image accurately. Therefore, this embodiment analyzes the texture feature maps corresponding to the target surface image to obtain a map whose number of sampling points is appropriate and which accurately reflects the texture features of the target surface image, and then analyzes that map to obtain the yarn disorder degree of the cheese during unwinding.
Step S002, obtaining each first texture feature map corresponding to the target surface image and the pixel value of each pixel point in each corresponding first texture feature map according to the neighborhood pixel points with different set numbers corresponding to each pixel point in the target surface image; and obtaining a second texture feature map corresponding to the target surface image and a pixel value of each pixel point in the corresponding second texture feature map according to the first texture feature map.
In the embodiment, each first texture feature map corresponding to the target surface image is obtained by analyzing neighborhood pixel points with different set numbers corresponding to each pixel point in the target surface image; analyzing each first texture feature map to obtain a second texture feature map corresponding to the target surface image; and taking the obtained first texture feature map and the second texture feature map as a basis for subsequently obtaining an original feature sequence corresponding to each pixel point in each first texture feature map and a reference feature sequence corresponding to each pixel point in the second texture feature map.
In this embodiment, an LBP algorithm is used to obtain, for each set number of neighborhood pixel points in the same-size neighborhood of each pixel point in the target surface image, a corresponding first texture feature map; the positions of the neighborhood pixel points used in the same-size neighborhood of each pixel point in the target surface image are consistent. The LBP value corresponding to each pixel point in each first texture feature map is obtained and recorded as the pixel value of that pixel point in the corresponding first texture feature map. Through this process, the first texture feature maps corresponding to neighborhood pixel points of different set numbers in the same-size neighborhood of each pixel point in the target surface image, and the pixel values of the pixel points in those first texture feature maps, can be obtained. For example, the LBP algorithm is used to obtain one first texture feature map under 2 neighborhood pixel points in the same-size neighborhood of each pixel point, another under 3 neighborhood pixel points, and so on, the smallest set number of neighborhood pixel points being 2.
In this embodiment, the method for extracting the texture feature map by using the LBP algorithm is prior art, so this embodiment does not describe it in detail. In this embodiment, the maximum set number of neighborhood pixel points in the same-size neighborhood of each pixel point in the target surface image is set as N, and the value of N is 64; as another implementation manner, the maximum set number of neighborhood pixel points in the same-size neighborhood of each pixel point in the target surface image may also be set according to actual conditions.
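As an illustrative sketch of generating a first texture feature map for a configurable set number of neighborhood pixel points (the function name `lbp_map`, the circular sampling layout, and the bilinear-interpolation details are assumptions for illustration, not part of the patent):

```python
import numpy as np

def lbp_map(img, n_neighbors, radius=1):
    """Compute an LBP texture feature map with a configurable number of
    neighborhood points sampled on a circle of the given radius
    (bilinear interpolation for off-grid points).  Border pixels are
    left at 0 for simplicity."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.int64)
    # sampling angles for the n_neighbors points on the circle
    angles = 2 * np.pi * np.arange(n_neighbors) / n_neighbors
    dy, dx = -radius * np.sin(angles), radius * np.cos(angles)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            code = 0
            for k in range(n_neighbors):
                ny, nx = y + dy[k], x + dx[k]
                y0 = min(int(np.floor(ny)), h - 2)
                x0 = min(int(np.floor(nx)), w - 2)
                fy, fx = ny - y0, nx - x0
                # bilinear interpolation of the neighbor grey value
                v = (img[y0, x0] * (1 - fy) * (1 - fx)
                     + img[y0, x0 + 1] * (1 - fy) * fx
                     + img[y0 + 1, x0] * fy * (1 - fx)
                     + img[y0 + 1, x0 + 1] * fy * fx)
                # threshold against the center pixel to form one bit
                code |= int(v >= img[y, x]) << k
            out[y, x] = code
    return out
```

Calling `lbp_map(image, p)` for each set number p of neighborhood pixel points would yield the family of first texture feature maps described above.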
In this embodiment, each first texture feature map corresponding to the target surface image and the pixel values of the pixel points in each first texture feature map can be obtained through the above process; the first texture feature map corresponding to the maximum set number of neighborhood pixel points in the same-size neighborhood of each pixel point in the target surface image is recorded as the second texture feature map corresponding to the target surface image, and the pixel value of each pixel point in that first texture feature map is recorded as the pixel value of the corresponding pixel point in the second texture feature map.
Step S003, obtaining an original feature sequence corresponding to each pixel point in each first texture feature map according to the pixel value of each pixel point in each first texture feature map; obtaining a reference feature sequence corresponding to each pixel point in the second texture feature map according to the pixel value of each pixel point in the second texture feature map; obtaining a reconstruction feature sequence corresponding to each pixel point in each first texture feature map according to the original feature sequence and the reference feature sequence; and obtaining a first texture reconstruction feature map corresponding to each first texture feature map according to the reconstruction feature sequence.
In this embodiment, an original feature sequence corresponding to each pixel point in each first texture feature map and a reference feature sequence corresponding to each pixel point in the second texture feature map are obtained by analyzing the pixel value of each pixel point in each first texture feature map corresponding to the target surface image and the pixel value of each pixel point in the second texture feature map corresponding to the target surface image; obtaining a reconstruction characteristic sequence corresponding to each pixel point in each first texture characteristic image according to the original characteristic sequence and the reference characteristic sequence; and obtaining a first texture reconstruction feature map corresponding to each first texture feature map according to the reconstruction feature sequence corresponding to each pixel point in each first texture feature map. And taking the obtained reconstruction characteristic sequence corresponding to each pixel point in each first texture characteristic graph and the first texture reconstruction characteristic graph corresponding to each first texture characteristic graph as a basis for subsequently calculating the information loss amount and the offset amount corresponding to the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic graph.
(a) Obtaining an original characteristic sequence corresponding to each pixel point in each first texture characteristic image according to the pixel value of each pixel point in each first texture characteristic image; the specific process of obtaining the reference feature sequence corresponding to each pixel point in the second texture feature map according to the pixel value of each pixel point in the second texture feature map is as follows:
in this embodiment, because the LBP value corresponding to each pixel point in each first texture feature map is decimal, the pixel value corresponding to each pixel point in each first texture feature map corresponding to the target surface image is converted into a binary number; the first texture feature maps corresponding to the target surface image do not include the first texture feature map recorded as the second texture feature map, that is, they do not include the first texture feature map obtained under the maximum set number of neighborhood pixel points in the same-size neighborhood of each pixel point in the target surface image. The original feature sequence corresponding to each pixel point in each first texture feature map is obtained from the binary number corresponding to that pixel point; for example, if the pixel value corresponding to a certain pixel point in a certain first texture feature map is 5, and 5 is converted into the binary value 101, then the value of the 1st parameter in the original feature sequence corresponding to that pixel point is 1, the value of the 2nd parameter is 0, and the value of the 3rd parameter is 1. Then, the pixel value of each pixel point in the second texture feature map corresponding to the target surface image is converted into a binary number by the same method, and the reference feature sequence corresponding to each pixel point in the second texture feature map is obtained from that binary number; the length of the reference feature sequence is N.
In this embodiment, through the above process, it can be obtained that each pixel point in the target surface image corresponds to one reference feature sequence and N-1 original feature sequences.
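The decimal-to-binary conversion described above can be sketched as follows (the helper name `to_feature_sequence` and the optional zero-padding of the reference sequence to length N are assumptions for illustration):

```python
def to_feature_sequence(lbp_value, length=None):
    """Convert a decimal LBP value to its binary parameter sequence.
    If `length` is given (e.g. N for the reference sequence), the
    binary expansion is zero-padded on the left to that length;
    otherwise the plain expansion is used, as in the example
    5 -> 101 -> [1, 0, 1]."""
    if length is None:
        bits = format(lbp_value, 'b')
    else:
        bits = format(lbp_value, f'0{length}b')
    return [int(b) for b in bits]
```

For instance, `to_feature_sequence(5)` gives the three-parameter original sequence of the example, while `to_feature_sequence(v, 64)` would give a length-N reference sequence when N is 64.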
(b) Obtaining a reconstruction characteristic sequence corresponding to each pixel point in each first texture characteristic diagram according to the original characteristic sequence corresponding to each pixel point in each first texture characteristic diagram and the reference characteristic sequence corresponding to each pixel point in the second texture characteristic diagram; the specific process of obtaining the first texture reconstruction feature map corresponding to each first texture feature map according to the reconstruction feature sequence corresponding to each pixel point in each first texture feature map is as follows:
in this embodiment, a polynomial interpolation method is used to perform interpolation processing on the original feature sequences corresponding to each pixel point in each first texture feature map, so that the length of the original feature sequence corresponding to each pixel point in each first texture feature map after interpolation processing is consistent with the length of the reference feature sequence; recording an original characteristic sequence corresponding to each pixel point in each first texture characteristic image after interpolation processing as a target characteristic sequence corresponding to each pixel point in each first texture characteristic image; then, according to the difference between the value of each parameter in the target feature sequence corresponding to each pixel point in each first textural feature map and a preset parameter threshold value, adjusting each parameter in the target feature sequence corresponding to each pixel point in each first textural feature map, and recording the target feature sequence corresponding to each pixel point in each adjusted first textural feature map as a reconstruction feature sequence corresponding to each pixel point in each first textural feature map; the specific process of adjusting each parameter in the target feature sequence is as follows: and judging whether the value of each parameter in the target feature sequence corresponding to each pixel point in each first texture feature map is smaller than a preset parameter threshold value, if so, adjusting the corresponding parameter to be 0, and otherwise, adjusting the corresponding parameter to be 1.
In this embodiment, the polynomial interpolation method is the prior art, and therefore, this embodiment is not described in detail; in this embodiment, the value of the preset parameter threshold is set to 0.5; as another embodiment, the preset parameter threshold may also be set according to actual conditions.
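The interpolation and thresholding step can be sketched as follows; `np.interp` (linear interpolation) is used here as a simple stand-in for the patent's polynomial interpolation, whose degree is not specified, and the function name `reconstruct_sequence` is an assumption:

```python
import numpy as np

def reconstruct_sequence(original, target_len, threshold=0.5):
    """Resample an original feature sequence to `target_len` (the
    length N of the reference sequence) and binarise it again:
    parameters below the preset threshold become 0, the rest 1."""
    xs = np.linspace(0.0, 1.0, num=len(original))
    xt = np.linspace(0.0, 1.0, num=target_len)
    # stand-in for the polynomial interpolation of the patent
    resampled = np.interp(xt, xs, original)
    return [0 if v < threshold else 1 for v in resampled]
```

The resulting list has the same length as the reference feature sequence, so the two can be compared parameter by parameter in the later steps.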
In this embodiment, the first texture reconstruction feature map corresponding to each first texture feature map is formed by the reconstruction feature sequences corresponding to the pixel points in that first texture feature map obtained in the above process, and each pixel point in each first texture reconstruction feature map corresponds to one reconstruction feature sequence. In this embodiment, through the above process, each pixel point on the target surface image corresponds to one reference feature sequence and N-1 reconstruction feature sequences.
And step S004, obtaining each reference key parameter and each value of each reference key parameter in the reference characteristic sequence corresponding to each pixel point in the second texture characteristic diagram according to the parameters in the reference characteristic sequence.
In this embodiment, each reference key parameter, a value of each reference key parameter, and a number of reference key parameters in the reference feature sequence corresponding to each pixel point in the second texture feature map are obtained by analyzing the reference feature sequence; and taking each reference key parameter, the value of each reference key parameter and the number of the reference key parameters in the obtained reference feature sequence as a basis for subsequent analysis and calculation of each reconstruction key parameter, the value of each reconstruction key parameter and the number of the reconstruction key parameters in the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map.
In this embodiment, the value of each parameter in the reference feature sequence corresponding to each pixel point in the second texture feature map is recorded as the ordinate value corresponding to each parameter in the reference feature sequence corresponding to each pixel point in the second texture feature map, and the sequence of each parameter in the reference feature sequence corresponding to each pixel point in the second texture feature map is recorded as the abscissa value corresponding to each parameter in the reference feature sequence corresponding to each pixel point in the second texture feature map; obtaining derivative values corresponding to the parameters in the reference characteristic sequence corresponding to the pixel points in the second texture characteristic diagram according to the abscissa value and the ordinate value corresponding to the parameters in the reference characteristic sequence corresponding to the pixel points in the second texture characteristic diagram; and recording the parameter with the derivative of 0 in the reference characteristic sequence corresponding to each pixel point as the reference key parameter in the reference characteristic sequence corresponding to each pixel point in the second texture characteristic diagram, recording the value of the parameter with the derivative of 0 in the reference characteristic sequence corresponding to each pixel point as the value of the reference key parameter in the reference characteristic sequence corresponding to each pixel point in the second texture characteristic diagram, and counting the number of the reference key parameters in the reference characteristic sequence corresponding to each pixel point in the second texture characteristic diagram.
In this embodiment, through the above process, each reference key parameter, a value of each reference key parameter, and a number of reference key parameters in the reference feature sequence corresponding to each pixel point in the second texture feature map may be obtained.
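The key-parameter extraction just described can be sketched as follows. The helper name `key_parameters` is an assumption; for a binary 0/1 sequence, the derivative of the underlying curve vanishes at its extrema, i.e. at the last parameter of each run of equal values before the value changes, which is consistent with the example sequence [00011100], whose 3rd and 6th parameters are key parameters:

```python
def key_parameters(seq):
    """Return (positions, values) of the key parameters of a binary
    feature sequence: the last parameter of each run of equal values
    before the value changes (0-indexed positions)."""
    positions = [i for i in range(len(seq) - 1) if seq[i] != seq[i + 1]]
    values = [seq[i] for i in positions]
    return positions, values
```

The number of key parameters is simply `len(positions)`.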
And step S005, obtaining each reconstruction key parameter and each value of the reconstruction key parameter in the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map according to each reference key parameter and each value of the reference key parameter.
In this embodiment, each reference key parameter, each value of the reference key parameter, and the number of the reference key parameters in the reference feature sequence are analyzed to determine each reconstruction key parameter, each value of the reconstruction key parameter, and the number of the reconstruction key parameters in the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map; and taking the reconstruction key parameters, the values of the reconstruction key parameters and the quantity of the reconstruction key parameters in the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map as a basis for subsequently analyzing and calculating the information loss quantity of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map.
In this embodiment, each reference key parameter, the value of each reference key parameter, and the number of reference key parameters in the reference feature sequence corresponding to each pixel point in the second texture feature map may be obtained according to the above process; the positions of the pixel points in each first texture reconstruction feature map and the positions of the pixel points in the second texture feature map are both consistent with the positions of the pixel points in the target surface image. Therefore, in this embodiment, each reconstruction key parameter, the value of each reconstruction key parameter, and the number of reconstruction key parameters in the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map are determined according to each reference key parameter, the value of each reference key parameter, and the number of reference key parameters in the reference feature sequence corresponding to the same pixel point in the second texture feature map: the reconstruction key parameters occupy the same positions in the reconstruction feature sequence as the reference key parameters do in the reference feature sequence.
For example, if the reference feature sequence corresponding to the 1st pixel point in the second texture feature map is [00011100] and the reconstruction feature sequence corresponding to the 1st pixel point in a certain first texture feature map is [00011000]: since the derivative of the 3rd and 6th parameters in the reference feature sequence is 0, the 3rd and 6th parameters in the reference feature sequence are reference key parameters and are recorded as the 1st and 2nd reference key parameters in the reference feature sequence; the value of the 1st reference key parameter is 0 and the value of the 2nd reference key parameter is 1; the number of reference key parameters in the reference feature sequence is 2. Therefore, the 3rd and 6th parameters in the reconstruction feature sequence are reconstruction key parameters and are recorded as the 1st and 2nd reconstruction key parameters in the reconstruction feature sequence; the value of the 1st reconstruction key parameter is 0 and the value of the 2nd reconstruction key parameter is 0; the number of reconstruction key parameters in the reconstruction feature sequence is 2.
Step S006 is to obtain an information loss amount of the reconstructed feature sequence corresponding to each pixel point in each first texture reconstructed feature map according to each reference key parameter in each reference feature sequence, a value of each reference key parameter, and each reconstructed key parameter and a value of each reconstructed key parameter in the reconstructed feature sequence.
In this embodiment, the information loss amount of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map is obtained by analyzing each reference key parameter, the value of each reference key parameter, and the number of the reference key parameters in the reference feature sequence corresponding to each pixel point, and each reconstruction key parameter, the value of each reconstruction key parameter, and the number of the reconstruction key parameters corresponding to the reconstruction feature sequence corresponding to each pixel point; and taking the information loss amount as a basis for calculating the effective degree of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram through subsequent analysis.
In this embodiment, the value of each parameter in the reconstructed feature sequence corresponding to each pixel point in each first texture feature map is recorded as a longitudinal coordinate value corresponding to each parameter in the reconstructed feature sequence corresponding to each pixel point in the first texture feature map, and the sequence of each parameter in the reconstructed feature sequence corresponding to each pixel point in each first texture feature map is recorded as an abscissa value corresponding to each parameter in the reconstructed feature sequence corresponding to each pixel point in the first texture feature map; and obtaining derivative values corresponding to all parameters in the reconstruction characteristic sequence corresponding to all pixel points in each first texture characteristic diagram according to the longitudinal coordinate values and the corresponding transverse coordinate values of all parameters in the reconstruction characteristic sequence corresponding to all pixel points in each first texture characteristic diagram, obtaining derivative values corresponding to all reconstruction key parameters in the reconstruction characteristic sequence corresponding to all pixel points in each first texture characteristic diagram, and counting the quantity of the reconstruction key parameters in the reconstruction characteristic sequence corresponding to all pixel points in each first texture characteristic diagram.
In this embodiment, the information loss amount of the reconstructed feature sequence corresponding to each pixel point in each first texture reconstructed feature map is obtained according to each reference key parameter, the value of each reference key parameter, the derivative value and the number of the reference key parameters corresponding to each pixel point in the second texture feature map, each reconstructed key parameter, the value of each reconstructed key parameter, and the derivative value corresponding to each reconstructed key parameter in the reconstructed feature sequence corresponding to each pixel point in each first texture feature map; and calculating the information loss amount of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the following formula:
wherein:

$$S_{ij}=\sum_{d=1}^{n_{j}}\left(\left|x_{ijd}-y_{jd}\right|+\left|x_{ijd}^{\prime}-y_{jd}^{\prime}\right|\right)$$

where $S_{ij}$ is the information loss amount of the reconstruction feature sequence corresponding to the $j$-th pixel point in the $i$-th first texture reconstruction feature map; $n_{j}$ is the number of reference key parameters in the reference feature sequence corresponding to the $j$-th pixel point in the second texture feature map, which is also the number of reconstruction key parameters in the reconstruction feature sequence corresponding to the $j$-th pixel point in the $i$-th first texture reconstruction feature map; $x_{ijd}$ is the value of the $d$-th reconstruction key parameter in the reconstruction feature sequence corresponding to the $j$-th pixel point in the $i$-th first texture reconstruction feature map, and $x_{ijd}^{\prime}$ is the derivative value corresponding to that reconstruction key parameter; $y_{jd}$ is the value of the $d$-th reference key parameter in the reference feature sequence corresponding to the $j$-th pixel point in the second texture feature map, and $y_{jd}^{\prime}$ is the derivative value corresponding to that reference key parameter.

In this embodiment, the larger $\left|x_{ijd}-y_{jd}\right|$ is, the greater the information loss amount of the reconstruction feature sequence corresponding to the $j$-th pixel point in the $i$-th first texture reconstruction feature map; likewise, the larger $\left|x_{ijd}^{\prime}-y_{jd}^{\prime}\right|$ is, the greater the information loss amount of that reconstruction feature sequence.
In this embodiment, the information loss amount of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map can be obtained through the above calculation process.
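The information-loss calculation can be sketched as follows. This is a minimal reconstruction assuming the loss accumulates absolute differences of key-parameter values and of their derivative values, which matches the stated monotonicity; the helper name `information_loss` and the use of `np.gradient` for the derivative of the sequence are assumptions:

```python
import numpy as np

def information_loss(ref_seq, rec_seq, key_positions):
    """Information loss of a reconstruction feature sequence whose key
    parameters sit at the positions of the reference key parameters
    (0-indexed): sum of absolute differences of the values and of the
    derivative values at those positions."""
    ref = np.asarray(ref_seq, dtype=float)
    rec = np.asarray(rec_seq, dtype=float)
    d_ref = np.gradient(ref)   # derivative w.r.t. parameter order
    d_rec = np.gradient(rec)
    loss = 0.0
    for p in key_positions:
        loss += abs(rec[p] - ref[p]) + abs(d_rec[p] - d_ref[p])
    return loss
```

On the worked example ([00011100] vs [00011000], key positions 3 and 6), only the value of the 2nd key parameter differs, so the loss is 1.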
Step S007, obtaining the effective degree of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the information loss amount; and obtaining the priority corresponding to each first texture reconstruction characteristic graph according to the effective degree.
In this embodiment, the effectiveness of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map is obtained by analyzing the information loss amount and the position offset of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map; obtaining the priority corresponding to each first texture reconstruction feature map according to the effective degree of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map; and taking the priority corresponding to each obtained first texture reconstruction characteristic graph as a basis for subsequently obtaining a target first texture reconstruction characteristic graph corresponding to the target surface image.
(a) The specific process of obtaining the effective degree of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map according to the information loss amount and the position offset of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map is as follows:
in this embodiment, the specific process of obtaining the position offset of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map, by analyzing each reconstruction feature sequence and the reference feature sequence corresponding to each pixel point, is as follows: a DTW algorithm is used to align the reference feature sequence corresponding to each pixel point with each reconstruction feature sequence corresponding to that pixel point, which yields the correspondence between the parameters in each reconstruction feature sequence and the parameters in the reference feature sequence; according to this correspondence, the parameters in each reconstruction feature sequence and the parameters in the reference feature sequence are grouped; from the grouping result, the number of parameters and the number of reference key parameters in each group of the reference feature sequence, and the number of parameters and the number of reconstruction key parameters in each group of each reconstruction feature sequence, are obtained. For example, if the reference feature sequence corresponding to the 1st pixel point in the second texture feature map is [00011100] and the reconstruction feature sequence corresponding to the 1st pixel point in a certain first texture feature map is [00011000], then the 1st, 2nd and 3rd parameters in the reference feature sequence form its 1st group, the 4th, 5th and 6th parameters its 2nd group, and the 7th and 8th parameters its 3rd group; the 1st group of the reference feature sequence has 3 parameters and 1 reference key parameter, the 2nd group has 3 parameters and 1 reference key parameter, and the 3rd group has 2 parameters and 0 reference key parameters. Likewise, the 1st, 2nd and 3rd parameters in the reconstruction feature sequence form its 1st group, the 4th and 5th parameters its 2nd group, and the 6th, 7th and 8th parameters its 3rd group; the 1st group of the reconstruction feature sequence has 3 parameters and 1 reconstruction key parameter, the 2nd group has 2 parameters and 0 reconstruction key parameters, and the 3rd group has 3 parameters and 1 reconstruction key parameter.
In this embodiment, the position offset of the reconstructed feature sequence corresponding to each pixel point in each first texture reconstructed feature map is obtained according to the quantity of the parameters corresponding to each group in the reference feature sequence corresponding to each pixel point, the quantity of the reference key parameters in each group, the quantity of the parameters corresponding to each group in each reconstructed feature sequence corresponding to each pixel point, and the quantity of the reconstruction key parameters in each group; calculating the position offset of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic graph according to the following formula:
wherein:

$$P_{ij}=\sum_{k=1}^{m_{j}}\left(\left|p_{jk}-\tilde{p}_{ijk}\right|+\left|q_{jk}-\tilde{q}_{ijk}\right|\right)$$

where $P_{ij}$ is the position offset of the reconstruction feature sequence corresponding to the $j$-th pixel point in the $i$-th first texture reconstruction feature map; $m_{j}$ is the number of groups in the reference feature sequence corresponding to the $j$-th pixel point in the second texture feature map; $p_{jk}$ is the number of parameters in the $k$-th group of the reference feature sequence corresponding to the $j$-th pixel point in the second texture feature map, and $q_{jk}$ is the number of reference key parameters in that group; $\tilde{p}_{ijk}$ is the number of parameters in the $k$-th group of the reconstruction feature sequence corresponding to the $j$-th pixel point in the $i$-th first texture reconstruction feature map, and $\tilde{q}_{ijk}$ is the number of reconstruction key parameters in that group.

In this embodiment, the larger $\left|p_{jk}-\tilde{p}_{ijk}\right|$ is, the greater the position offset of the reconstruction feature sequence corresponding to the $j$-th pixel point in the $i$-th first texture reconstruction feature map; likewise, the larger $\left|q_{jk}-\tilde{q}_{ijk}\right|$ is, the greater the position offset of that reconstruction feature sequence.
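The grouping and the position offset can be sketched as follows. In the worked example the DTW correspondence amounts to matching the k-th run of equal values in the reference sequence to the k-th run in the reconstruction sequence, so this sketch groups by runs; the helper names and the absolute-difference combination are assumptions consistent with the stated monotonicity:

```python
def run_groups(seq, key_positions):
    """Split a binary sequence into runs of equal values; for each run
    return (number of parameters, number of key parameters from
    `key_positions` falling inside the run).  Positions are 0-indexed."""
    groups, start = [], 0
    for i in range(1, len(seq) + 1):
        if i == len(seq) or seq[i] != seq[i - 1]:
            n_keys = sum(1 for p in key_positions if start <= p < i)
            groups.append((i - start, n_keys))
            start = i
    return groups

def position_offset(ref_seq, rec_seq, key_positions):
    """Sum over aligned groups of the absolute differences in parameter
    count and in key-parameter count."""
    ref_groups = run_groups(ref_seq, key_positions)
    rec_groups = run_groups(rec_seq, key_positions)
    return sum(abs(p - pr) + abs(q - qr)
               for (p, q), (pr, qr) in zip(ref_groups, rec_groups))
```

On the worked example, the reference groups are (3,1), (3,1), (2,0) and the reconstruction groups are (3,1), (2,0), (3,1), exactly as listed above.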
In this embodiment, the position offset of the reconstructed feature sequence corresponding to each pixel point in each first texture reconstructed feature map can be obtained through the above process; in this embodiment, the error degree of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map is obtained according to the position offset and the information loss amount of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map; calculating the error degree of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the following formula:
wherein:

$$E_{ij}=P_{ij}\cdot S_{ij}$$

where $E_{ij}$ is the error degree of the reconstruction feature sequence corresponding to the $j$-th pixel point in the $i$-th first texture reconstruction feature map, $P_{ij}$ is the position offset of that reconstruction feature sequence, and $S_{ij}$ is the information loss amount of that reconstruction feature sequence. In this embodiment, the larger the value of $E_{ij}$ is, the higher the error degree of the reconstruction feature sequence corresponding to the $j$-th pixel point in the $i$-th first texture reconstruction feature map.
In this embodiment, because each pixel point on the target surface image corresponds to N-1 error degrees, an error degree curve corresponding to each pixel point on the target surface image is obtained by taking the number of the first texture reconstruction feature maps as the horizontal axis and the error degree corresponding to each pixel point in each first texture reconstruction feature map as the vertical axis; removing noise on an error degree curve corresponding to each pixel point in the target surface image by using median filtering, and recording each error degree corresponding to each pixel point in the target surface image after the noise is removed as each target error degree corresponding to each pixel point in the target surface image; obtaining the target error degree of a reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic map according to each target error degree corresponding to each pixel point in the target surface image; obtaining the effective degree of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic map according to the target error degree of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic map; calculating the effective degree of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the following formula:
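The noise-removal step above can be sketched as follows; the sliding-window size is not specified in the embodiment, so `window=3` is an assumption:

```python
from statistics import median

def median_filter(curve, window=3):
    """Remove impulsive noise from an error-degree curve with a sliding median.

    `curve` holds the N-1 error degrees of one pixel point across the first
    texture reconstruction feature maps; `window` (odd, assumed 3) is the
    neighbourhood over which each value is replaced by the local median.
    """
    half = window // 2
    filtered = []
    for i in range(len(curve)):
        lo = max(0, i - half)
        hi = min(len(curve), i + half + 1)
        filtered.append(median(curve[lo:hi]))
    return filtered

# The isolated spike at index 2 is suppressed while the smooth trend is kept.
print(median_filter([0.1, 0.2, 9.0, 0.3, 0.4]))
```

The filtered values are the "target error degrees" used in the next step.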
wherein $e_a^b$ is the effective degree of the reconstruction feature sequence corresponding to the $b$-th pixel point in the $a$-th first texture reconstruction feature map, and $\hat{W}_a^b$ is the target error degree of the reconstruction feature sequence corresponding to the $b$-th pixel point; the larger the value of $\hat{W}_a^b$, the less effective the reconstruction feature sequence corresponding to the $b$-th pixel point in the $a$-th first texture reconstruction feature map.
(b) The specific process of obtaining the priority corresponding to each first texture reconstruction feature map according to the effective degree and the credibility of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map is as follows:
in this embodiment, the specific process of analyzing each first texture feature map and the first texture reconstruction feature map corresponding to each first texture feature map to obtain the credibility of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map is as follows: obtaining a pixel value corresponding to each pixel point in the first texture reconstruction characteristic map corresponding to each first texture characteristic map according to the reconstruction characteristic sequence corresponding to each pixel point in the first texture reconstruction characteristic map corresponding to each first texture characteristic map; selecting a maximum pixel value in each first texture reconstruction characteristic image and a maximum pixel value in each first texture characteristic image; then calculating the entropy of the original characteristic sequence corresponding to each pixel point in each first texture characteristic diagram and calculating the entropy of the reconstruction characteristic sequence corresponding to each pixel point in the first texture reconstruction characteristic diagram corresponding to each first texture characteristic diagram; obtaining the credibility of the corresponding reconstruction feature sequence of each pixel in the first texture reconstruction feature map corresponding to each first texture feature map according to the pixel value corresponding to each pixel in each first texture feature map, the entropy of the corresponding original feature sequence of each pixel, the maximum pixel value in each first texture feature map, the pixel value corresponding to each pixel in the first texture reconstruction feature map corresponding to each first texture feature map, the entropy of the corresponding reconstruction feature sequence of each pixel and the maximum pixel value in each first texture reconstruction feature map:
wherein $k_c^b$ is the credibility of the reconstruction feature sequence corresponding to the $b$-th pixel point in the first texture reconstruction feature map corresponding to the $c$-th first texture feature map, $I_c^b$ is the pixel value corresponding to the $b$-th pixel point in the $c$-th first texture feature map, $I_{c,\max}$ is the maximum pixel value in the $c$-th first texture feature map, $\tilde{I}_{c,\max}$ is the maximum pixel value in the first texture reconstruction feature map corresponding to the $c$-th first texture feature map, $\tilde{I}_c^b$ is the pixel value corresponding to the $b$-th pixel point in that first texture reconstruction feature map, $H_c^b$ is the entropy value of the original feature sequence corresponding to the $b$-th pixel point in the $c$-th first texture feature map, and $\tilde{H}_c^b$ is the entropy value of the reconstruction feature sequence corresponding to the $b$-th pixel point in that first texture reconstruction feature map.
In this embodiment, the larger the value of $k_c^b$, the higher the credibility of the reconstruction feature sequence corresponding to the $b$-th pixel point in the first texture reconstruction feature map corresponding to the $c$-th first texture feature map; the smaller the differences between the corresponding pixel values and between the corresponding entropy values, the higher that credibility.
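The entropy values entering the credibility comparison can be computed per feature sequence; reading them as Shannon entropy of the sequence's value distribution is an assumption, since the embodiment does not fix the definition:

```python
import math
from collections import Counter

def sequence_entropy(seq):
    """Shannon entropy (bits) of the value distribution of a feature sequence.

    Assumed definition: a constant sequence yields 0 bits; a sequence whose
    parameter values are evenly split yields maximal entropy.
    """
    counts = Counter(seq)
    total = len(seq)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A constant sequence carries no information; a balanced binary one carries 1 bit.
print(sequence_entropy([7, 7, 7, 7]))
print(sequence_entropy([0, 1, 0, 1]))
```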
In this embodiment, the validity degree and the credibility degree of the reconstruction feature sequence corresponding to each pixel point on each first texture reconstruction feature map can be obtained according to the above process; obtaining the priority corresponding to each first texture reconstruction feature map according to the effective degree and the credibility of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map; calculating the corresponding priority of each first texture reconstruction feature map according to the following formula:
wherein $Y_a$ is the priority corresponding to the $a$-th first texture reconstruction feature map, $M_a$ is the number of pixel points in the $a$-th first texture reconstruction feature map, $e_a^b$ is the effective degree of the reconstruction feature sequence corresponding to the $b$-th pixel point in the $a$-th first texture reconstruction feature map, and $k_a^b$ is the credibility of the reconstruction feature sequence corresponding to the $b$-th pixel point; the larger the values of $e_a^b$ and $k_a^b$, the larger the value of $Y_a$, and a larger value of $Y_a$ indicates that, with an appropriate set number of neighborhood pixel points, the $a$-th first texture reconstruction feature map can more clearly reflect the texture information of the target surface image.
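The priority formula itself appears only as an image in the original; one aggregation consistent with the stated monotonicity (priority grows with both the effective degree and the credibility, accumulated over the map's pixel points) is the per-pixel product averaged over the map. A minimal sketch under that assumption:

```python
def priority(effectiveness, credibility):
    """Assumed aggregation: mean over pixels of effectiveness * credibility.

    Both inputs are per-pixel lists for one first texture reconstruction
    feature map; raising either per-pixel value raises the priority, which
    matches the monotonicity described in the embodiment.
    """
    assert len(effectiveness) == len(credibility)
    return sum(e * k for e, k in zip(effectiveness, credibility)) / len(effectiveness)

print(priority([0.9, 0.8], [1.0, 0.5]))  # mean of the products 0.9 and 0.4
```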
Step S008, obtaining a target first texture reconstruction characteristic diagram corresponding to the target surface image according to the priority; and obtaining the yarn disorder degree of the cheese surface corresponding to the target surface image according to the coordinates of each pixel point in the target first texture reconstruction characteristic diagram and the effective degree of the reconstruction characteristic sequence corresponding to each pixel point.
In this embodiment, a target first texture reconstruction feature map corresponding to the target surface image is obtained by analyzing the priority corresponding to each first texture reconstruction feature map; and analyzing the coordinates of each pixel point in the target first texture reconstruction characteristic diagram and the effectiveness corresponding to each pixel point to obtain the yarn disorder degree of the surface of the cheese corresponding to the target surface image.
In this embodiment, the first texture reconstruction feature map with the maximum priority is taken as the target first texture reconstruction feature map corresponding to the target surface image; a feature vector $F^b=(x^b, y^b, e^b)$ is obtained for each pixel point in the target first texture reconstruction feature map according to the coordinates of each pixel point and its effective degree, wherein $F^b$ is the feature vector corresponding to the $b$-th pixel point in the target first texture reconstruction feature map, $x^b$ is the abscissa of the $b$-th pixel point, $y^b$ is the ordinate of the $b$-th pixel point, and $e^b$ is the effective degree corresponding to the $b$-th pixel point.
In the embodiment, feature vectors corresponding to each pixel point in the target first texture reconstruction feature map are clustered by using mean shift clustering, the number of pixel points which are not divided into any one cluster is obtained, and the number of the pixel points which are not divided into any one cluster is recorded as the number of discrete points in the target first texture reconstruction feature map; obtaining the yarn disorder degree of the surface of the cheese corresponding to the target surface image according to the number of discrete points in the target first texture reconstruction characteristic diagram; calculating the disorder degree of the surface of the cheese corresponding to the target surface image according to the following formula:
wherein $R$ is the yarn disorder degree of the cheese surface corresponding to the target surface image, $n_d$ is the number of discrete points in the target first texture reconstruction feature map, and $n$ is the number of pixel points in the target first texture reconstruction feature map; the larger the value of $R$, the more serious the yarn disorder degree of the cheese surface corresponding to the target surface image.
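With the discrete points counted, the disorder degree reduces to a ratio; taking it as the fraction of unclustered pixel points matches the stated monotonicity, though the exact formula is an assumption:

```python
def yarn_disorder(num_discrete, num_pixels):
    """Assumed form: fraction of pixels that mean-shift clustering left
    outside every cluster (the 'discrete points' of the embodiment)."""
    return num_discrete / num_pixels

print(yarn_disorder(12, 400))  # 3% of the pixels are discrete points
```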
Advantageous effects: in this embodiment, each reference key parameter and its value in each reference feature sequence, together with each reconstruction key parameter and its value in the reconstruction feature sequence, serve as the basis for obtaining the information loss amount of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map; the information loss amount serves as the basis for obtaining the effective degree of that reconstruction feature sequence; the effective degree serves as the basis for obtaining the priority corresponding to each first texture reconstruction feature map; the priority serves as the basis for obtaining the target first texture reconstruction feature map corresponding to the target surface image; and the coordinates of the pixel points in the target first texture reconstruction feature map, together with the effective degrees corresponding to those pixel points, serve as the basis for obtaining the yarn disorder degree of the cheese surface corresponding to the target surface image. The cheese defect detection of this embodiment is an automatic processing method, and compared with manual inspection it can improve the accuracy of cheese defect detection.
It should be noted that the order of the above-mentioned embodiments of the present invention is merely for description and does not represent the merits of the embodiments, and in some cases, actions or steps recited in the claims may be executed in an order different from the order of the embodiments and still achieve desirable results.
Claims (9)
1. A cheese defect automatic detection method based on a robot is characterized by comprising the following steps:
acquiring a target surface image in the unwinding process of the cheese;
obtaining each first texture feature map corresponding to the target surface image and a pixel value of each pixel point in each corresponding first texture feature map according to the neighborhood pixel points with different set numbers corresponding to each pixel point in the target surface image; obtaining a second texture feature map corresponding to the target surface image and a pixel value of each pixel point in the corresponding second texture feature map according to the first texture feature map;
obtaining an original characteristic sequence corresponding to each pixel point in each first textural feature map according to the pixel value of each pixel point in each first textural feature map; obtaining a reference characteristic sequence corresponding to each pixel point in the second texture characteristic diagram according to the pixel value of each pixel point in the second texture characteristic diagram; obtaining a reconstruction characteristic sequence corresponding to each pixel point in each first texture characteristic image according to the original characteristic sequence and the reference characteristic sequence; obtaining a first texture reconstruction feature map corresponding to each first texture feature map according to the reconstruction feature sequence;
obtaining each reference key parameter and the value of each reference key parameter in the reference feature sequence corresponding to each pixel point in the second texture feature map according to the parameters in the reference feature sequence;
obtaining each reconstruction key parameter and each value of the reconstruction key parameter in the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map according to each reference key parameter and each value of the reference key parameter;
obtaining the information loss amount of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to each reference key parameter in each reference characteristic sequence, the value of each reference key parameter, each reconstruction key parameter in the reconstruction characteristic sequence and the value of each reconstruction key parameter;
obtaining the effective degree of a reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the information loss amount; according to the effective degree, obtaining the corresponding priority of each first texture reconstruction characteristic map;
obtaining a target first texture reconstruction feature map corresponding to the target surface image according to the priority; and obtaining the yarn disorder degree of the cheese surface corresponding to the target surface image according to the coordinates of each pixel point in the target first texture reconstruction characteristic diagram and the effective degree of the reconstruction characteristic sequence corresponding to each pixel point.
2. The automatic cheese defect detection method based on the robot according to claim 1, wherein the pixel values of the pixel points in each first texture feature map corresponding to the target surface image and the pixel points in each corresponding first texture feature map are obtained according to the neighborhood pixel points with different set numbers corresponding to the pixel points in the target surface image; according to the first texture feature map, a method for obtaining a second texture feature map corresponding to the target surface image comprises the following steps:
obtaining each first texture feature map corresponding to the target surface image under the neighborhood pixel points with different set numbers in the same size neighborhood of each pixel point in the target surface image and the LBP value corresponding to each pixel point on each corresponding first texture feature map by using an LBP algorithm; recording the LBP value as the pixel value of each pixel point on each first texture feature map;
and recording a first texture feature map corresponding to the target surface image under the neighborhood pixel points with the maximum set number in the same-size neighborhood of each pixel point in the target surface image as a second texture feature map corresponding to the target surface image.
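For the common 8-neighbour case, the LBP computation of claim 2 can be sketched as follows; larger set numbers of neighborhood pixel points would sample more points around the same centre, but the thresholding-and-packing idea is identical:

```python
def lbp_value(image, r, c):
    """Basic 8-neighbour LBP: threshold each neighbour against the centre
    pixel and pack the comparison bits into one byte, which becomes the
    pixel value in the texture feature map."""
    centre = image[r][c]
    # Clockwise from the top-left neighbour.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if image[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

img = [[10, 10, 10],
       [10, 20, 10],
       [10, 10, 10]]
print(lbp_value(img, 1, 1))  # all neighbours darker than the centre -> 0
```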
3. The method according to claim 1, wherein an original feature sequence corresponding to each pixel point in each first texture feature map is obtained according to the pixel value of each pixel point in each first texture feature map; obtaining a reference characteristic sequence corresponding to each pixel point in the second texture characteristic diagram according to the pixel value of each pixel point in the second texture characteristic diagram; obtaining a reconstruction characteristic sequence corresponding to each pixel point in each first texture characteristic image according to the original characteristic sequence and the reference characteristic sequence; the method for obtaining the first texture reconstruction feature map corresponding to each first texture feature map according to the reconstruction feature sequence comprises the following steps:
performing binary conversion on the pixel value of each pixel point in each first textural feature map to obtain a binary number corresponding to each pixel point in each first textural feature map;
obtaining an original characteristic sequence corresponding to each pixel point in each first texture characteristic image according to the binary number of each pixel point in each first texture characteristic image;
binary conversion is carried out on the pixel value of each pixel point in the second texture feature map, and binary numbers corresponding to each pixel point in the second texture feature map are obtained;
obtaining a reference characteristic sequence corresponding to each pixel point in the second texture characteristic diagram according to the binary number corresponding to each pixel point in the second texture characteristic diagram;
carrying out interpolation processing on the original characteristic sequence by utilizing a polynomial interpolation method; recording the original characteristic sequence after the interpolation processing as a target characteristic sequence corresponding to each pixel point in each first texture characteristic image; the length of the target characteristic sequence is the same as that of the reference characteristic sequence;
adjusting each parameter in the target feature sequence according to the difference between the value of each parameter in the target feature sequence and a preset parameter threshold value, and recording the adjusted target feature sequence as a reconstructed feature sequence corresponding to each pixel point in each first texture feature map;
and obtaining a first texture reconstruction feature map corresponding to each first texture feature map according to the reconstruction feature sequence.
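The interpolation step of claim 3 can be sketched with Lagrange polynomial interpolation (one standard polynomial-interpolation method; the claim does not fix which one), stretching an original feature sequence to the length of the reference feature sequence:

```python
def lagrange_resample(seq, target_len):
    """Resample `seq` to `target_len` points with Lagrange polynomial
    interpolation, so a shorter original feature sequence can be compared
    against the longer reference feature sequence."""
    n = len(seq)
    xs = list(range(n))

    def poly(x):
        # Evaluate the degree-(n-1) interpolating polynomial at x.
        total = 0.0
        for i, yi in enumerate(seq):
            term = yi
            for j in range(n):
                if j != i:
                    term *= (x - xs[j]) / (xs[i] - xs[j])
            total += term
        return total

    step = (n - 1) / (target_len - 1)
    return [poly(k * step) for k in range(target_len)]

# Stretching [0, 2, 4] (a line) to 5 samples keeps it linear.
print(lagrange_resample([0, 2, 4], 5))
```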
4. The method for automatically detecting the defects of the cheese based on the robot as claimed in claim 1, wherein the method for obtaining the values of the reference key parameters and the reference key parameters in the reference feature sequence corresponding to the pixel points in the second texture feature map according to the parameters in the reference feature sequence comprises:
recording the value of each parameter in the reference characteristic sequence as a longitudinal coordinate value corresponding to each parameter in the reference characteristic sequence, and recording the sequence of each parameter in the reference characteristic sequence as an abscissa value corresponding to each parameter in the reference characteristic sequence;
obtaining a derivative value corresponding to each parameter in the reference characteristic sequence according to an abscissa value and a corresponding ordinate value corresponding to each parameter in the reference characteristic sequence;
recording a parameter with a derivative of 0 in the reference characteristic sequence as a reference key parameter in the reference characteristic sequence, and recording a value of the parameter with a derivative of 0 in the reference characteristic sequence as a value of the reference key parameter in the reference characteristic sequence; and the derivative value corresponding to each reference key parameter in the reference characteristic sequence is 0.
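On a discrete feature sequence, a parameter with "derivative 0" is naturally read as a local extremum of the (order, value) curve; the sign-change test below is one such reading:

```python
def key_parameter_indices(seq):
    """Indices whose discrete derivative crosses zero, i.e. local extrema
    of the curve formed by plotting each parameter's value (ordinate)
    against its order in the sequence (abscissa)."""
    keys = []
    for i in range(1, len(seq) - 1):
        left = seq[i] - seq[i - 1]
        right = seq[i + 1] - seq[i]
        if left * right < 0:  # slope changes sign: peak or valley
            keys.append(i)
    return keys

print(key_parameter_indices([1, 3, 2, 4, 1]))  # peaks/valleys at 1, 2, 3
```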
5. The robot-based automatic cheese defect detection method according to claim 1 or 4, wherein the method for obtaining the information loss amount of the reconstructed feature sequence corresponding to each pixel point in each first texture reconstructed feature map according to each reference key parameter and each reference key parameter value in each reference feature sequence and each reconstructed key parameter value in the reconstructed feature sequence comprises:
counting the number of the reference key parameters in the reference characteristic sequence;
recording the value of each parameter in the reconstruction feature sequence as the ordinate value corresponding to each parameter in the reconstruction feature sequence, and recording the sequence of each parameter in the reconstruction feature sequence as the abscissa value corresponding to each parameter in the reconstruction feature sequence;
obtaining a derivative value corresponding to each reconstruction key parameter in the reconstruction feature sequence according to the ordinate value and the abscissa value corresponding to each parameter in the reconstruction feature sequence;
and obtaining the information loss amount of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to each reference key parameter, the value of each reference key parameter, the derivative value corresponding to each reference key parameter and the number of the reference key parameters in each reference characteristic sequence, each reconstruction key parameter, the value of each reconstruction key parameter and the derivative value corresponding to each reconstruction key parameter in the reconstruction characteristic sequence.
6. The automatic cheese defect detection method based on robot according to claim 5, wherein the information loss amount of the reconstructed feature sequence corresponding to each pixel point in each first texture reconstructed feature map is calculated according to the following formula:
wherein $S_a^b$ is the information loss amount of the reconstruction feature sequence corresponding to the $b$-th pixel point in the $a$-th first texture reconstruction feature map, $Q^b$ is the number of reference key parameters in the reference feature sequence corresponding to the $b$-th pixel point in the second texture feature map, $\tilde{z}_{a,q}^b$ is the value of the $q$-th reconstruction key parameter in the reconstruction feature sequence corresponding to the $b$-th pixel point in the $a$-th first texture reconstruction feature map, $\tilde{d}_{a,q}^b$ is the derivative value corresponding to the $q$-th reconstruction key parameter in that reconstruction feature sequence, $z_q^b$ is the value of the $q$-th reference key parameter in the reference feature sequence corresponding to the $b$-th pixel point in the second texture feature map, and $d_q^b$ is the derivative value corresponding to the $q$-th reference key parameter in that reference feature sequence.
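Since the formula is given as an image in the original, the code below only illustrates one form consistent with the listed quantities (paired key-parameter values and derivatives compared term by term and normalised by the key-parameter count) and should be read as an assumption, not the claimed formula:

```python
def information_loss(ref_keys, rec_keys):
    """Assumed form: mean absolute discrepancy in value and derivative
    between paired reference and reconstruction key parameters.

    Each key parameter is a (value, derivative) pair; the pairing itself
    comes from the DTW-based grouping of claim 7.
    """
    q = len(ref_keys)
    loss = 0.0
    for (zv, zd), (rv, rd) in zip(ref_keys, rec_keys):
        loss += abs(zv - rv) + abs(zd - rd)
    return loss / q

print(information_loss([(3.0, 0.0), (1.0, 0.0)], [(2.5, 0.1), (1.0, 0.0)]))
```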
7. The method for automatically detecting the defects of the cheese based on the robot as claimed in claim 1, wherein the method for obtaining the effective degree of the reconstruction feature sequence corresponding to each pixel point in each first texture reconstruction feature map according to the information loss amount comprises:
normalizing the reference characteristic sequence and the reconstruction characteristic sequence by using a DTW algorithm to obtain the corresponding relation between each parameter in the reconstruction characteristic sequence and each parameter in the reference characteristic sequence;
according to the corresponding relation, grouping each parameter in the reconstruction feature sequence and each parameter in the reference feature sequence to obtain the number of parameters corresponding to each group in the reference feature sequence, the number of reference key parameters in each group, the number of parameters corresponding to each group in the reconstruction feature sequence and the number of reconstruction key parameters in each group;
obtaining the position offset of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the number of the parameters corresponding to each group in the reference characteristic sequence, the number of the reference key parameters in each group, the number of the parameters corresponding to each group in the reconstruction characteristic sequence and the number of the reconstruction key parameters in each group;
obtaining the error degree of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the position offset and the information loss;
and obtaining the effective degree of the reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram according to the error degree.
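The DTW alignment of claim 7, which pairs parameters of the reconstruction feature sequence with parameters of the reference feature sequence before grouping, can be sketched as the standard dynamic-programming pass:

```python
def dtw_path(a, b):
    """Classic DTW: returns the optimal alignment path as (i, j) index
    pairs between sequences `a` and `b`, minimising the total |a[i] - b[j]|
    along a monotone warping path."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    # Backtrack from the end to recover the parameter correspondence.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        moves = {(i - 1, j - 1): cost[i - 1][j - 1],
                 (i - 1, j): cost[i - 1][j],
                 (i, j - 1): cost[i][j - 1]}
        i, j = min(moves, key=moves.get)
    return path[::-1]

# The repeated 2 in the second sequence is absorbed by one warped match.
print(dtw_path([1, 2, 3], [1, 2, 2, 3]))
```

The resulting (i, j) pairs give the correspondence from which the groups, and then the position offsets, are counted.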
8. The method for automatically detecting the defects of the cheese based on the robot as claimed in claim 1, wherein the method for obtaining the corresponding priority of each first texture reconstruction feature map according to the effectiveness degree comprises the following steps:
according to the reconstruction characteristic sequence, obtaining pixel values corresponding to all pixel points in the first texture reconstruction characteristic diagram corresponding to all the first texture characteristic diagrams;
selecting the maximum pixel value in each first texture reconstruction characteristic image and the maximum pixel value in each first texture characteristic image;
calculating the entropy value of the original characteristic sequence corresponding to each pixel point in each first texture characteristic diagram; calculating an entropy value of a reconstruction characteristic sequence corresponding to each pixel point in each first texture reconstruction characteristic diagram;
obtaining the credibility degree of the reconstruction characteristic sequence corresponding to each pixel point in the first texture reconstruction characteristic diagram corresponding to each first texture characteristic diagram according to the pixel value corresponding to each pixel point in each first texture characteristic diagram, the entropy value of the original characteristic sequence corresponding to each pixel point, the maximum pixel value in each first texture characteristic diagram, the pixel value corresponding to each pixel point in the first texture reconstruction characteristic diagram corresponding to each first texture characteristic diagram, the entropy value of the reconstruction characteristic sequence corresponding to each pixel point and the maximum pixel value in each first texture reconstruction characteristic diagram;
and obtaining the corresponding priority of each first texture reconstruction characteristic graph according to the effective degree and the credibility.
9. The method according to claim 8, wherein the confidence level of the reconstructed feature sequence corresponding to each pixel point in the first texture reconstruction feature map corresponding to each first texture feature map is calculated according to the following formula:
wherein $k_c^b$ is the credibility of the reconstruction feature sequence corresponding to the $b$-th pixel point in the first texture reconstruction feature map corresponding to the $c$-th first texture feature map, $I_c^b$ is the pixel value corresponding to the $b$-th pixel point in the $c$-th first texture feature map, $I_{c,\max}$ is the maximum pixel value in the $c$-th first texture feature map, $\tilde{I}_{c,\max}$ is the maximum pixel value in the first texture reconstruction feature map corresponding to the $c$-th first texture feature map, $\tilde{I}_c^b$ is the pixel value corresponding to the $b$-th pixel point in that first texture reconstruction feature map, $H_c^b$ is the entropy value of the original feature sequence corresponding to the $b$-th pixel point in the $c$-th first texture feature map, and $\tilde{H}_c^b$ is the entropy value of the reconstruction feature sequence corresponding to the $b$-th pixel point in that first texture reconstruction feature map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211144530.4A CN115272301B (en) | 2022-09-20 | 2022-09-20 | Automatic cheese defect detection method based on robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115272301A true CN115272301A (en) | 2022-11-01 |
CN115272301B CN115272301B (en) | 2022-12-23 |
Family
ID=83756488
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106570889A (en) * | 2016-11-10 | 2017-04-19 | 河海大学 | Detecting method for weak target in infrared video |
CN113688844A (en) * | 2021-08-13 | 2021-11-23 | 上海商汤智能科技有限公司 | Neural network training method and device, electronic equipment and storage medium |
CN113850808A (en) * | 2021-12-01 | 2021-12-28 | 武汉泰盛包装材料有限公司 | Multilayer corrugated paper arrangement defect detection method and device based on image processing |
CN114510969A (en) * | 2022-01-20 | 2022-05-17 | 中国人民解放军战略支援部队信息工程大学 | Noise reduction method for coordinate time series |
CN114549448A (en) * | 2022-02-17 | 2022-05-27 | 中国空气动力研究与发展中心超高速空气动力研究所 | Complex multi-type defect detection and evaluation method based on infrared thermal imaging data analysis |
CN114897894A (en) * | 2022-07-11 | 2022-08-12 | 海门市芳华纺织有限公司 | Method for detecting defects of cheese chrysanthemum core |
CN115019159A (en) * | 2022-08-09 | 2022-09-06 | 济宁安泰矿山设备制造有限公司 | Method for quickly identifying pump bearing fault |
2022-09-20: application CN202211144530.4A granted as CN115272301B (Active)
Also Published As
Publication number | Publication date |
---|---|
CN115272301B (en) | 2022-12-23 |
Similar Documents
Publication | Title |
---|---|
CN115641329B (en) | Lithium battery diaphragm defect detection method and system |
CN114882044B (en) | Metal pipe surface quality detection method |
CN114279357B (en) | Die casting burr size measurement method and system based on machine vision |
CN115330628B (en) | Video frame-by-frame denoising method based on image processing |
CN113610773B (en) | Gasket hole quality detection method, system, device and storage medium |
CN115841434B (en) | Infrared image enhancement method for gas concentration analysis |
CN111179233B (en) | Self-adaptive deviation rectifying method based on laser cutting of two-dimensional parts |
CN115082462A (en) | Method and system for detecting appearance quality of fluid conveying pipe |
CN111027398A (en) | Automobile data recorder video occlusion detection method |
CN114494210A (en) | Plastic film production defect detection method and system based on image processing |
CN113610772B (en) | Method, system, device and storage medium for detecting spraying code defect at bottom of pop can bottle |
CN109540917B (en) | Method for extracting and analyzing yarn appearance characteristic parameters in multi-angle mode |
CN108489996A (en) | Insulator defect detection method, system and terminal device |
CN111415339B (en) | Image defect detection method for complex texture industrial product |
CN116485797B (en) | Artificial intelligence-based paint color difference rapid detection method |
CN112330598A (en) | Method and device for detecting stiff silk defects on chemical fiber surface and storage medium |
CN112258444A (en) | Elevator steel wire rope detection method |
CN111861990A (en) | Method, system and storage medium for detecting bad appearance of product |
CN113920122A (en) | Cable defect detection method and system based on artificial intelligence |
CN115239661A (en) | Mechanical part burr detection method and system based on image processing |
CN115272301B (en) | Automatic cheese defect detection method based on robot |
CN112529853A (en) | Method and device for detecting damage of netting of underwater aquaculture net cage |
CN115994870B (en) | Image processing method for enhancing denoising |
CN107993193A (en) | Tunnel lining image stitching method based on illumination equalization and improved SURF algorithm |
CN111127450A (en) | Bridge crack detection method and system based on image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
GR01 | Patent grant | | |