CN115809981A - Robot sorting method based on target detection - Google Patents

Robot sorting method based on target detection Download PDF

Info

Publication number
CN115809981A
CN115809981A CN202210859117.XA CN202210859117A CN115809981A CN 115809981 A CN115809981 A CN 115809981A CN 202210859117 A CN202210859117 A CN 202210859117A CN 115809981 A CN115809981 A CN 115809981A
Authority
CN
China
Prior art keywords
point
pixel point
edge
foreground
sampling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210859117.XA
Other languages
Chinese (zh)
Inventor
丁玉涛
宋欢
蔡晶晶
翟煜锦
李冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Polytechnic Institute
Original Assignee
Henan Polytechnic Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan Polytechnic Institute filed Critical Henan Polytechnic Institute
Priority to CN202210859117.XA priority Critical patent/CN115809981A/en
Publication of CN115809981A publication Critical patent/CN115809981A/en
Pending legal-status Critical Current

Links

Images

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of article sorting, in particular to a robot sorting method based on target detection, which comprises the following steps: acquiring a textile image, and graying the textile image; carrying out multi-scale sampling on the original gray level image; performing foreground segmentation on each scale sampling image, and determining a foreground region corresponding to each scale sampling image; determining a gradient direction line, a normalized gradient amplitude value and an edge characteristic point pair corresponding to each pixel point in the foreground region; performing neighborhood variance mean processing on a sampling region corresponding to each edge feature point included in the edge feature point pair, and determining a material feature value and a brightness feature value corresponding to the edge feature point; determining a sampling effect corresponding to the foreground area; carrying out textile classification on the foreground area with the best sampling effect; and controlling the robot to sort and classify a plurality of textiles to be recovered. According to the invention, the sorting of the old textiles can be realized by controlling the robot, and the sorting accuracy and efficiency of the robot are improved.

Description

Robot sorting method based on target detection
Technical Field
The invention relates to the technical field of article sorting, in particular to a robot sorting method based on target detection.
Background
With the development of the automation industry in China, various automatic devices are widely applied to various industries, wherein the sorting robot is used as one of the automatic devices, is used for automatically sorting and selecting products or other articles on a transmission assembly line, and is widely applied to industries such as warehousing and logistics, industrial production, resource recovery and the like. In resource recovery, the sorting and recovery of the old textiles are the premise for improving the added value of the old textiles. Old textiles of different materials often need different recycling methods, so that sorting of the old textiles of different materials is very important. At present, when old textiles made of different materials are sorted, firstly, images of the old textiles made of the different materials are obtained, then, edge detection is carried out on the obtained images, edges are obtained through the edge detection and determined as edges of the old textiles, the edges with similar colors or brightness serve as the same category, and finally, a robot is controlled to sort and classify the old textiles made of the different categories.
However, when the above-described manner is adopted, there are often technical problems as follows:
firstly, the old textiles made of different materials often have the same color or brightness, and because the old textiles made of different materials often need different regeneration treatment methods, the old textiles made of different materials often need to be classified into different categories, so that the accuracy of the classification result is often low only by considering whether the color or the brightness are similar, and the accuracy of the robot sorting is low;
second, if there is the fold on the old fabrics, carry out edge detection to the image area that above-mentioned old fabrics corresponds, often can regard a fold edge as the edge of an old fabrics, often can differentiate into many old fabrics with above-mentioned old fabrics, often can lead to the degree of accuracy that differentiates old fabrics low to lead to the robot can carry out a lot of letter sorting to an old fabrics, and then lead to the inefficiency of robot letter sorting.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The invention provides a robot sorting method based on target detection, aiming at solving the technical problems of low accuracy and low efficiency of robot sorting.
The invention provides a robot sorting method based on target detection, which comprises the following steps:
acquiring textile images of a plurality of textiles to be recycled on a recycled textile conveyor belt, and graying the textile images to obtain original gray images;
carrying out multi-scale sampling on the original gray level image to obtain a plurality of scale sampling images;
performing foreground segmentation on each scale sampling image in the multiple scale sampling images, and determining a foreground region corresponding to each scale sampling image to obtain multiple foreground regions;
for each foreground region in the plurality of foreground regions, determining a gradient direction line and a normalized gradient amplitude value corresponding to each pixel point in the foreground region;
for each foreground region in the plurality of foreground regions, determining an edge feature point pair corresponding to the pixel point according to a gradient direction line corresponding to each pixel point in the foreground region, wherein the edge feature point pair comprises two edge feature points;
for each foreground area in the plurality of foreground areas, performing neighborhood variance mean processing on a sampling area corresponding to each edge feature point included in an edge feature point pair corresponding to each pixel point in the foreground area, and determining a material feature value and a brightness feature value corresponding to the edge feature point;
for each foreground area in the plurality of foreground areas, determining a sampling effect corresponding to the foreground area according to a normalized gradient amplitude value corresponding to a pixel point in the foreground area and a material characteristic value and a brightness characteristic value corresponding to two edge characteristic points included in an edge characteristic point pair;
carrying out textile classification on the foreground area with the best sampling effect to obtain a plurality of textile categories, wherein the foreground area with the best sampling effect is the foreground area corresponding to the largest sampling effect in the sampling effects corresponding to the foreground areas;
and controlling a robot to sort and classify the plurality of textiles to be recycled according to the plurality of textile categories.
Further, the performing neighborhood variance-mean processing on the sampling region corresponding to each edge feature point included in the edge feature point pair corresponding to each pixel point in the foreground region to determine a material feature value and a brightness feature value corresponding to the edge feature point includes:
for each pixel point in the sampling region corresponding to the edge feature point, determining a gray level difference value between each neighborhood pixel point corresponding to the pixel point and the pixel point according to the gray level value of the pixel point and the gray level values of a preset number of neighborhood pixel points corresponding to the pixel point, so as to obtain a preset number of gray level difference values corresponding to the pixel point;
determining a gray difference variance corresponding to the pixel points according to a preset number of gray differences corresponding to each pixel point in the sampling region corresponding to the edge feature point;
normalizing the gray difference variance corresponding to each pixel point in the sampling area corresponding to the edge feature point to obtain the normalized variance corresponding to the pixel point;
determining a neighborhood mean value corresponding to the pixel points according to the gray values of a preset number of neighborhood pixel points corresponding to each pixel point in the sampling region corresponding to the edge feature point;
normalizing the neighborhood mean value corresponding to each pixel point in the sampling region corresponding to the edge feature point to obtain a normalized mean value;
and determining a material characteristic value and a brightness characteristic value corresponding to the edge characteristic point according to the number of pixel points in the sampling region corresponding to the edge characteristic point, and the normalized variance and the normalized mean value corresponding to each pixel point in the sampling region corresponding to the edge characteristic point.
Further, the formula for determining the material characteristic value and the brightness characteristic value corresponding to the edge characteristic point is as follows:
Figure BDA0003755591390000031
wherein δ is a material characteristic value corresponding to the edge characteristic point, h is a brightness characteristic value corresponding to the edge characteristic point, J is the number of pixel points in a sampling region corresponding to the edge characteristic point, δ j Is the normalized variance, h, of the jth pixel point in the sampling region corresponding to the edge feature point j Is the normalized mean value corresponding to the jth pixel point in the sampling region corresponding to the edge feature point.
Further, the determining, according to the normalized gradient amplitude value and the material characteristic value and the brightness characteristic value corresponding to the two edge characteristic points included in the edge characteristic point pair corresponding to the pixel point in the foreground region, the sampling effect corresponding to the foreground region includes:
determining a classification coefficient corresponding to the pixel point according to a material characteristic value and a brightness characteristic value corresponding to two edge characteristic points included in the edge characteristic point pair corresponding to each pixel point in the foreground region;
determining the classification degree corresponding to the pixel points according to the classification coefficient corresponding to each pixel point in the foreground region;
determining the normalized gradient amplitude corresponding to each pixel point in the foreground region as the edge point probability corresponding to the pixel point;
determining attention weight corresponding to each pixel point in the foreground region according to the edge point probability corresponding to each pixel point in the foreground region;
and determining the sampling effect corresponding to the foreground area according to the attention weight and the classification degree corresponding to each pixel point in the foreground area.
Further, the formula for determining the classification coefficient corresponding to the pixel point is as follows:
Figure BDA0003755591390000032
wherein, the first and the second end of the pipe are connected with each other,
Figure BDA0003755591390000033
is the classification coefficient, δ, of the pixel point correspondence 1 Is the material characteristic value corresponding to the first edge characteristic point in the two edge characteristic points included in the edge characteristic point pair corresponding to the pixel point, delta 2 Is the material characteristic value, h, corresponding to the second edge characteristic point in the two edge characteristic points included in the edge characteristic point pair corresponding to the pixel point 1 Is the brightness characteristic value h corresponding to the first edge characteristic point in the two edge characteristic points included in the edge characteristic point pair corresponding to the pixel point 2 Is the brightness characteristic value corresponding to the second edge characteristic point in the two edge characteristic points included in the edge characteristic point pair corresponding to the pixel point.
Further, the formula for determining the classification degree corresponding to the pixel point is as follows:
Figure BDA0003755591390000041
wherein f is the classification degree of the pixel point correspondence,
Figure BDA0003755591390000042
is the classification coefficient corresponding to the pixel point.
Further, the formula for determining the attention weight corresponding to each pixel point in the foreground region is as follows:
Figure BDA0003755591390000043
wherein, w i Is the attention weight, p, corresponding to the ith pixel point in the foreground region i And N is the number of pixel points in the foreground region.
Further, the formula for determining the sampling effect corresponding to the foreground region is as follows:
Figure BDA0003755591390000044
wherein X is the sampling effect corresponding to the foreground region, N is the number of pixel points in the foreground region, and w is i Is the attention weight, f, corresponding to the ith pixel point in the foreground region i Is the classification degree corresponding to the ith pixel point in the foreground region.
Further, the classifying the textiles in the foreground region with the best sampling effect to obtain multiple textile categories includes:
when a target pixel point exists in the foreground region with the best sampling effect, determining the target pixel point as an actual boundary point, wherein the target pixel point is a pixel point of which the corresponding classification coefficient is greater than or equal to a preset classification coefficient threshold value and the corresponding edge point probability is greater than or equal to a preset edge point probability threshold value;
all the determined actual boundary points are adjacently connected to obtain a plurality of boundaries;
performing connected domain detection on the foreground areas with the obtained multiple boundaries to obtain multiple connected domains;
classifying the plurality of connected domains according to the pixel points in each connected domain of the plurality of connected domains to obtain a plurality of connected domain categories;
determining the plurality of textile categories according to the plurality of connected domain categories.
Further, the classifying the plurality of connected domains according to the pixel point in each of the plurality of connected domains includes:
normalizing the gray value corresponding to each pixel point in the connected domain, determining the normalized gray value corresponding to the pixel point, and obtaining a normalized image corresponding to the connected domain;
determining a normalized gray level mean value corresponding to the connected domain according to the normalized gray level value corresponding to each pixel point in the connected domain;
determining a normalized two-dimensional entropy corresponding to the connected domain according to the normalized image corresponding to the connected domain;
combining the normalized gray level mean value and the normalized two-dimensional entropy corresponding to the connected domain into a classification coordinate point corresponding to the connected domain;
and when the distance between the classification coordinate points corresponding to two connected domains in the plurality of connected domains is smaller than a preset classification threshold value, dividing the two connected domains into the same connected domain category.
The invention has the following beneficial effects:
according to the robot sorting method based on target detection, the old textiles can be sorted by controlling the robot, and the accuracy and efficiency of robot sorting are improved. Firstly, textile images of a plurality of textiles to be recycled on a recycled textile conveyor belt are obtained, and the textile images are grayed to obtain an original gray image. In actual conditions, the obtained textile image can objectively reflect the condition of the textile to be recycled, so that the gray-level textile image can be conveniently analyzed subsequently, and the categories of the plurality of textiles to be recycled can be objectively and accurately determined. Therefore, the sorting of the plurality of textiles to be recycled can be conveniently carried out subsequently. And secondly, carrying out multi-scale sampling on the original gray level image to obtain a plurality of scale sampling images. Because the sampling scale is larger, the edge information on the scale sampling image is more obvious and is not easily influenced by interference information such as stains. In contrast, detailed material information that can distinguish whether an edge is a wrinkled edge or a textile edge to be recycled tends to be lost during the sampling process. The edge information may characterize edges on the scaled sampled image. The smaller the sampling scale, the more abundant the detail information on the scale-sampled image tends to be. In contrast, some useless interference information (such as fine stains on the textiles to be recycled) often influences the judgment of the materials of the textiles to be recycled. The detailed information may include the material and brightness of the textile to be recycled. Therefore, an optimal scale sampling image can be selected, and the accurate position and the type of the textile to be recycled can be obtained through subsequent analysis of the optimal scale sampling image. Secondly, because the optimal scale sampling image is often obtained by screening among the plurality of scale sampling images, the number of the scale sampling images in the plurality of scale sampling images is not too small. And then, performing foreground segmentation on each scale sampling image in the plurality of scale sampling images, and determining a foreground region corresponding to each scale sampling image to obtain a plurality of foreground regions. Because, the scale sampling image often includes: a conveyor belt and an area on the conveyor belt comprised of a plurality of pieces of a textile to be recycled. The background of the dimensionally sampled image is often the photographed conveyor belt. The foreground of the dimension sampling image is often a plurality of textiles to be recycled which are shot. Therefore, the foreground segmentation is carried out on the scale sampling image, a foreground area which only comprises a plurality of textiles to be recovered can be obtained, and the subsequent analysis of the textiles to be recovered can be facilitated. Then, for each foreground region in the plurality of foreground regions, a gradient direction line and a normalized gradient amplitude value corresponding to each pixel point in the foreground region are determined. And then, for each foreground area in the plurality of foreground areas, determining an edge feature point pair corresponding to the pixel point according to the gradient direction line corresponding to each pixel point in the foreground area, wherein the edge feature point pair comprises two edge feature points. Because the edge feature point pair corresponding to the pixel point is determined according to the gradient direction line corresponding to the pixel point, whether the pixel point is the edge point of the textile to be recycled can be judged subsequently by analyzing the edge feature point pair corresponding to the pixel point, and the accuracy of judging which area is the area where the textile to be recycled is located can be improved. And continuously, for each foreground area in the plurality of foreground areas, performing neighborhood variance mean processing on the sampling area corresponding to each edge feature point included in the edge feature point corresponding to each pixel point in the foreground area, and determining a material feature value and a brightness feature value corresponding to the edge feature point. And then, for each foreground area in the plurality of foreground areas, determining a sampling effect corresponding to the foreground area according to the normalized gradient amplitude value corresponding to the pixel point in the foreground area and the material characteristic value and the brightness characteristic value corresponding to the two edge characteristic points included in the edge characteristic point pair. The material characteristic value can represent information in the aspect of the material of the textile to be recycled, which is reflected by pixel points in sampling areas corresponding to the edge characteristic points under different sampling scales. The brightness characteristic value can represent information in brightness of the textile to be recycled reflected by pixel points in sampling areas corresponding to the edge characteristic points under different sampling scales. Therefore, the material characteristic value and the brightness characteristic value are comprehensively considered, and the accuracy of determining the sampling effect is improved. And then, carrying out textile classification on the foreground area with the best sampling effect to obtain a plurality of textile categories, wherein the foreground area with the best sampling effect is the foreground area corresponding to the maximum sampling effect in the sampling effects corresponding to the foreground areas. Therefore, the accuracy of classifying the above-mentioned plurality of textiles to be recycled is improved. And finally, controlling the robot to sort and classify the plurality of textiles to be recycled according to the categories of the textiles. In actual conditions, sometimes adopt artifical mode, treat to retrieve many fabrics and sort, means such as the letter sorting personnel rely on hand touch, look to a glance promptly, treat to retrieve many fabrics and sort. Although the method does not need to acquire the textile image, the judgment of the sorting result is often greatly influenced by human subjective factors, the sorting accuracy and efficiency are often low, and the sorting result is often unstable. In addition, since the textiles to be recycled are often worn out and damaged, the textiles to be recycled may be biochemically contaminated, thereby posing a threat to the health of sorting personnel. Therefore, it is very important to sort the textiles to be recycled by using the sorting robot. Therefore, the invention can realize the sorting of the old textiles by controlling the robot, and improves the accuracy and efficiency of the robot sorting.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions and advantages of the prior art, the drawings used in the embodiments or the description of the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow diagram of some embodiments of a robotic sorting method based on object detection in accordance with the present invention;
FIG. 2 is a schematic view of an apparatus used to acquire an image of a textile according to the present invention;
FIG. 3 is a schematic view of a gradient direction line according to the present invention;
FIG. 4 is a schematic diagram of pairs of edge feature points according to the present invention;
FIG. 5 is a schematic view of a sampling region according to the present invention;
FIG. 6 is a schematic diagram of a plurality of boundaries in accordance with the present invention.
Detailed Description
To further explain the technical means and effects of the present invention adopted to achieve the predetermined objects, the following detailed description of the embodiments, structures, features and effects of the technical solutions according to the present invention will be given with reference to the accompanying drawings and preferred embodiments. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The invention provides a robot sorting method based on target detection, which comprises the following steps:
acquiring textile images of a plurality of textiles to be recycled on a recycled textile conveyor belt, and graying the textile images to obtain original gray images;
carrying out multi-scale sampling on the original gray level image to obtain a plurality of scale sampling images;
performing foreground segmentation on each scale sampling image in the multiple scale sampling images, and determining a foreground region corresponding to each scale sampling image to obtain multiple foreground regions;
for each foreground area in the plurality of foreground areas, determining a gradient direction line and a normalized gradient amplitude value corresponding to each pixel point in the foreground area;
for each foreground area in the plurality of foreground areas, determining an edge characteristic point pair corresponding to each pixel point according to a gradient direction line corresponding to each pixel point in the foreground area;
for each foreground area in the plurality of foreground areas, performing neighborhood variance mean processing on an edge feature point corresponding to each pixel point in the foreground area to a sampling area corresponding to each edge feature point, and determining a material feature value and a brightness feature value corresponding to the edge feature point;
for each foreground area in the plurality of foreground areas, determining a sampling effect corresponding to the foreground area according to a normalized gradient amplitude value corresponding to a pixel point in the foreground area and a material characteristic value and a brightness characteristic value corresponding to two edge characteristic points included in an edge characteristic point pair;
carrying out textile classification on the foreground area with the best sampling effect to obtain a plurality of textile categories;
and according to the classes of the textiles, controlling the robot to sort and classify a plurality of textiles to be recovered.
The following steps are detailed:
referring to fig. 1, a flow diagram of some embodiments of a robotic sorting method based on object detection according to the present invention is shown. The robot sorting method based on target detection comprises the following steps:
s1, acquiring textile images of a plurality of textiles to be recycled on a recycled textile conveyor belt, and graying the textile images to obtain original gray images.
In some embodiments, textile images of a plurality of textiles to be recycled on a recycling textile conveyor belt may be acquired and grayed to obtain an original grayscale image.
Wherein, the recovered textile conveyor belt can be a conveyor belt for conveying the textiles to be recovered. The textiles to be recycled in the above-mentioned plurality of textiles to be recycled may be, but are not limited to: pieces of cloth, gloves or clothing to be recycled. The textile image may be an image of a plurality of textiles to be recycled. The original gray image may be a grayed textile image.
As an example, as shown in fig. 2, 201 may be a robot for sorting pieces of textiles to be recycled. 202 may be a camera that takes images of the textile. 203 may be a stand to which camera 202 is fixed. 204 and 205 may be the edges of a recycled textile conveyor belt. The direction of travel of the recycled textile conveyor belt may be the direction indicated by the arrow. The quadrangle star, the heptagon star and the rhombus on the conveyor belt can be a sweater, a shirt and a suit respectively.
And S2, carrying out multi-scale sampling on the original gray image to obtain a plurality of scale sampling images.
In some embodiments, the original grayscale image may be subjected to multi-scale sampling, so as to obtain a plurality of scale-sampled images.
The scale sampling images in the plurality of scale sampling images can be images at different sampling scales. For example, the number of the above-described plurality of scale-sampled images may be 10.
As an example, the original grayscale image may be subjected to pyramid sampling to obtain a plurality of scale sampled images. The number of the above-mentioned plurality of scale-sampled images may be 3. The size of the original grayscale image may be m × n. The sampling scale corresponding to the first one of the multiple scaled sampled images may be 2 0 . The size of the first scale sample image may be m × n. The second one of the plurality of scaled sampled images may correspond to a sampling scale of 2 1 . The size of the second scale sampled image may be
Figure BDA0003755591390000081
The sampling scale corresponding to the third one of the multiple scaled sampled images may be 2 2 . The size of the third scale sampling image may be
Figure BDA0003755591390000082
And S3, performing foreground segmentation on each scale sampling image in the multiple scale sampling images, and determining a foreground region corresponding to each scale sampling image to obtain multiple foreground regions.
In some embodiments, foreground segmentation may be performed on each of the multiple scaled sampled images, and a foreground region corresponding to each scaled sampled image is determined, so as to obtain multiple foreground regions.
The foreground areas in the plurality of foreground areas can be areas where a plurality of textiles to be recycled are shot on the scale sampling image.
This step can be implemented by the prior art, and is not described herein again.
And S4, for each foreground area in the plurality of foreground areas, determining a gradient direction line and a normalized gradient amplitude value corresponding to each pixel point in the foreground area.
In some embodiments, for each foreground region of the plurality of foreground regions, a gradient direction line and a normalized gradient magnitude value corresponding to each pixel point in the foreground region may be determined.
The gradient direction line corresponding to the pixel point may be a line passing through the pixel point and along the gradient direction corresponding to the pixel point. The normalized gradient amplitude corresponding to the pixel point may be a gradient obtained by normalizing the gradient magnitude of the pixel point.
As an example, for each foreground region in the plurality of foreground regions, a gradient direction line and a normalized gradient magnitude value corresponding to each pixel point in the foreground region may be determined by a sobel operator. For example, as shown in fig. 3, the gradient direction line corresponding to the pixel point 301 may be 302. The gradient direction corresponding to the pixel point 301 may be the direction indicated by the arrow of the gradient direction line 302.
And S5, determining edge characteristic point pairs corresponding to the pixel points according to the gradient direction lines corresponding to the pixel points in the foreground areas for each foreground area in the plurality of foreground areas.
In some embodiments, for each foreground region in the plurality of foreground regions, an edge feature point pair corresponding to the pixel point may be determined according to a gradient direction line corresponding to each pixel point in the foreground region.
Wherein the edge feature point pair may include two edge feature points. The edge feature point included in the edge feature point pair corresponding to the pixel point may be a pixel point having the largest gradient direction line passing through the pixel point in the neighborhood of the pixel point.
As an example, the edge feature point included in the edge feature point pair corresponding to the pixel point may be a pixel point having the largest gradient direction line passing through the pixel point in eight neighborhoods of the pixel point. As shown in fig. 4, a solid black square may be the pixel point. The 8 white filled squares can be the pixel points in the eight neighborhoods of the pixel point respectively. 402 may be a gradient direction line corresponding to the pixel point. The pixel point 401 and the pixel point 403 may be two edge feature points included in an edge feature point pair corresponding to the pixel point.
And S6, for each foreground area in the plurality of foreground areas, performing neighborhood variance mean processing on the sampling area corresponding to each edge feature point included in the edge feature point corresponding to each pixel point in the foreground area, and determining a material feature value and a brightness feature value corresponding to the edge feature point.
In some embodiments, for each foreground region of the plurality of foreground regions, neighborhood variance-mean processing may be performed on a sampling region corresponding to each edge feature point included in an edge feature point pair corresponding to each pixel point in the foreground region, so as to determine a material feature value and a brightness feature value corresponding to the edge feature point.
The sampling region corresponding to the edge feature point may be a real object characterized by the edge feature point, and is a region on the original gray level image. For example, as shown in fig. 5, a sampling region corresponding to one edge feature point 504 in the scaled sampling image 502 may be a sampling region 503 in the original grayscale image 501. There are multiple pixel points in the sampling region 503.
As an example, this step may comprise the steps of:
the method comprises the following steps that firstly, for each pixel point in a sampling region corresponding to the edge feature point, the gray difference value between each neighborhood pixel point corresponding to the pixel point and the pixel point is determined according to the gray value of the pixel point and the gray values of a preset number of neighborhood pixel points corresponding to the pixel point, and the preset number of gray difference values corresponding to the pixel point are obtained.
The neighborhood pixel corresponding to the pixel can be a pixel in the neighborhood of the pixel. The preset number may be a preset number. For example, the preset number may be 8. The gray difference between the neighborhood pixel point corresponding to the pixel point and the pixel point may be the difference between the gray value of the neighborhood pixel point and the gray value of the pixel point.
For example, when the preset number is 8, the 8 neighborhood pixels corresponding to the pixel point may be 8 pixels in the eight neighborhoods of the pixel point.
And secondly, determining the gray difference variance corresponding to the pixel points according to a preset number of gray differences corresponding to each pixel point in the sampling area corresponding to the edge feature points.
The gray scale difference variance corresponding to a pixel point may be the variance of a preset number of gray scale differences corresponding to the pixel point.
And thirdly, normalizing the gray difference variance corresponding to each pixel point in the sampling area corresponding to the edge feature point to obtain the normalized variance corresponding to the pixel point.
And fourthly, determining a neighborhood mean value corresponding to the pixel points according to the gray values of a preset number of neighborhood pixel points corresponding to each pixel point in the sampling region corresponding to the edge feature points.
The neighborhood mean value corresponding to the pixel point may be a mean value of gray values of a preset number of neighborhood pixel points corresponding to the pixel point.
And fifthly, normalizing the neighborhood mean value corresponding to each pixel point in the sampling region corresponding to the edge feature point to obtain a normalized mean value.
And sixthly, determining a material characteristic value and a brightness characteristic value corresponding to the edge characteristic point according to the number of pixel points in the sampling area corresponding to the edge characteristic point, and the normalized variance and the normalized mean value corresponding to each pixel point in the sampling area corresponding to the edge characteristic point.
For example, the formula for determining the correspondence between the material characteristic value and the brightness characteristic value corresponding to the edge characteristic point may be:
Figure BDA0003755591390000101
whereinAnd δ is a material characteristic value corresponding to the edge characteristic point. h is the luminance feature value corresponding to the edge feature point. J is the number of pixel points in the sampling region corresponding to the edge feature point. Delta j Is the normalized variance of the corresponding jth pixel point in the sampling region corresponding to the above edge feature point. h is j Is the normalized mean value corresponding to the jth pixel point in the sampling region corresponding to the edge feature point.
The material characteristic value can represent information in the aspect of the material of the textile to be recycled, which is reflected by pixel points in sampling areas corresponding to the edge characteristic points under different sampling scales. The larger the material characteristic value corresponding to the edge characteristic point is, the coarser the texture of the textile area to be recovered corresponding to the edge characteristic point is. The smaller the material characteristic value corresponding to the edge characteristic point is, the finer the texture of the textile area to be recovered corresponding to the edge characteristic point is. For example, when the textile to be recovered is a sweater, the texture of the sweater is often coarse, so the variance of the difference between the gray value of one edge feature point in the area where the sweater is shot and the gray value of a pixel point in the neighborhood of the edge feature point is often large, and therefore, the material feature value corresponding to the edge feature point is often large. The brightness characteristic value can represent information in brightness of the textile to be recycled reflected by pixel points in sampling areas corresponding to the edge characteristic points under different sampling scales. The larger the brightness characteristic value corresponding to the edge characteristic point is, the larger the brightness of the to-be-recycled textile area corresponding to the edge characteristic point is. The smaller the brightness characteristic value corresponding to the edge characteristic point is, the smaller the brightness of the textile area to be recycled corresponding to the edge characteristic point is.
And S7, determining the sampling effect corresponding to the foreground area according to the normalized gradient amplitude value corresponding to the pixel point in the foreground area and the material characteristic value and the brightness characteristic value corresponding to the two edge characteristic points included in the edge characteristic point pair for each foreground area in the plurality of foreground areas.
In some embodiments, for each foreground region in the foreground regions, a sampling effect corresponding to the foreground region may be determined according to the normalized gradient amplitude corresponding to the pixel point in the foreground region and the material characteristic value and the brightness characteristic value corresponding to two edge characteristic points included in the edge characteristic point pair.
As an example, this step may include the steps of:
the method comprises the following steps of firstly, determining classification coefficients corresponding to pixel points according to material characteristic values and brightness characteristic values corresponding to two edge characteristic points included in edge characteristic point pairs corresponding to each pixel point in the foreground area.
For example, the formula for determining the correspondence between the classification coefficients corresponding to the pixel points may be:
Figure BDA0003755591390000111
wherein the content of the first and second substances,
Figure BDA0003755591390000112
is the classification coefficient corresponding to the pixel point. Delta. For the preparation of a coating 1 The material characteristic value corresponding to the first edge characteristic point in the two edge characteristic points included in the edge characteristic point pair corresponding to the pixel point. Delta. For the preparation of a coating 2 The material characteristic value corresponding to the second edge characteristic point in the two edge characteristic points included in the edge characteristic point pair corresponding to the pixel point is adopted. h is 1 The luminance characteristic value corresponding to the first edge characteristic point of the two edge characteristic points included in the edge characteristic point pair corresponding to the pixel point is obtained. h is 2 The luminance characteristic value corresponding to the second edge characteristic point of the two edge characteristic points included in the edge characteristic point pair corresponding to the pixel point is obtained.
The more the classification coefficient corresponding to the pixel point approaches to 1, the more the pixel point is often the boundary point of the textile to be recycled. The more the classification coefficient corresponding to the pixel point approaches to 0, the more the pixel point tends to be a non-boundary point (e.g., a wrinkle point) of the textile to be recycled.
Because the edge feature point pair corresponding to the pixel point includes two edge feature points located on two sides of the gradient direction of the pixel point, when the difference between the two edge feature points is larger, it is often indicated that the two edge feature points are more likely to be located on different textiles to be recycled, the possibility that the pixel point is a boundary point of the textiles to be recycled is often larger, and otherwise, it is often indicated that the pixel point is more likely to be a non-boundary point of the textiles to be recycled. Therefore, the larger the difference between the material characteristic values or the brightness characteristic values corresponding to the two edge characteristic points included in the edge characteristic point pair corresponding to the pixel point is, the more likely the pixel point is to be the boundary point of the textile to be recycled, and conversely, the more likely the pixel point is to be the non-boundary point of the textile to be recycled.
And secondly, determining the classification degree corresponding to each pixel point according to the classification coefficient corresponding to each pixel point in the foreground area.
The classification degree corresponding to the pixel point can be the accuracy degree of the judgment result of judging whether the pixel point is the boundary point of the textile to be recycled.
For example, the formula for determining the classification degree corresponding to the pixel point may be:
Figure BDA0003755591390000121
where f is the classification degree corresponding to the pixel point.
Figure BDA0003755591390000122
Is the classification coefficient corresponding to the pixel point.
The more the classification coefficient corresponding to the pixel point approaches to 0 or 1, the more the pixel point can be judged to be the boundary point of the textile to be recovered. Therefore, the more the classification coefficient corresponding to a pixel point approaches 0 or 1, the higher the degree of classification corresponding to the pixel point tends to be.
And thirdly, determining the normalized gradient amplitude corresponding to each pixel point in the foreground area as the edge point probability corresponding to the pixel point.
The edge point probability corresponding to the pixel point may be the probability that the pixel point is the boundary point of the textile to be recovered.
In an actual situation, the larger the gradient corresponding to a pixel point is, the more likely the pixel point is to be an edge point. Therefore, the larger the normalized gradient amplitude corresponding to the pixel point is, the more likely the pixel point is to be a boundary point of the textile to be recycled. Since the value range of the normalized gradient amplitude value can be [0,1], the normalized gradient amplitude value corresponding to the pixel point can be used as the edge point probability corresponding to the pixel point.
And fourthly, determining attention weight corresponding to each pixel point in the foreground area according to the edge point probability corresponding to each pixel point in the foreground area.
For example, the formula for determining the attention weight corresponding to each pixel point in the foreground region may be:
Figure BDA0003755591390000123
wherein, w i Is the attention weight corresponding to the ith pixel point in the foreground region. p is a radical of formula i Is the probability of the edge point corresponding to the ith pixel point in the foreground region. N is the number of pixels in the foreground region.
The larger the attention weight corresponding to the pixel point is, the more likely the pixel point is to be a boundary point of the textile to be recycled.
And fifthly, determining the sampling effect corresponding to the foreground area according to the attention weight and the classification degree corresponding to each pixel point in the foreground area.
For example, the formula for determining the sampling effect corresponding to the foreground region may be:
Figure BDA0003755591390000131
wherein, X is the sampling effect corresponding to the foreground region. N is the number of pixels in the foreground region. w is a i The attention right corresponding to the ith pixel point in the foreground regionAnd (4) heavy. f. of i Is the classification degree corresponding to the ith pixel point in the foreground region.
The larger the attention weight or the classification degree corresponding to each pixel point in the foreground area is, the better the sampling effect corresponding to the foreground area is. The larger the sampling effect corresponding to the foreground area is, the better the sampling effect corresponding to the foreground area is.
And S8, carrying out textile classification on the foreground area with the best sampling effect to obtain a plurality of textile categories.
In some embodiments, the foreground region with the best sampling effect may be subjected to textile classification to obtain a plurality of textile categories.
The foreground region with the best sampling effect may be a foreground region corresponding to the largest sampling effect among the sampling effects corresponding to each foreground region in the plurality of foreground regions. The textile categories of the above-mentioned plurality of textile categories may be of different kinds. For example, the number of textile categories in the plurality of textile categories may be 3. The plurality of textile categories may be Mao Yilei, shirts, and suits, respectively.
As an example, this step may include the steps of:
step one, when a target pixel point exists in the foreground area with the best sampling effect, the target pixel point is determined as an actual boundary point.
The target pixel point may be a pixel point in the foreground region with the best sampling effect, where the corresponding classification coefficient is greater than or equal to a preset classification coefficient threshold, and the corresponding edge point probability is greater than or equal to a preset edge point probability threshold. The classification coefficient threshold may be a minimum classification coefficient when the pixel is an edge pixel. The edge point probability threshold may be a minimum edge point probability when the pixel is an edge pixel. For example, the classification coefficient threshold and the edge point probability threshold may each be 0.7.
And secondly, performing adjacent connection on all the determined actual boundary points to obtain a plurality of boundaries.
For example, as shown in fig. 6, the actual boundary points 602, 603, 604, 605, and 606 in the foreground region 601 with the best sampling effect may be connected adjacently to obtain 2 boundaries. Wherein, a boundary is composed of an actual boundary point 602, an actual boundary point 603 and an actual boundary point 604. Another boundary is made up of actual boundary point 605 and actual boundary point 606.
And thirdly, detecting connected domains of the foreground regions with the obtained multiple boundaries to obtain multiple connected domains.
Among them, connected domain detection is the prior art.
And fourthly, classifying the plurality of connected domains according to the pixel points in each connected domain of the plurality of connected domains to obtain a plurality of connected domain categories.
Among the plurality of connected domain categories, the connected domain categories may be different categories.
For example, this step may include the following substeps:
the first substep is to normalize the gray value corresponding to each pixel point in the connected domain, determine the normalized gray value corresponding to the pixel point, and obtain the normalized image corresponding to the connected domain.
The normalized gray value corresponding to the pixel point may be a gray value obtained by normalizing the gray value of the pixel point. The normalized image corresponding to the connected domain can represent the connected domain obtained after the gray value of each pixel point in the connected domain is normalized.
And a second substep of determining a normalized gray level mean value corresponding to the connected domain according to the normalized gray level value corresponding to each pixel point in the connected domain.
The normalized gray level mean value corresponding to the connected domain may be a mean value of normalized gray levels corresponding to the pixels in the connected domain.
And a third substep of determining the normalized two-dimensional entropy corresponding to the connected domain according to the normalized image corresponding to the connected domain.
The normalized two-dimensional entropy corresponding to the connected domain may be a two-dimensional entropy of a normalized image corresponding to the connected domain.
And a fourth substep of combining the normalized grayscale mean value and the normalized two-dimensional entropy corresponding to the connected domain into a classification coordinate point corresponding to the connected domain.
The normalized grayscale mean corresponding to the connected component may be an abscissa of the classification coordinate point corresponding to the connected component. The normalized two-dimensional entropy corresponding to a connected component can be the ordinate of the classification coordinate point corresponding to the connected component.
And a fifth substep of classifying two connected domains of the plurality of connected domains into the same connected domain category when the distance between the classification coordinate points corresponding to the two connected domains is smaller than a preset classification threshold.
The classification threshold may be a minimum distance between classification coordinate points corresponding to two connected domains when the two connected domains are not in the same connected domain category. For example, the classification threshold may be 0.1.
And fifthly, determining the multiple textile categories according to the multiple connected domain categories.
In practical situations, a connected domain often corresponds to a textile to be recycled. Often, one connected domain category corresponds to one textile category.
And S9, controlling the robot to sort and classify a plurality of textiles to be recycled according to the categories of the textiles.
In some embodiments, the robot may be controlled to sort and classify the plurality of textiles to be recycled according to the plurality of textile categories.
Wherein, the robot can be a sorting robot.
As an example, the robot may be controlled to grab the textiles to be recycled in different textile categories, and the robot may be controlled to place the grabbed textiles to be recycled in different textile categories into a preset position, so as to sort and classify the plurality of textiles to be recycled.
According to the robot sorting method based on target detection, the old textiles can be sorted by controlling the robot, and the accuracy and efficiency of robot sorting are improved. Firstly, textile images of a plurality of textiles to be recycled on a recycled textile conveyor belt are obtained, and the textile images are grayed to obtain an original gray image. In actual conditions, the obtained textile image can objectively reflect the condition of the textile to be recycled, so that the gray-level textile image can be conveniently analyzed subsequently, and the categories of the plurality of textiles to be recycled can be objectively and accurately determined. Therefore, the sorting of the plurality of textiles to be recycled can be conveniently carried out subsequently. And secondly, carrying out multi-scale sampling on the original gray level image to obtain a plurality of scale sampling images. Because the larger the sampling scale is, the more obvious the edge information on the scale sampling image is, and the edge information is not easily influenced by the interference information such as dirt. In contrast, detailed material information that can distinguish whether an edge is a wrinkled edge or a textile edge to be recycled is often lost during the sampling process. The edge information may characterize edges on the scaled sampled image. The smaller the sampling scale, the more sufficient the detail information on the scale-sampled image. In contrast, some useless interference information (such as fine stains on the textiles to be recycled) often influences the judgment of the materials of the textiles to be recycled. The detailed information may include the material and brightness of the textile to be recycled. Therefore, an optimal scale sampling image can be selected, and the accurate position and the type of the textile to be recycled can be obtained through subsequent analysis of the optimal scale sampling image. Secondly, since the optimal scale sampling image is often obtained by screening among the plurality of scale sampling images, the number of scale sampling images in the plurality of scale sampling images is not likely to be too small. And then, performing foreground segmentation on each scale sampling image in the plurality of scale sampling images, and determining a foreground region corresponding to each scale sampling image to obtain a plurality of foreground regions. Because, the scale sampling image often includes: a conveyor belt and an area on the conveyor belt comprised of a plurality of pieces of a textile to be recycled. The background of the dimensionally sampled image is often the photographed conveyor belt. The foreground of the dimension sampling image is often a plurality of textiles to be recycled which are shot. Therefore, the foreground segmentation is carried out on the scale sampling image to obtain a foreground region only comprising a plurality of textiles to be recovered, and the subsequent analysis on the plurality of textiles to be recovered can be facilitated. Then, for each foreground region in the plurality of foreground regions, a gradient direction line and a normalized gradient amplitude value corresponding to each pixel point in the foreground region are determined. And then, for each foreground area in the plurality of foreground areas, determining an edge feature point pair corresponding to the pixel point according to the gradient direction line corresponding to each pixel point in the foreground area, wherein the edge feature point pair comprises two edge feature points. Because the edge feature point pair corresponding to the pixel point is determined according to the gradient direction line corresponding to the pixel point, whether the pixel point is the edge point of the textile to be recycled can be judged subsequently by analyzing the edge feature point pair corresponding to the pixel point, and the accuracy of judging which area is the area where the textile to be recycled is located can be improved. And continuously, for each foreground area in the plurality of foreground areas, performing neighborhood variance mean processing on the sampling area corresponding to each edge feature point included in the edge feature point corresponding to each pixel point in the foreground area, and determining a material feature value and a brightness feature value corresponding to the edge feature point. And then, for each foreground area in the plurality of foreground areas, determining a sampling effect corresponding to the foreground area according to the normalized gradient amplitude value corresponding to the pixel point in the foreground area and the material characteristic value and the brightness characteristic value corresponding to the two edge characteristic points included in the edge characteristic point pair. The material characteristic value can represent information in the aspect of the material of the textile to be recycled, which is reflected by pixel points in sampling areas corresponding to the edge characteristic points under different sampling scales. The brightness characteristic value can represent information in brightness of the textile to be recycled reflected by pixel points in sampling areas corresponding to the edge characteristic points under different sampling scales. Therefore, the material characteristic value and the brightness characteristic value are comprehensively considered, and the accuracy of determining the sampling effect is improved. And then, carrying out textile classification on the foreground area with the best sampling effect to obtain a plurality of textile categories, wherein the foreground area with the best sampling effect is the foreground area corresponding to the maximum sampling effect in the sampling effects corresponding to the foreground areas. Therefore, the accuracy of classifying the above-mentioned plurality of textiles to be recycled is improved. And finally, controlling the robot to sort and classify the plurality of textiles to be recycled according to the categories of the textiles. In actual conditions, sometimes adopt artifical mode, treat to retrieve many fabrics and sort, means such as the letter sorting personnel rely on hand touch, look to a glance promptly, treat to retrieve many fabrics and sort. Although the method does not need to acquire the textile image, the judgment of the sorting result is often greatly influenced by artificial subjective factors, so that the sorting accuracy and efficiency are often low, and the sorting result is often unstable. In addition, since the textiles to be recycled are often worn out and damaged, the textiles to be recycled may be biochemically contaminated, thereby posing a threat to the health of sorting personnel. Therefore, it is very important to sort the textiles to be recycled by using the sorting robot. Therefore, the invention can realize the sorting of the old textiles by controlling the robot, and improves the accuracy and efficiency of the robot sorting.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A robot sorting method based on target detection is characterized by comprising the following steps:
acquiring textile images of a plurality of textiles to be recycled on a recycled textile conveyor belt, and graying the textile images to obtain an original gray image;
carrying out multi-scale sampling on the original gray level image to obtain a plurality of scale sampling images;
performing foreground segmentation on each scale sampling image in the plurality of scale sampling images, and determining a foreground region corresponding to each scale sampling image to obtain a plurality of foreground regions;
for each foreground region in the plurality of foreground regions, determining a gradient direction line and a normalized gradient amplitude value corresponding to each pixel point in the foreground region;
for each foreground region in the plurality of foreground regions, determining an edge feature point pair corresponding to the pixel point according to a gradient direction line corresponding to each pixel point in the foreground region, wherein the edge feature point pair comprises two edge feature points;
for each foreground area in the plurality of foreground areas, performing neighborhood variance mean processing on an edge feature point corresponding to each pixel point in the foreground area and a sampling area corresponding to each edge feature point included in the foreground area, and determining a material feature value and a brightness feature value corresponding to the edge feature point;
for each foreground region in the plurality of foreground regions, determining a sampling effect corresponding to the foreground region according to a normalized gradient amplitude value corresponding to a pixel point in the foreground region and a material characteristic value and a brightness characteristic value corresponding to two edge characteristic points included in an edge characteristic point pair;
carrying out textile classification on the foreground area with the best sampling effect to obtain a plurality of textile categories, wherein the foreground area with the best sampling effect is the foreground area corresponding to the maximum sampling effect in the sampling effects corresponding to all foreground areas in the plurality of foreground areas;
and controlling the robot to sort and classify the plurality of textiles to be recycled according to the plurality of textile categories.
2. The method according to claim 1, wherein the performing neighborhood variance mean processing on the sampling region corresponding to each edge feature point included in the edge feature point pair corresponding to each pixel point in the foreground region to determine a material feature value and a brightness feature value corresponding to the edge feature point comprises:
for each pixel point in the sampling region corresponding to the edge feature point, determining a gray level difference value between each neighborhood pixel point corresponding to the pixel point and the pixel point according to the gray level value of the pixel point and the gray level values of a preset number of neighborhood pixel points corresponding to the pixel point, so as to obtain a preset number of gray level difference values corresponding to the pixel point;
determining a gray difference variance corresponding to the pixel points according to a preset number of gray differences corresponding to each pixel point in the sampling region corresponding to the edge feature point;
normalizing the gray difference variance corresponding to each pixel point in the sampling area corresponding to the edge feature point to obtain the normalized variance corresponding to the pixel point;
determining a neighborhood mean value corresponding to the pixel points according to the gray values of a preset number of neighborhood pixel points corresponding to each pixel point in the sampling region corresponding to the edge feature point;
normalizing the neighborhood mean value corresponding to each pixel point in the sampling region corresponding to the edge feature point to obtain a normalized mean value;
and determining a material characteristic value and a brightness characteristic value corresponding to the edge characteristic point according to the number of pixel points in the sampling region corresponding to the edge characteristic point, and the normalized variance and the normalized mean value corresponding to each pixel point in the sampling region corresponding to the edge characteristic point.
3. The method according to claim 2, wherein the formula for determining the material characteristic value and the brightness characteristic value corresponding to the edge characteristic point is as follows:
Figure FDA0003755591380000021
wherein, δ is the material characteristic value corresponding to the edge characteristic point, h is the edge characteristicCorresponding brightness characteristic value, J is the number of pixel points in the sampling region corresponding to the edge characteristic point, delta j Is the normalized variance, h, of the jth pixel point in the sampling region corresponding to the edge feature point j Is the normalized mean value corresponding to the jth pixel point in the sampling region corresponding to the edge feature point.
4. The method according to claim 1, wherein the determining the sampling effect corresponding to the foreground region according to the normalized gradient amplitude value corresponding to the pixel point in the foreground region and the material characteristic value and the brightness characteristic value corresponding to the two edge characteristic points included in the edge characteristic point pair comprises:
determining a classification coefficient corresponding to the pixel point according to a material characteristic value and a brightness characteristic value corresponding to two edge characteristic points included in the edge characteristic point pair corresponding to each pixel point in the foreground region;
determining the classification degree corresponding to the pixel points according to the classification coefficient corresponding to each pixel point in the foreground region;
determining the normalized gradient amplitude corresponding to each pixel point in the foreground region as the edge point probability corresponding to the pixel point;
determining attention weight corresponding to each pixel point in the foreground region according to the edge point probability corresponding to each pixel point in the foreground region;
and determining the sampling effect corresponding to the foreground area according to the attention weight and the classification degree corresponding to each pixel point in the foreground area.
5. The method according to claim 4, wherein the formula for determining the classification coefficient corresponding to the pixel point is as follows:
Figure FDA0003755591380000031
wherein the content of the first and second substances,
Figure FDA0003755591380000032
is the classification coefficient, δ, of the pixel point correspondence 1 Is the material characteristic value corresponding to the first edge characteristic point in the two edge characteristic points included in the edge characteristic point pair corresponding to the pixel point, delta 2 Is the material characteristic value, h, corresponding to the second edge characteristic point in the two edge characteristic points included in the edge characteristic point pair corresponding to the pixel point 1 Is the brightness characteristic value h corresponding to the first edge characteristic point in the two edge characteristic points included in the edge characteristic point pair corresponding to the pixel point 2 Is the brightness characteristic value corresponding to the second edge characteristic point in the two edge characteristic points included in the edge characteristic point pair corresponding to the pixel point.
6. The method according to claim 4, wherein the formula for determining the classification degree corresponding to the pixel point is as follows:
Figure FDA0003755591380000033
wherein f is the classification degree of the pixel point correspondence,
Figure FDA0003755591380000034
is the classification coefficient corresponding to the pixel point.
7. The method according to claim 4, wherein the formula for determining the attention weight corresponding to each pixel point in the foreground region is:
Figure FDA0003755591380000035
wherein, w i Is the attention weight, p, corresponding to the ith pixel point in the foreground region i Is the first in the foreground regionAnd the probability of the edge points corresponding to the i pixel points, wherein N is the number of the pixel points in the foreground region.
8. The method according to claim 4, wherein the formula for determining the sampling effect corresponding to the foreground region is:
Figure FDA0003755591380000036
wherein X is the sampling effect corresponding to the foreground region, N is the number of pixel points in the foreground region, and w is i Is the attention weight, f, corresponding to the ith pixel point in the foreground region i Is the classification degree corresponding to the ith pixel point in the foreground region.
9. The method of claim 4, wherein the classifying the textiles for the foreground region with the best sampling effect to obtain a plurality of textile categories comprises:
when a target pixel point exists in the foreground region with the best sampling effect, determining the target pixel point as an actual boundary point, wherein the target pixel point is a pixel point of which the corresponding classification coefficient is greater than or equal to a preset classification coefficient threshold value and the corresponding edge point probability is greater than or equal to a preset edge point probability threshold value;
performing adjacent connection on all the determined actual boundary points to obtain a plurality of boundaries;
carrying out connected domain detection on the foreground areas with the obtained multiple boundaries to obtain multiple connected domains;
classifying the plurality of connected domains according to the pixel points in each connected domain of the plurality of connected domains to obtain a plurality of connected domain categories;
determining the plurality of textile categories according to the plurality of connected domain categories.
10. The method of claim 9, wherein classifying the plurality of connected domains according to the pixel points within each of the plurality of connected domains comprises:
normalizing the gray value corresponding to each pixel point in the connected domain, determining the normalized gray value corresponding to the pixel point, and obtaining a normalized image corresponding to the connected domain;
determining a normalized gray level mean value corresponding to the connected domain according to the normalized gray level value corresponding to each pixel point in the connected domain;
determining a normalized two-dimensional entropy corresponding to the connected domain according to the normalized image corresponding to the connected domain;
combining the normalized gray level mean value and the normalized two-dimensional entropy corresponding to the connected domain into a classification coordinate point corresponding to the connected domain;
and when the distance between the classification coordinate points corresponding to two connected domains in the plurality of connected domains is smaller than a preset classification threshold value, the two connected domains are divided into the same connected domain category.
CN202210859117.XA 2022-07-20 2022-07-20 Robot sorting method based on target detection Pending CN115809981A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210859117.XA CN115809981A (en) 2022-07-20 2022-07-20 Robot sorting method based on target detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210859117.XA CN115809981A (en) 2022-07-20 2022-07-20 Robot sorting method based on target detection

Publications (1)

Publication Number Publication Date
CN115809981A true CN115809981A (en) 2023-03-17

Family

ID=85482347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210859117.XA Pending CN115809981A (en) 2022-07-20 2022-07-20 Robot sorting method based on target detection

Country Status (1)

Country Link
CN (1) CN115809981A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116612126A (en) * 2023-07-21 2023-08-18 青岛国际旅行卫生保健中心(青岛海关口岸门诊部) Container disease vector biological detection early warning method based on artificial intelligence

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116612126A (en) * 2023-07-21 2023-08-18 青岛国际旅行卫生保健中心(青岛海关口岸门诊部) Container disease vector biological detection early warning method based on artificial intelligence
CN116612126B (en) * 2023-07-21 2023-09-19 青岛国际旅行卫生保健中心(青岛海关口岸门诊部) Container disease vector biological detection early warning method based on artificial intelligence

Similar Documents

Publication Publication Date Title
CN102305798B (en) Method for detecting and classifying glass defects based on machine vision
CN108648168A (en) IC wafer surface defects detection methods
Rabby et al. A modified canny edge detection algorithm for fruit detection & classification
CN108169236A (en) A kind of cracks of metal surface detection method of view-based access control model
CN108364291A (en) Grey cloth rapid detection method based on computer vision technique
Çelik et al. Fabric defect detection using linear filtering and morphological operations
CN108647706A (en) Article identification classification based on machine vision and flaw detection method
Ko et al. Defect detection of polycrystalline solar wafers using local binary mean
Kwak et al. Automated defect inspection and classification of leather fabric
Sanghadiya et al. Surface defect detection in a tile using digital image processing: Analysis and evaluation
CN115809981A (en) Robot sorting method based on target detection
Abdellah et al. Defect detection and identification in textile fabric by SVM method
Oni et al. Patterned fabric defect detection and classification (FDDC) techniques: a review
Golkar et al. Ceramic tile border defect detection algorithms in automated visual inspection system
Ma et al. An algorithm for fabric defect detection based on adaptive canny operator
Tolba et al. A self-organizing feature map for automated visual inspection of textile products
Brzakovic et al. An approach to defect detection in materials characterized by complex textures
CN111402225A (en) Cloth folding false detection defect discrimination method
CN112907510B (en) Surface defect detection method
Islam et al. Computer vision-based quality inspection system of transparent gelatin capsules in pharmaceutical applications
Alimohamadi et al. Defect detection in textiles using morphological analysis of optimal Gabor wavelet filter response
Kopaczka et al. Fully Automatic Faulty Weft Thread Detection using a Camera System and Feature-based Pattern Recognition.
CN113591923A (en) Engine rocker arm part classification method based on image feature extraction and template matching
Phothisonothai et al. Automated determination of watermelon ripeness based on image color segmentation and rind texture analysis
Abedini et al. Defect detection on IC wafers based on neural network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination