CN117765528A - Method, device and storage medium for classifying objects - Google Patents

Method, device and storage medium for classifying objects

Info

Publication number
CN117765528A
CN117765528A (application number CN202311651108.2A)
Authority
CN
China
Prior art keywords
contour
target
information
target object
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311651108.2A
Other languages
Chinese (zh)
Inventor
童晓蕾
孙照焱
汪海山
文明珠
张志超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xndt Technology Co ltd
Original Assignee
Xndt Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xndt Technology Co ltd filed Critical Xndt Technology Co ltd
Priority to CN202311651108.2A
Publication of CN117765528A
Legal status: Pending


Landscapes

  • Image Analysis (AREA)

Abstract

A method, apparatus, and storage medium for classifying an object are disclosed. The method comprises: acquiring an X-ray image of a target object to be classified, wherein the target object has a pit, or has a pit and flesh; extracting, based on the X-ray image, multiple pieces of contour information covering the exterior and the interior of the target object; calculating classification-related target feature information from the multiple pieces of contour information; and classifying the target object based on the target feature information. The scheme of the present application improves the accuracy and reliability of target object classification.

Description

Method, device and storage medium for classifying objects
Technical Field
The present application relates generally to the field of image processing technology. More particularly, the present application relates to a method, apparatus, and computer-readable storage medium for classifying an object.
Background
In the detection and classification of foods that have a pit, or a pit and flesh, a visible-light camera is typically used to capture appearance images from which external features are extracted for evaluation and judgment. Here, external features refer to the appearance, size, or weight of such foods. However, detection and classification based on external features alone cannot accurately reflect the condition inside the food, making it difficult to distinguish different grades of food accurately.
In view of the foregoing, it is desirable to provide a solution for classifying objects so as to improve the accuracy and reliability of object classification.
Disclosure of Invention
In order to solve at least one or more of the technical problems mentioned above, the present application proposes, in various aspects, a solution for classifying objects.
In a first aspect, the present application provides a method for classifying an object, comprising: acquiring an X-ray image of a target object to be classified, wherein the target object has a pit, or has a pit and flesh; extracting, based on the X-ray image, multiple pieces of contour information covering the exterior and the interior of the target object; calculating classification-related target feature information from the multiple pieces of contour information; and classifying the target object based on the target feature information.
In one embodiment, before the multiple pieces of contour information are extracted based on the X-ray image, the method further includes: performing a preprocessing operation on the X-ray image to obtain a preprocessed X-ray image.
In another embodiment, the preprocessing operation includes one or more of denoising, smoothing, or correction.
In yet another embodiment, extracting the multiple pieces of contour information covering the exterior and the interior of the target object based on the X-ray image comprises: binarizing the X-ray image to obtain a binarized image; and performing an edge detection operation on the binarized image to extract the multiple pieces of contour information.
In yet another embodiment, the method further comprises: performing filling and/or dilation operations on the binarized image.
In yet another embodiment, the multiple pieces of contour information include inner contour information, outer contour information, pit contour information, or flesh contour information; the target feature information includes at least contour size information, area information, spacing information, or flesh-yield information.
In yet another embodiment, calculating classification-related target feature information from the multiple pieces of contour information comprises: obtaining convex hull information of each contour from the inner contour information, the outer contour information, the pit contour information, and the flesh contour information; and calculating the target feature information from the convex hull information of each contour, wherein the convex hull information includes convex point positions, convex hull areas, and convex hull counts.
In yet another embodiment, when the target object has a pit, calculating the classification-related target feature information from the convex hull information of each contour comprises: calculating the length and width of the outer contour of the target object from the convex point positions of the outer contour, and computing the aspect ratio of the outer contour; calculating the length and width of the pit contour in the target object from the convex point positions of the pit contour, and computing the aspect ratio of the pit contour; calculating the cavity area and the pit area in the target object from the convex hull area of the inner contour and the convex hull area of the pit contour, respectively, and computing the area ratio of the pit area to the cavity area; or calculating the contour spacing from the inner contour to the outer contour in multiple directions from the convex point positions of the outer and inner contours.
In yet another embodiment, classifying the target object based on the target feature information comprises: comparing the aspect ratio of the outer contour, the aspect ratio of the pit contour, the area ratio, or the contour spacing with corresponding preset thresholds; and classifying the target object by shape size or by quality according to the comparison result.
In yet another embodiment, when the target object has a pit and flesh, calculating the classification-related target feature information from the convex hull information of each contour comprises: calculating the corresponding contour radii from the convex hull areas of the outer and inner contours, and computing the contour spacing between the outer and inner contours; calculating the perimeter and/or diameter of the outer contour from its convex point positions; calculating the flesh yield of the target object from the convex hull areas of the pit contour, the flesh contour, and the outer contour; or calculating the number of flesh compartments in the target object from the number of convex hulls of the flesh contour.
In yet another embodiment, classifying the target object based on the target feature information comprises: comparing the contour spacing between the outer and inner contours, the perimeter and/or diameter of the outer contour, the flesh yield, or the number of flesh compartments with corresponding preset thresholds; and classifying the target object by shape size or by quality according to the comparison result.
In yet another embodiment, when the target object has a pit, the target object is betel nut.
In yet another embodiment, when the target object has a pit and flesh, the target object is durian.
In a second aspect, the present application provides an apparatus for classifying an object, comprising: a processor; and a memory in which program instructions for classifying objects are stored, which program instructions, when executed by the processor, cause the apparatus to carry out the embodiments of the aforementioned first aspect.
In a third aspect, the present application provides a computer-readable storage medium having stored thereon computer-readable instructions for classifying an object, which when executed by one or more processors, implement the embodiments of the first aspect described above.
With the above solution for classifying a target object, embodiments of the present application extract, from an X-ray image of a target object (having a pit, or a pit and flesh), multiple pieces of contour information reflecting both its exterior and its interior, calculate classification-related target feature information from that contour information, and classify the target object accordingly. Because the exterior and interior information of the object are obtained at the same time, richer target feature information is available, which greatly improves the accuracy and reliability of classification.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present application will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present application are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar or corresponding parts and in which:
FIG. 1 is an exemplary flow diagram illustrating a method for classifying objects according to an embodiment of the present application;
FIG. 2 is an exemplary schematic diagram of acquiring an X-ray image of a target object;
fig. 3 is an exemplary schematic diagram in which the target object is betel nut, according to an embodiment of the present application;
fig. 4 is an exemplary diagram illustrating calculation of target feature information where the target object is betel nut, according to an embodiment of the present application;
fig. 5 is an exemplary schematic diagram in which the target object is durian, according to an embodiment of the present application;
fig. 6 is an exemplary diagram illustrating calculation of target feature information where the target object is durian, according to an embodiment of the present application;
fig. 7 is an exemplary block diagram illustrating an apparatus for classifying objects according to an embodiment of the present application.
Detailed Description
The following describes the embodiments of the present application clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments herein without inventive effort fall within the scope of the present application.
It should be understood that the terms "comprises" and "comprising," when used in this specification and in the claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application is for the purpose of describing particular embodiments only, and is not intended to be limiting of the application. As used in the specification and claims of this application, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in the present specification and claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the claims, the term "if" may be interpreted, depending on the context, as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrases "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Specific embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is an exemplary flow diagram illustrating a method 100 for classifying an object according to an embodiment of the present application. As shown in fig. 1, at step S101, an X-ray image of a target object to be classified is acquired, where the target object has a pit, or has a pit and flesh. In one implementation scenario, the X-ray image may be acquired by an X-ray acquisition system (e.g., as shown in fig. 2). In some embodiments, a target with a pit may be, for example, betel nut, hazelnut, or macadamia nut; a target with a pit and flesh may be, for example, durian, mangosteen, or jackfruit. Based on the X-ray image obtained as described above, at step S102, multiple pieces of contour information covering the exterior and the interior of the target object are extracted from the X-ray image. In some implementations, this contour information may include inner contour information, outer contour information, pit contour information, or flesh contour information. For example, when the target object has only a pit, it correspondingly yields inner, outer, and pit contour information; when it has a pit and flesh, it correspondingly yields inner, outer, pit, and flesh contour information.
In one embodiment, before the contour information is extracted, a preprocessing operation may be performed on the X-ray image, and the subsequent analysis then proceeds on the preprocessed image. The preprocessing may include, but is not limited to, one or more of denoising, smoothing, or correction; operations such as image enhancement may also be included. Based on the (preprocessed) X-ray image, the image may be binarized to obtain a binarized image, and an edge detection operation may be performed on the binarized image to extract the multiple pieces of contour information covering the exterior and the interior of the target object. In one implementation scenario, filling and/or dilation operations may also be performed on the binarized image to ensure the integrity and connectivity of the contours. In some embodiments, the contour information may be extracted using, for example, the Canny edge detection algorithm.
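As a concrete illustration of this pipeline, a minimal Python/OpenCV sketch follows. The patent names no implementation; the library choice, function structure, and all parameter values (blur kernels, Otsu thresholding, Canny thresholds, morphology kernel) are illustrative assumptions.

```python
import cv2
import numpy as np

def extract_contours(xray_gray: np.ndarray):
    """Sketch of the described pipeline: preprocess -> binarize -> fill/dilate -> contours.
    Expects an 8-bit grayscale X-ray image; all parameter values are assumptions."""
    # Preprocessing: denoise and smooth (the text also mentions correction/enhancement).
    denoised = cv2.medianBlur(xray_gray, 5)
    smoothed = cv2.GaussianBlur(denoised, (5, 5), 0)

    # Binarization; Otsu picks the threshold automatically (an assumption --
    # the patent does not specify the thresholding method).
    _, binary = cv2.threshold(smoothed, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Filling and/or dilation to ensure contour integrity and connectivity.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    dilated = cv2.dilate(closed, kernel, iterations=1)

    # Edge map (the text mentions Canny) plus contour extraction; RETR_CCOMP
    # separates outer contours from inner (hole) contours via the hierarchy,
    # covering both the exterior and the interior of the object.
    edges = cv2.Canny(dilated, 50, 150)
    contours, hierarchy = cv2.findContours(dilated, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
    return contours, hierarchy, edges
```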
Next, at step S103, classification-related target feature information is calculated from the multiple pieces of contour information. In one embodiment, the convex hull of each contour can be obtained from the inner, outer, pit, and flesh contour information, and the target feature information is then calculated from the convex hull information, which comprises convex point positions, convex hull areas, and convex hull counts. That is, embodiments of the present application compute the convex hulls of the inner, outer, pit, and flesh contours from the contour information (for example, the contour coordinates), and determine the classification-related target feature information from the convex point positions, hull areas, and hull counts of those hulls. In some embodiments, the target feature information may include at least contour size information, area information, spacing information, or flesh-yield information. Size information may include, for example, the length, width, perimeter, and diameter of a contour; area information may include the area of each contour or area ratios between contours; spacing information may include the spacing between contours. For targets with a pit and flesh, the flesh yield can also be extracted.
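The convex hull step can be sketched as follows, continuing in Python/OpenCV. The helper name and the dictionary layout are assumptions; only the three quantities collected (convex point positions, hull areas, hull count) come from the text, and grouping contours into inner/outer/pit/flesh sets is assumed to happen upstream (e.g., via the findContours hierarchy).

```python
import cv2

def hull_info(contours):
    """For each contour, collect the convex hull's vertex (convex point)
    positions and area, and return the hull count -- the three pieces of
    convex hull information named in the text."""
    info = []
    for cnt in contours:
        hull = cv2.convexHull(cnt)          # convex point positions, shape (k, 1, 2)
        area = cv2.contourArea(hull)        # convex hull area
        info.append({"points": hull.reshape(-1, 2), "area": area})
    return info, len(info)                  # per-hull info, convex hull count
```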
Specifically, in one implementation scenario, when the target object has a pit, the length and width of the outer contour may be calculated from the convex point positions of the outer contour, giving the aspect ratio of the outer contour; the length and width of the pit contour may be calculated from the convex point positions of the pit contour, giving the aspect ratio of the pit contour; the cavity area and the pit area may be calculated from the convex hull area of the inner contour and the convex hull area of the pit contour, respectively, giving the area ratio of pit area to cavity area; or the contour spacing from the inner contour to the outer contour in multiple directions may be calculated from the convex point positions of the outer and inner contours.
It will be appreciated that, for a target with a pit, the length of the outer contour is the maximum distance between two convex points in the transverse direction, and the width is the maximum distance between two convex points in the longitudinal direction, from which the aspect ratio of the outer contour follows. The length, width, and aspect ratio of the pit contour are determined in the same way. The cavity area and the pit area are the convex hull area of the inner contour and of the pit contour, respectively, from which the pit-to-cavity area ratio follows. The contour spacing from the inner contour to the outer contour is taken in multiple directions, for example the four directions up, down, left, and right. The target feature information of a target with a pit is described in detail later with reference to figs. 3 and 4.
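A sketch of these pit-target features follows, under the simplifying assumption that the "transverse" and "longitudinal" convex-point distances can be read off as the x and y extents of the hull points (the patent fixes no coordinate convention), with hull points passed as NumPy arrays of shape (k, 2):

```python
def pit_features(outer_pts, inner_pts, pit_pts, inner_area, pit_area):
    """Features for a target with a pit (e.g. betel nut); inputs are hull
    point arrays of shape (k, 2) and hull areas, as from hull_info above."""
    # Aspect ratio of the outer contour: R1 = d1 / d2.
    d1 = outer_pts[:, 0].max() - outer_pts[:, 0].min()   # length (transverse)
    d2 = outer_pts[:, 1].max() - outer_pts[:, 1].min()   # width (longitudinal)
    r1_aspect = d1 / d2

    # Aspect ratio of the pit contour: R3 = d3 / d4.
    d3 = pit_pts[:, 0].max() - pit_pts[:, 0].min()
    d4 = pit_pts[:, 1].max() - pit_pts[:, 1].min()
    r3_aspect = d3 / d4

    # Area ratio of pit area to cavity area: R2 = s1 / s2.
    r2_area = pit_area / inner_area

    # Contour spacing from inner to outer contour in four directions
    # (up / down / left / right), i.e. shell thickness per side.
    spacing = {
        "left":  inner_pts[:, 0].min() - outer_pts[:, 0].min(),
        "right": outer_pts[:, 0].max() - inner_pts[:, 0].max(),
        "up":    inner_pts[:, 1].min() - outer_pts[:, 1].min(),
        "down":  outer_pts[:, 1].max() - inner_pts[:, 1].max(),
    }
    return r1_aspect, r2_area, r3_aspect, spacing
```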
In another implementation scenario, when the target object has a pit and flesh, the corresponding contour radii may be calculated from the convex hull area of the outer contour and the convex hull area of the inner contour, giving the contour spacing between the outer and inner contours; the perimeter and/or diameter of the outer contour may be calculated from its convex point positions; the flesh yield may be calculated from the convex hull areas of the pit contour, the flesh contour, and the outer contour; or the number of flesh compartments in the target object may be calculated from the number of convex hulls of the flesh contour.
It will be appreciated that, for a target with a pit and flesh, the convex hull areas of the outer and inner contours may each be treated as the area of an equivalent circle; from the circle-area formula S = πr², the contour radius corresponding to each of the outer and inner contours can be calculated. The perimeter of the outer contour is the sum of the distances between consecutive convex points of its convex hull, and the diameter of the outer contour is the maximum distance between any two convex points of that hull. The flesh yield of the target is the ratio of the pure-flesh area to the outer contour area, where the pure-flesh area is the difference between the convex hull area of the flesh contour and the convex hull area of the pit contour. In addition, for a durian target, the number of flesh compartments can be counted from the number of convex hulls of the flesh contour; for example, 3 flesh hulls correspond to a 3-compartment durian. The target feature information of a target with a pit and flesh is described in detail later with reference to figs. 5 and 6.
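These pit-and-flesh features can be sketched as follows; the equivalent-circle radii follow S = πr², and the perimeter/diameter definitions follow the convex-point description above. Function and argument names are illustrative assumptions, and the per-hull flesh/pit areas are assumed computed as in hull_info above.

```python
import numpy as np
from itertools import combinations

def durian_features(outer_pts, outer_area, inner_area, flesh_areas, pit_areas):
    """Features for a target with a pit and flesh (e.g. durian)."""
    # Equivalent radii: r = sqrt(S / pi); peel thickness = r1 - r2.
    r1 = np.sqrt(outer_area / np.pi)
    r2 = np.sqrt(inner_area / np.pi)
    peel_thickness = r1 - r2

    # Perimeter C: sum of distances between consecutive convex points;
    # diameter D: maximum distance between any two convex points.
    closed = np.vstack([outer_pts, outer_pts[:1]])   # close the hull polygon
    perimeter = np.sum(np.linalg.norm(np.diff(closed, axis=0), axis=1))
    diameter = max(np.linalg.norm(p - q)
                   for p, q in combinations(outer_pts.astype(float), 2))

    # Flesh yield w = (S3 - S4) / S1 and compartment count n.
    s3, s4 = sum(flesh_areas), sum(pit_areas)
    flesh_yield = (s3 - s4) / outer_area
    n_compartments = len(flesh_areas)
    return peel_thickness, perimeter, diameter, flesh_yield, n_compartments
```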
After the above target feature information is obtained, at step S104, the target object is classified based on it. In one embodiment, when the target object has a pit, the aspect ratio of the outer contour, the aspect ratio of the pit contour, the area ratio, or the contour spacing is compared with a corresponding preset threshold, and the target object is classified by shape size or by quality according to the comparison result. In general, for pitted targets such as hazelnut and macadamia nut, the closer the aspect ratio of the outer contour is to 1, the closer the contour is to a circle, and the shape size of the outer contour can be judged accordingly. A pit-to-cavity area ratio below a preset threshold means a smaller pit and, for hazelnut or macadamia, poorer quality; for betel nut, by contrast, a smaller pit means better quality. In some embodiments, the shape size and quality may also be judged by combining the outer-contour aspect ratio with the pit-to-cavity area ratio. In addition, the shell thickness of the target can be determined from the contour spacing: a thick shell indicates poor quality for targets such as hazelnut or macadamia, whereas for betel nut a thicker husk indicates better quality.
In another embodiment, when the target object has a pit and flesh, the contour spacing between the outer and inner contours, the perimeter and/or diameter of the outer contour, the flesh yield, or the number of flesh compartments is compared with a corresponding preset threshold, and the target object is classified by shape size or by quality according to the comparison result. As before, the peel thickness can be determined from the contour spacing, a thick peel indicating poorer quality. The closer the ratio of the outer contour's perimeter to its diameter is to π, the closer the target is to a circle and the better its quality; a fruit diameter exceeding the diameter threshold indicates a larger fruit and better quality. In addition, quality can be judged from the flesh yield or the number of flesh compartments against corresponding thresholds: a flesh yield above its threshold, or a compartment count above its threshold, indicates better quality.
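A sketch of the threshold comparison for the pit-and-flesh case follows, consuming the tuple returned by durian_features above. Every threshold value, the dictionary keys, and the two grade labels are assumptions for illustration; the patent only specifies comparison against "corresponding preset thresholds".

```python
import numpy as np

def classify_durian(features, thresholds):
    """Threshold-based grading sketch; thresholds and labels are assumed."""
    peel, perimeter, diameter, flesh_yield, n_comp = features
    checks = [
        abs(perimeter / diameter - np.pi) < thresholds["roundness_tol"],  # near-circular fruit
        diameter > thresholds["min_diameter"],        # larger fruit diameter
        peel < thresholds["max_peel_thickness"],      # thin peel
        flesh_yield > thresholds["min_flesh_yield"],  # high flesh yield
        n_comp >= thresholds["min_compartments"],     # e.g. 5 compartments = full
    ]
    return "premium" if all(checks) else "standard"
```

The pit-only case (betel nut, hazelnut) works the same way on R1, R2, R3, and the spacings, with the comparison direction flipped per commodity as described above.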
As can be seen from the above description, embodiments of the present application extract, from the X-ray image of a target object having a pit, or a pit and flesh, multiple pieces of contour information reflecting both its exterior and its interior, then calculate classification-related target feature information from that contour information and classify the target object. Richer target feature information, covering both the exterior and the interior of the object, is thereby obtained, greatly improving the accuracy and reliability of classification.
Fig. 2 is an exemplary schematic diagram of acquiring an X-ray image of a target object. As shown in fig. 2, the object 201 to be classified is first transported to the X-ray acquisition system by a drive mechanism (e.g., a conveyor belt) 202. The X-ray acquisition system may comprise an X-ray source 203 arranged above the conveyor 202 and a detector 204 arranged below it, as shown in the figure. In one implementation scenario, as the object 201 passes through the acquisition system, the X-ray source 203 emits X-rays toward the object 201, and the detector 204 receives the rays transmitted through the object to acquire data about it. During acquisition, the system scans the object line by line along the travel direction of the conveyor 202; after multiple lines have been scanned, the data can be processed into a two-dimensional X-ray image by the data processing unit 205. The object may be a food item having a pit (e.g., betel nut) or having a pit and flesh (e.g., durian).
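As a rough illustration of this line-scan assembly, the following sketch stacks successive detector rows into a two-dimensional image; the acquisition interface and the normalization step are assumptions, not part of the patent.

```python
import numpy as np

def assemble_line_scan(detector_rows):
    """Form the 2-D X-ray image from successive detector line scans as the
    object moves along the conveyor: each scan becomes one image row.
    detector_rows is assumed to be an iterable of 1-D intensity arrays."""
    image = np.stack(list(detector_rows), axis=0).astype(np.float64)
    # Normalize to 8-bit for the downstream OpenCV pipeline.
    image -= image.min()
    image *= 255.0 / max(image.max(), 1e-9)
    return image.astype(np.uint8)
```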
Fig. 3 is an exemplary schematic diagram in which the target object is betel nut, according to an embodiment of the present application. Diagram (a) of fig. 3 shows the X-ray image of the betel nut. In some embodiments, preprocessing operations such as denoising, smoothing, correction, or image enhancement may be performed on this image. A binarization operation may then be applied to obtain a binarized image, as shown for example in diagram (b) of fig. 3. In some implementations, filling and/or dilation operations may also be performed on the binarized image to ensure the integrity and connectivity of the contours. Further, as shown in diagram (c) of fig. 3, the Canny detection algorithm, for example, may be used to extract the betel nut's inner contour C1, outer contour C2, and pit contour C3, and the convex hull corresponding to each contour may be obtained by convex hull computation: for example, the hull c1 corresponding to inner contour C1, hull c2 corresponding to outer contour C2, and hull c3 corresponding to pit contour C3 shown in diagram (d) of fig. 3. From these hulls, the aspect ratio of the outer contour, the aspect ratio of the pit contour, the area ratio of pit area to cavity area, or the contour spacing from the inner to the outer contour in multiple directions can be calculated.
Fig. 4 is an exemplary diagram illustrating calculation of target feature information where the target object is betel nut, according to an embodiment of the present application. As shown in diagram (a) of fig. 4, let d1 be the maximum distance between two convex points in the transverse direction of the betel nut's outer contour and d2 the maximum distance between two convex points in the longitudinal direction; the aspect ratio of the outer contour is then R1 = d1/d2. Let s1 be the pit area and s2 the cavity area; the area ratio of pit area to cavity area is R2 = s1/s2. Let d3 and d4 be the maximum transverse and longitudinal convex-point distances on the pit contour; the aspect ratio of the pit contour is R3 = d3/d4. Diagram (b) of fig. 4 shows the spacings D1, D2, D3, and D4 between the inner and outer contours in the four directions up, down, left, and right.
From the foregoing, betel nuts can be classified by shape size or by quality by comparing the outer-contour aspect ratio R1, the pit-to-cavity area ratio R2, the pit-contour aspect ratio R3, or the inner-to-outer contour spacings D1 through D4 with the corresponding thresholds. For example, by setting a roundness threshold on the aspect ratio (the closer the value is to 1, the closer the contour is to a circle), betel nuts can be classified by shape size from R1. The thickness of the fiber husk can be computed from the contour spacings D1 through D4, and betel nuts classified by quality against a thickness threshold (husk thickness nearer the middle affects quality more than at the two side positions). Betel nuts can be classified by quality from the area ratio R2 against an area threshold (the smaller the ratio, the better the quality), or from the pit aspect ratio R3 against a pit-roundness threshold (a flat pit is better than a round pit). On this basis, classification of betel nuts is achieved.
Fig. 5 is an exemplary schematic diagram in which the target object is durian, according to an embodiment of the present application. Diagram (a) of fig. 5 shows the X-ray image of the durian. In some embodiments, preprocessing operations such as denoising, smoothing, correction, or image enhancement may be performed on this image. A binarization operation may then be applied to obtain a binarized image, as shown for example in diagram (b) of fig. 5. In some implementations, filling and/or dilation operations may also be performed on the binarized image to ensure the integrity and connectivity of the contours. Further, as shown in diagram (c) of fig. 5, the Canny detection algorithm, for example, may be used to extract the durian's inner contour G1, outer contour G2, flesh contour G3, and pit contour G4, and the convex hull corresponding to each contour may be obtained by convex hull computation: for example, the hull g1 corresponding to inner contour G1, hull g2 corresponding to outer contour G2, hull g3 corresponding to flesh contour G3, and hull g4 corresponding to pit contour G4 shown in diagram (d) of fig. 5. From these hulls, the contour radii of the outer and inner contours, the spacing between them, the perimeter and/or diameter of the outer contour, the flesh yield, or the number of flesh compartments can be calculated.
Fig. 6 is an exemplary diagram illustrating calculation of target feature information where the target object is durian, according to an embodiment of the present application. As shown in fig. 6, let S1 denote the area of the durian's outer contour and S2 the area of its inner contour, each treated as the area of an equivalent circle. Then r1 = √(S1/π) and r2 = √(S2/π) give the radii of the outer and inner contours, and the contour spacing r1 − r2 gives the peel thickness. Further, the perimeter C of the outer contour is calculated as the sum of the distances between consecutive convex points of its convex hull, and the maximum distance D between any two convex points of that hull is taken as the fruit diameter, shown for example by the straight line in the figure. Further, let S3 denote the total convex hull area of the flesh contours (the figure contains, e.g., 5 flesh hulls) and S4 the total convex hull area of the pit contours (e.g., 5 pit hulls); the flesh yield is then w = (S3 − S4)/S1, where S3 − S4 is the pure-flesh area and S1 is the outer-contour area. The number of durian compartments n can also be counted from the number of flesh hulls: the figure contains 5 flesh hulls, so n = 5.
From the foregoing, durians can be classified by shape size or by quality by comparing the contour spacing r1 − r2 between the outer and inner contours, the outer-contour perimeter C, the fruit diameter D, the flesh yield w, or the compartment count n with the corresponding thresholds. For example, it may be determined whether the perimeter C and the fruit diameter D are close to the relation C = πD: with π set as the threshold, the closer the ratio of C to D is to π, the closer the fruit is to a circle, the fuller it is, and the better its quality, so durians can be classified by quality on this basis. By setting a fruit-diameter threshold and comparing D against it, durians can be classified by shape size. Further, a peel-thickness threshold may be set and the spacing r1 − r2 compared against it: exceeding the threshold means a thicker peel and poorer quality, again allowing classification by quality. In addition, durian flesh generally occupies 3 to 5 compartments, with 5 compartments considered full; the compartment-count threshold can therefore be set to 5, and durians classified by comparing n against it. Durians can also be classified by quality by setting a flesh-yield threshold, a smaller yield indicating poorer quality. Finally, the ratio S4/S3 of the total pit-contour area S4 to the total flesh-contour area S3 can be computed as the pit fraction; with a pit-fraction threshold set, exceeding it is considered poor quality, again allowing classification by quality. On this basis, classification of durians is achieved.
As can be seen from the above description, embodiments of the present application analyze the X-ray image of a target object having a pit, or a pit and flesh, extract multiple pieces of contour information covering its exterior and interior, and calculate classification-related target feature information from that contour information in order to classify the target object. The richer target feature information, including both exterior and interior information of the object, greatly improves the accuracy and reliability of classification.
Fig. 7 is an exemplary block diagram illustrating an apparatus 700 for classifying objects according to an embodiment of the present application. As shown in fig. 7, the apparatus 700 may include a processor 701 and a memory 702 that communicate over a bus. The memory 702 stores program instructions for classifying objects which, when executed by the processor 701, implement the method steps described above in connection with the drawings: acquiring an X-ray image of a target object to be classified, where the target object has a pit, or has a pit and flesh; extracting, based on the X-ray image, multiple pieces of contour information covering the exterior and the interior of the target object; calculating classification-related target feature information from the contour information; and classifying the target object based on the target feature information.
Those skilled in the art will also appreciate from the foregoing description, taken in conjunction with the accompanying drawings, that embodiments of the present application may also be implemented in software programs. The present application thus also provides a computer readable storage medium. The computer readable storage medium has stored thereon computer readable instructions for classifying an object, which when executed by one or more processors, implement the method for classifying an object described in connection with fig. 1 of the present application.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
It should be noted that although the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in that particular order or that all of the illustrated operations be performed in order to achieve desirable results. Rather, the steps depicted in the flowcharts may change the order of execution. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform.
It should be understood that when the terms "first," "second," "third," and "fourth," etc. are used in the claims, the specification and the drawings of this application, they are used merely to distinguish between different objects and not to describe a particular sequence. The terms "comprises" and "comprising," when used in the specification and claims of this application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application is for the purpose of describing particular embodiments only, and is not intended to be limiting of the application. As used in the specification and claims of this application, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in the present specification and claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
While various embodiments of the present application have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous modifications, changes, and substitutions will occur to those skilled in the art without departing from the spirit and scope of the present application. It should be understood that various alternatives to the embodiments of the present application described herein may be employed in practicing the application. The appended claims are intended to define the scope of the application and are therefore to cover all equivalents and alternatives falling within the scope of these claims.

Claims (15)

1. A method for classifying an object, comprising:
acquiring an X-ray image of a target object to be classified, wherein the target object has a pit, or has a pit and flesh;
extracting, based on the X-ray image, multiple pieces of contour information covering the exterior and the interior of the target object;
calculating classification-related target feature information from the multiple pieces of contour information; and
classifying the target object based on the target feature information.
2. The method of claim 1, wherein before the multiple pieces of contour information are extracted based on the X-ray image, the method further comprises:
performing a preprocessing operation on the X-ray image to obtain a preprocessed X-ray image.
3. The method of claim 2, wherein the preprocessing operation includes one or more of denoising, smoothing, or correction.
4. The method of claim 1, wherein extracting the multiple pieces of contour information covering the exterior and the interior of the target object based on the X-ray image comprises:
binarizing the X-ray image to obtain a binarized image; and
performing an edge detection operation on the binarized image to extract the multiple pieces of contour information.
5. The method of claim 4, further comprising:
and performing filling and/or expanding operations on the binarized image.
6. The method of claim 4, wherein the multiple pieces of contour information include inner contour information, outer contour information, pit contour information, or flesh contour information; and the target feature information includes at least contour size information, area information, spacing information, or flesh-yield information.
7. The method of claim 6, wherein calculating the classification-related target feature information from the multiple pieces of contour information comprises:
obtaining convex hull information of each contour from the inner contour information, the outer contour information, the pit contour information, and the flesh contour information; and
calculating the classification-related target feature information from the convex hull information of each contour, wherein the convex hull information includes convex point positions, convex hull areas, and convex hull counts.
8. The method of claim 7, wherein, when the target object has a pit, calculating the classification-related target feature information from the convex hull information of each contour comprises:
calculating the length and width of the outer contour of the target object from the convex point positions of the outer contour, and computing the aspect ratio of the outer contour;
calculating the length and width of the pit contour in the target object from the convex point positions of the pit contour, and computing the aspect ratio of the pit contour;
calculating the cavity area and the pit area in the target object from the convex hull area of the inner contour and the convex hull area of the pit contour, respectively, and computing the area ratio of the pit area to the cavity area; or
calculating the contour spacing from the inner contour to the outer contour of the target object in multiple directions from the convex point positions of the outer contour and the inner contour.
9. The method of claim 8, wherein classifying the target object based on the target feature information comprises:
comparing the aspect ratio of the outer contour, the aspect ratio of the pit contour, the area ratio, or the contour spacing with corresponding preset thresholds; and
classifying the target object by shape size or by quality according to the comparison result.
10. The method of claim 7, wherein, when the target object has a pit and flesh, calculating the classification-related target feature information from the convex hull information of each contour comprises:
calculating the corresponding contour radii from the convex hull area of the outer contour and the convex hull area of the inner contour, respectively, and computing the contour spacing between the outer contour and the inner contour;
calculating the perimeter and/or diameter of the outer contour from the convex point positions of the outer contour;
calculating the flesh yield of the target object from the convex hull area of the pit contour, the convex hull area of the flesh contour, and the convex hull area of the outer contour; or
calculating the number of flesh compartments in the target object from the number of convex hulls of the flesh contour.
11. The method of claim 10, wherein classifying the target object based on the target feature information comprises:
comparing the contour spacing between the outer contour and the inner contour, the perimeter and/or diameter of the outer contour, the flesh yield, or the number of flesh compartments with corresponding preset thresholds; and
classifying the target object by shape size or by quality according to the comparison result.
12. The method of claim 8 or 9, wherein, when the target object has a pit, the target object is betel nut.
13. The method of claim 10 or 11, wherein, when the target object has a pit and flesh, the target object is durian.
14. An apparatus for classifying an object, comprising:
a processor; and
a memory in which program instructions for classifying an object are stored, which program instructions, when executed by the processor, cause the apparatus to carry out the method according to any one of claims 1-13.
15. A computer readable storage medium having stored thereon computer readable instructions for classifying an object, which computer readable instructions, when executed by one or more processors, implement the method of any of claims 1-13.
CN202311651108.2A 2023-12-04 2023-12-04 Method, device and storage medium for classifying objects Pending CN117765528A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311651108.2A CN117765528A (en) 2023-12-04 2023-12-04 Method, device and storage medium for classifying objects


Publications (1)

Publication Number Publication Date
CN117765528A true CN117765528A (en) 2024-03-26

Family

ID=90315396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311651108.2A Pending CN117765528A (en) 2023-12-04 2023-12-04 Method, device and storage medium for classifying objects

Country Status (1)

Country Link
CN (1) CN117765528A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160187199A1 (en) * 2014-08-26 2016-06-30 Digimarc Corporation Sensor-synchronized spectrally-structured-light imaging
US20210248430A1 (en) * 2018-08-31 2021-08-12 Nec Corporation Classification device, classification method, and recording medium
CN114354637A (en) * 2022-01-20 2022-04-15 河北工业大学 Fruit quality comprehensive grading method and device based on machine vision and X-ray
CN114841974A (en) * 2022-05-11 2022-08-02 北京市真我本色科技有限公司 Nondestructive testing method and system for internal structure of fruit, electronic equipment and medium
CN114998614A (en) * 2022-08-08 2022-09-02 浪潮电子信息产业股份有限公司 Image processing method, device and equipment and readable storage medium
CN115908257A (en) * 2022-10-19 2023-04-04 盒马(中国)有限公司 Defect recognition model training method and fruit and vegetable defect recognition method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination