CN116309835A - Image processing method, device, equipment and storage medium for food sorting

Info

Publication number
CN116309835A
Authority
CN
China
Prior art keywords: image, food, sorting, picked, impurity
Prior art date
Legal status
Pending
Application number
CN202310234500.0A
Other languages
Chinese (zh)
Inventor
汪嘉杰
戴至修
聂磊
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202310234500.0A
Publication of CN116309835A

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B07 - SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07C - POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C 5/00 - Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C 5/34 - Sorting according to other particular properties
    • B07C 5/342 - Sorting according to other particular properties according to optical properties, e.g. colour
    • B07C 5/3422 - Sorting according to other particular properties according to optical properties, e.g. colour using video scanning devices, e.g. TV-cameras
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B07 - SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07C - POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C 5/00 - Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C 5/36 - Sorting apparatus characterised by the means used for distribution
    • B07C 5/361 - Processing or control devices therefor, e.g. escort memory
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30108 - Industrial image inspection
    • G06T 2207/30128 - Food products
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 - Computing systems specially adapted for manufacturing

Abstract

The disclosure provides an image processing method, device, equipment and storage medium for food sorting, relating to the technical field of artificial intelligence, and in particular to the fields of computer vision, image processing, deep learning and the like. The specific implementation scheme is as follows: segmenting at least one target region in a food image by using an image segmentation model; determining an impurity region to be picked in the at least one target region; and acquiring picking position information of an impurity in the food based on the centroid coordinates of the impurity region to be picked, wherein the picking position information is used for instructing a sorting execution device to pick the impurity. According to this technical scheme, the accuracy of the picking position information can be improved. When the sorting execution device picks the impurity according to the picking position information, food breakage caused by an offset of the picking center of gravity can also be avoided, thereby improving picking efficiency and precision while reducing labor cost.

Description

Image processing method, device, equipment and storage medium for food sorting
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular, to the fields of computer vision, image processing, deep learning, and the like.
Background
In conventional food processing, raw material handling often involves manual carrying and sorting, where the sorting process includes impurity picking and classification. Some foods are light in weight yet mostly fragile and easily deformed. Manual sorting of such foods suffers from high labor cost, low efficiency, and unstable precision caused by inconsistent evaluation standards.
Disclosure of Invention
The present disclosure provides an image processing method, apparatus, device and storage medium for food sorting.
According to an aspect of the present disclosure, there is provided an image processing method for food sorting, including:
segmenting at least one target region in a food image by using an image segmentation model;
determining an impurity region to be picked in the at least one target region;
acquiring picking position information of an impurity in the food based on the centroid coordinates of the impurity region to be picked; wherein the picking position information is used for instructing a sorting execution device to pick the impurity.
According to another aspect of the present disclosure, there is provided an image processing apparatus for sorting food products, including:
the image segmentation module is used for segmenting at least one target area in the food image by utilizing the image segmentation model;
A region determination module for determining an impurity region to be picked in the at least one target region;
the position determining module is used for obtaining picking position information of the impurity in the food based on the centroid coordinates of the impurity region to be picked; wherein the picking position information is used for instructing the sorting execution device to pick the impurity.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform a method according to any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method according to any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a food sorting system including an image acquisition device, a sorting performing apparatus, and an electronic apparatus according to an embodiment of the present disclosure; the image acquisition device is used for acquiring food images and sending the food images to the electronic equipment; the sorting execution device is used for receiving sorting indication information from the electronic device and sorting impurities in food based on the sorting indication information.
By adopting the technical scheme of the embodiment of the disclosure, the method has the following beneficial effects:
because the image segmentation model can segment target regions of irregular shape, the centroid coordinates of the impurity region to be picked, which is determined in the at least one target region, can accurately represent the centroid position of an irregularly shaped food impurity, and using these centroid coordinates to obtain the picking position information improves the accuracy of that information. When the sorting execution device picks the impurity according to the picking position information, food breakage caused by an offset of the picking center of gravity can also be avoided, thereby improving picking efficiency and precision while reducing labor cost.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
Fig. 1 is a schematic diagram of an exemplary application scenario according to an embodiment of the present disclosure;
Fig. 2 is a flow diagram of an image processing method for food sorting according to an embodiment of the present disclosure;
Fig. 3 is a schematic illustration of a target area in an embodiment of the present disclosure;
Fig. 4 is a flow diagram of an image processing method for food sorting according to another embodiment of the present disclosure;
Fig. 5 is a schematic diagram of another exemplary application scenario according to an embodiment of the present disclosure;
Fig. 6 is a schematic diagram of yet another exemplary application scenario according to an embodiment of the present disclosure;
Fig. 7 is a flow diagram of an image processing method for food sorting according to yet another embodiment of the present disclosure;
Fig. 8 is a schematic block diagram of an image processing apparatus for food sorting provided in an embodiment of the present disclosure;
Fig. 9 is a schematic block diagram of an image processing apparatus for food sorting provided by another embodiment of the present disclosure;
Fig. 10 is a schematic block diagram of an image processing apparatus for food sorting provided in yet another embodiment of the present disclosure;
Fig. 11 is a schematic block diagram of an image processing apparatus for food sorting provided in yet another embodiment of the present disclosure;
Fig. 12 is a schematic block diagram of an image processing apparatus for food sorting provided in yet another embodiment of the present disclosure;
Fig. 13 is a schematic block diagram of an image processing apparatus for food sorting provided in yet another embodiment of the present disclosure;
Fig. 14 is a schematic block diagram of an image processing apparatus for food sorting provided in yet another embodiment of the present disclosure;
Fig. 15 is a block diagram of an electronic device for implementing an image processing method for food sorting according to an embodiment of the present disclosure;
Fig. 16 is a schematic block diagram of a food sorting system according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In order to facilitate understanding of the image processing method for food sorting according to the embodiment of the present disclosure, an application scenario of the embodiment of the present disclosure is first described below. Fig. 1 is a schematic diagram of an exemplary application scenario of an embodiment of the present disclosure. As shown in fig. 1, the application scenario includes an electronic device 101. The electronic device 101 is, for example, a terminal, a server, or another processing device in a stand-alone, multi-machine, or clustered system, where the terminal may be a UE (User Equipment), a mobile device, a PDA (Personal Digital Assistant), a handheld device, a computing device, an in-vehicle device, a wearable device, etc. In some alternative implementations, the electronic device 101 may implement the image processing method for food sorting of the embodiments of the present disclosure by way of a processor invoking computer-readable instructions stored in a memory. The method may determine picking position information based on the food image.
As shown in fig. 1, in this application scenario, an image acquisition device 102 and a sorting execution device 103 are also included.
The image acquisition device 102 is configured to capture food images; specifically, it captures food images while facing the carrying tray 104 on which the food is placed. The image acquisition device 102 is also configured to transmit the food images to the electronic device 101, so that the electronic device 101 determines the picking position information based on the food images. In practical applications, the image acquisition device 102 may be fixed at the end of the sorting execution device 103, as in the example of fig. 1, or may be separate from the sorting execution device 103.
The sorting execution device 103 is configured to pick impurities out of the food according to the picking position information sent by the electronic device 101. The sorting execution device 103 may be, for example, an automated robot or a robotic arm. Illustratively, the end of the sorting execution device 103 may be provided with a suction pen, which picks out impurities from the food by suction.
The following describes the technical scheme in the embodiments of the present disclosure in detail with reference to the accompanying drawings.
Fig. 2 is a flow diagram of an image processing method for food sorting according to an embodiment of the present disclosure. The method may be performed by an image processing apparatus, which may be deployed in the above-described electronic device by way of example, but is not limited thereto. As shown in fig. 2, the method may include:
Step S210, segmenting at least one target region in the food image by using an image segmentation model;
Step S220, determining an impurity region to be picked in the at least one target region;
Step S230, obtaining picking position information of the impurity in the food based on the centroid coordinates of the impurity region to be picked; wherein the picking position information is used to instruct the sorting execution device to pick the impurity.
In the above step S210, the image segmentation model may be a neural network model based on deep learning. Alternatively, the image segmentation model may adopt a network framework such as OCRNet (Object Contextual Representation Network), U-Net, or Fast-SCNN (Fast Segmentation Convolutional Neural Network).
Illustratively, the food image is input to the image segmentation model, in which the food image may be semantically segmented into a plurality of image regions, including at least one target region. Here, a target region may refer to a segmented image region of a specific form within a predetermined scale range, such as a dot-like region or a stripe-like region. Fig. 3 shows a schematic diagram of a target area in an embodiment of the present disclosure. As shown in fig. 3, the dot-like region 302 and the stripe-like regions 301 and 303 segmented in the food image are target regions. In practical applications, target regions such as dot-like and stripe-like regions segmented in a food image are often the image regions where impurities are located.
Training the image segmentation model with a large number of sample images can give the model the ability to identify and segment regions of the specific forms. Optionally, before the step S210, a plurality of sample images may be obtained through data enhancement, and the image segmentation model is trained to convergence using these sample images. Obtaining the sample images through data enhancement may include: performing data enhancement on a plurality of food images containing impurities of different scales and different categories to obtain the plurality of sample images. The data enhancement may consist of operations such as flipping, rotating, cropping, scaling, and translating each food image to obtain one or more enhanced images corresponding to it, where both the food images and the corresponding enhanced images may be used as sample images.
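As an illustration only, the flipping, rotating, cropping, scaling, and translating operations described above could look like the following Python/OpenCV sketch; the function name, rotation angle, crop margins, and translation offsets are hypothetical choices, not values from the disclosure.

```python
import cv2
import numpy as np

def enhance(image: np.ndarray) -> list:
    """Produce several enhanced copies of one food image (a minimal sketch)."""
    h, w = image.shape[:2]
    enhanced = []
    enhanced.append(cv2.flip(image, 1))  # horizontal flip
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), 15, 1.0)
    enhanced.append(cv2.warpAffine(image, rot, (w, h)))  # rotate by 15 degrees
    enhanced.append(image[h // 10:h - h // 10, w // 10:w - w // 10])  # central crop
    enhanced.append(cv2.resize(image, (w // 2, h // 2)))  # scale to half size
    shift = np.float32([[1, 0, 30], [0, 1, 20]])
    enhanced.append(cv2.warpAffine(image, shift, (w, h)))  # translate by (30, 20) px
    return enhanced
```

For segmentation training, the same geometric transform would also have to be applied to the corresponding label mask so that image and annotation stay aligned.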
In the embodiment of the present disclosure, the impurity region to be picked may refer to the image region in which an impurity to be picked out of the food is located. In practical applications, all or only part of the impurities may be picked, and accordingly all or part of the target regions obtained by segmentation may be regarded as impurity regions to be picked. Illustratively, in the above step S220, the at least one target region may be filtered or screened according to a preconfigured rule to obtain the impurity region to be picked, where the number of impurity regions to be picked may be one or more.
In the above step S230, the centroid coordinates of the impurity region to be picked, that is, the pixel coordinates of the region's centroid point in the image coordinate system, are determined from the pixel coordinates of the pixels in the impurity region to be picked. It will be appreciated that these coordinates characterize the location of the centroid of the impurity to be picked. By converting the coordinates, picking position information that the sorting execution device can identify and understand is obtained. For example, the picking position information may be the coordinates of the impurity centroid in a world coordinate system: the electronic device converts the centroid coordinates of the impurity region to be picked into coordinates in the world coordinate system and uses them as the picking position information, and the sorting execution device moves the suction pen to the point corresponding to those world coordinates. As another example, the picking position information may be the displacement of the suction pen to the impurity centroid: the electronic device converts the centroid coordinates into world coordinates, determines from them the displacement the suction pen must travel, and the sorting execution device moves the suction pen according to that displacement.
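For illustration, a minimal sketch of computing the centroid pixel coordinates of segmented regions with OpenCV's connected-component statistics; the binary-mask input is an assumption about how the segmentation output is represented, not the patent's stated implementation.

```python
import cv2
import numpy as np

def region_centroids(mask: np.ndarray) -> list:
    """Return the centroid pixel coordinates (x, y) of each nonzero region in a binary mask."""
    num_labels, _, _, centroids = cv2.connectedComponentsWithStats(
        (mask > 0).astype(np.uint8), connectivity=8
    )
    # Label 0 is the background; the remaining labels are target regions.
    return [tuple(centroids[i]) for i in range(1, num_labels)]
```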
According to the method provided by the embodiment of the disclosure, the image segmentation model can segment target regions of irregular shape, so the centroid coordinates of the impurity region to be picked, determined in the at least one target region, can accurately represent the centroid position of an irregularly shaped food impurity, and using these centroid coordinates to obtain the picking position information improves its accuracy. When the sorting execution device picks the impurity according to the picking position information, food breakage caused by an offset of the picking center of gravity can also be avoided, thereby improving picking efficiency and precision while reducing labor cost.
Optionally, in some embodiments of the present disclosure, the image processing method for food sorting may further include:
determining a model of a suction pen for picking the impurity based on an area of the impurity region to be picked;
sending sorting indication information to the sorting execution device; wherein the sorting indication information is used for indicating the picking position information and the suction pen model.
For example, area thresholds corresponding to each suction pen model, such as an upper area limit and a lower area limit, may be preset, and the suction pen model applicable to the impurity region to be picked may then be determined by comparing the area of that region with the thresholds of each model.
For example, assuming that impurities with a diameter of 2 mm or more must be sucked with a large suction pen, the 2 mm diameter is first converted into a pixel area in the image, for example 1500 pixels; when the area of the impurity region to be picked is larger than 1500 pixels, the suction pen model is determined to be large.
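A minimal sketch of this area-threshold rule; the 1500-pixel value comes from the example above, while the model names and the single-threshold design are hypothetical simplifications.

```python
def select_suction_pen(region_area_px: float, large_threshold_px: float = 1500.0) -> str:
    """Map the pixel area of an impurity region to a suction pen model."""
    return "large" if region_area_px > large_threshold_px else "small"
```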
According to the above optional embodiment, when the electronic device indicates the picking position information to the sorting execution device, it also indicates the suction pen model, so that the sorting execution device can pick impurities with a suction pen of the corresponding model. This avoids picking failure or food breakage caused by an unsuitable suction pen size, thereby ensuring picking precision.
The electronic device may indicate the suction pen model by carrying the model in the sorting indication information. Alternatively, the electronic device may indicate the suction pen model by carrying a suction pen switching flag in the sorting indication information. For example, a small suction pen is used for impurity picking by default, and when the sorting indication information carries the switching flag, a large suction pen is used instead.
Optionally, in an exemplary embodiment, the step S210 of segmenting at least one target region in the food image using the image segmentation model may include: slicing the food image into M image slices, wherein M is an integer greater than or equal to 2; inputting each of the M image slices into the image segmentation model to obtain the target regions output by the image segmentation model for each image slice; and obtaining the at least one target region from the target regions respectively output by the image segmentation model for each image slice.
That is, a large food image is sliced into a plurality of image slices, and these slices are input to the image segmentation model, so that the model processes small-size image slices. Processing the plurality of small image slices with the image segmentation model yields the target regions in each slice, and aggregating them yields the at least one target region in the food image.
For example, the size of the food image acquired by the image acquisition device is 5472×3648, the food image can be uniformly segmented into 36 blocks, and the resolution of each block is 912×608, so that the input image size of the image segmentation model can be set to 912×608.
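A sketch of this uniform slicing, assuming the image dimensions divide evenly into the tile grid (5472 = 6 x 912, 3648 = 6 x 608); the function name and grid parameters are illustrative.

```python
import numpy as np

def slice_image(image: np.ndarray, rows: int = 6, cols: int = 6) -> list:
    """Split an image into rows x cols equal tiles, e.g. 5472x3648 -> 36 tiles of 912x608."""
    h, w = image.shape[:2]
    tile_h, tile_w = h // rows, w // cols
    return [
        image[r * tile_h:(r + 1) * tile_h, c * tile_w:(c + 1) * tile_w]
        for r in range(rows) for c in range(cols)
    ]
```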
Because the input image size affects the computation parameters and model complexity of the image segmentation model, and hence its accuracy and speed, the embodiment of the disclosure can optimize both by slicing the food image and processing image slices. Processing many small image slices instead of one large image once can greatly improve the processing speed.
Optionally, in practical applications, a deep learning inference optimizer such as TensorRT may further be used to accelerate the image segmentation model, thereby further increasing the speed of obtaining the at least one target region and improving food sorting efficiency.
Optionally, in an exemplary embodiment, step S220, determining the impurity region to be picked in the at least one target region, includes: determining the impurity region to be picked in the at least one target region based on attribute information of each of the at least one target region; wherein the attribute information includes at least one of area, pitch, and brightness.
That is, the target region may be filtered or screened based on at least one of the area, the pitch, and the brightness of the target region, resulting in the impurity region to be picked. The distance between the target areas may refer to a distance between the target area and other target areas, and the distance may specifically be a distance between centroids. The brightness of the target area may be calculated based on the pixel values of the pixels of the target area in the food item image.
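For example, the brightness of a region could be computed as the mean gray value of its pixels; this is one plausible reading of the description above, not the patent's stated formula.

```python
import numpy as np

def region_brightness(gray_image: np.ndarray, region_mask: np.ndarray) -> float:
    """Mean gray value over the region's pixels, used as the region's brightness."""
    return float(gray_image[region_mask > 0].mean())
```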
According to this embodiment, requirements on attributes such as area, pitch, or brightness for the impurity region to be picked can be flexibly set according to the actual conditions of the food sorting scene, so that the selection criteria for impurity regions to be picked can be optimized accordingly; choosing suitable impurity regions to pick helps improve sorting efficiency.
Illustratively, determining the impurity region to be picked in the at least one target region based on attribute information of each of the at least one target region includes: in the case that at least one target area comprises N areas with a distance smaller than a preset distance threshold value, determining K areas with the largest area in the N areas, wherein N is an integer larger than or equal to 2, and K is a positive integer smaller than N; the impurity region to be picked is determined among the K regions.
That is, for a plurality of target regions that are close to one another, only the one or more regions with the largest area are retained. In a food picking scene, picking the largest one or several impurities in a cluster of closely spaced impurities often removes all impurities in the cluster, so implementing the above example can improve sorting efficiency.
Illustratively, determining the impurity region to be picked in the at least one target region based on attribute information of each of the at least one target region includes: and filtering out areas with areas smaller than a preset area threshold and/or areas with brightness smaller than a preset brightness threshold from at least one target area to obtain the impurity area to be picked.
That is, it is possible to filter out impurities whose area is too small, impurities whose brightness is too low (light in color), or impurities with both too small an area and too low a brightness. In practical applications, some small and/or light-colored impurities can be left out of consideration as required, thereby improving sorting efficiency.
Alternatively, the above-described example of determining the region to be picked from the attribute information of the target region may be implemented in combination. For example, after determining K regions with the largest area in the N regions with the interval smaller than the preset distance threshold, filtering out regions with the area smaller than the preset area threshold and/or the brightness smaller than the preset brightness threshold from the K regions, so as to obtain the impurity region to be picked. For another example, after filtering out the areas with areas smaller than the preset area threshold and/or brightness smaller than the preset brightness threshold in at least one target area, if the at least one target area further includes N areas with intervals smaller than the preset distance threshold, determining K areas with the largest areas in the N areas, and taking the K areas as K impurity areas to be picked.
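A combined sketch of the two post-processing rules above: area/brightness filtering followed by keeping the K largest regions in each cluster of closely spaced regions. All thresholds and the region data structure are hypothetical placeholders.

```python
import numpy as np

def select_pick_regions(regions, min_area=200, min_brightness=60,
                        min_distance=50, keep_k=1):
    """Filter segmented target regions into impurity regions to be picked.

    `regions` is a list of dicts with keys "centroid" (x, y), "area" and "brightness".
    """
    # Rule 1: drop regions that are too small or too light in color.
    kept = [r for r in regions
            if r["area"] >= min_area and r["brightness"] >= min_brightness]

    # Rule 2: among regions whose centroids are closer than min_distance,
    # keep only the keep_k regions with the largest area.
    picked, used = [], [False] * len(kept)
    for i, ri in enumerate(kept):
        if used[i]:
            continue
        cluster, used[i] = [ri], True
        for j in range(i + 1, len(kept)):
            if used[j]:
                continue
            dx = ri["centroid"][0] - kept[j]["centroid"][0]
            dy = ri["centroid"][1] - kept[j]["centroid"][1]
            if np.hypot(dx, dy) < min_distance:
                cluster.append(kept[j])
                used[j] = True
        cluster.sort(key=lambda r: r["area"], reverse=True)
        picked.extend(cluster[:keep_k])
    return picked
```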
Optionally, in some embodiments of the present disclosure, step S230, based on the centroid coordinates of the impurity region to be picked, obtains the pick position information of the impurity in the food, including: and obtaining the picking position information of the impurities in the food based on a predetermined coordinate conversion relation and the barycenter coordinates of the impurity region to be picked.
Fig. 4 is a flowchart illustrating a method of determining a coordinate transformation relationship in an embodiment of the present disclosure.
As shown in fig. 4, the determination method of the coordinate conversion relationship includes:
step S410, acquiring a plurality of images corresponding to a plurality of points respectively; each image in the plurality of images is acquired by the image acquisition device at the corresponding point, facing the calibration plate; the calibration plate is arranged on the food carrying tray;
step S420, detecting characteristic points on the calibration plate in each image to obtain a plurality of pixel coordinates of the characteristic points;
step S430, obtaining a perspective transformation matrix between the camera coordinate system of the image acquisition device and the world coordinate system, based on a plurality of world coordinates in the world coordinate system corresponding to the plurality of points and the plurality of pixel coordinates;
step S440, obtaining a coordinate conversion relation based on the perspective transformation matrix.
The coordinate conversion relationship is used to obtain picking position information that can be understood by the sorting execution device (corresponding to "hand") based on the pixel coordinates in the image acquired by the image acquisition device (corresponding to "eye"), and therefore, the determination manner of the coordinate conversion relationship may also be referred to as hand-eye calibration.
For example, the image acquisition device may be fixed to the sorting execution device during hand-eye calibration, i.e., kept at a constant relative position to the component used for picking impurities (e.g., the suction pen). As shown in fig. 5, the image acquisition device 510 is installed at the end of the sorting execution device 520. The sorting execution device 520 includes a suction pen 521, and the relative position of the suction pen 521 and the image acquisition device 510 is unchanged while the sorting execution device 520 moves. The camera coordinate system of the image acquisition device is denoted (x_c, y_c, z_c) in fig. 5. The world coordinate system takes the plane of the food carrying tray 540 as its xy-plane; the calibration plate 530 is arranged on the carrying tray, and the world coordinate system is denoted (x_w, y_w, z_w) in fig. 5.
In practice, as shown in fig. 6, the calibration plate 610 may be fixed to the food carrying tray 620, and the image acquisition device may be moved to a plurality of points (e.g., the 9 points above the 9 areas in fig. 6). At each point, the world coordinates of the image acquisition device are recorded, an image is acquired, and the pixel coordinates of the feature point 611 on the calibration plate are detected with a feature point detection algorithm. In this way, a plurality of world coordinates and a plurality of pixel coordinates are obtained for the plurality of points. The feature point 611 may be any corner point on the calibration plate.
Using the world coordinates and pixel coordinates obtained in the above process, the perspective transformation matrix between the camera coordinate system (x_c, y_c, z_c) and the world coordinate system (x_w, y_w, z_w) can be obtained, and the coordinate conversion relationship between pixel coordinates and world coordinates can then be derived from the perspective transformation matrix. In practical applications, the coordinate conversion relationship can be obtained by combining the perspective transformation matrix with the internal parameters and distortion parameters of the image acquisition device.
Specifically, the internal parameters and distortion parameters of the image acquisition device may be predetermined. Then, based on the perspective transformation matrix and the internal and distortion parameters of the camera, a conversion formula between pixel coordinates and world coordinates is established. The perspective transformation matrix in this formula is solved from the simultaneous equations formed by the conversion formula, the internal parameters, the distortion parameters, the world coordinates, and the pixel coordinates. After the perspective transformation matrix is obtained, the coordinate conversion relationship between pixel coordinates and world coordinates is obtained using the perspective transformation matrix, the internal parameters, and the distortion parameters. The coordinate conversion relationship can be expressed as the matrix

$$f_{world\_cam} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$

where the matrix $f_{world\_cam}$ represents the coordinate conversion relationship and $a_{ij}$ denotes the element in row $i$ and column $j$ of the matrix. It will be appreciated that with the matrix $f_{world\_cam}$, any coordinates in the food image (including the centroid coordinates of the impurity region to be picked) can be converted into corresponding world coordinates. Because the world coordinates of the image acquisition device are used as the world coordinates of the points during hand-eye calibration, applying $f_{world\_cam}$ to the centroid coordinates yields the displacement from the image acquisition device to the impurity; combining this with the relative position between the image acquisition device and the suction pen gives the displacement required for the suction pen to move to the impurity center. The calculation can be expressed as

$$\begin{pmatrix} x_o \\ y_o \end{pmatrix} = f_{world\_cam}\left(\begin{pmatrix} x_c \\ y_c \end{pmatrix}\right) - \begin{pmatrix} x_p - x_q \\ y_p - y_q \end{pmatrix}$$

where $(x_c, y_c)$ are the pixel coordinates of the impurity (i.e., the centroid coordinates); $(x_p, y_p)$ are the world coordinates of the suction pen and $(x_q, y_q)$ are the world coordinates of the image acquisition device, so that $(x_p - x_q, y_p - y_q)$ is the relative position between the image acquisition device and the suction pen; $f_{world\_cam}(\cdot)$ denotes applying the perspective transformation to the homogeneous pixel coordinates and normalizing; and $(x_o, y_o)$ is the displacement (i.e., the picking position information) required for the suction pen center to move to the impurity center.
Because the sorting execution device must move above the machine table during hand-eye calibration in the food sorting scene, the image acquisition device cannot be guaranteed to remain parallel to the machine table and the food carrying tray on it, so displacement in the z-axis direction occurs during movement. Common hand-eye calibration uses an affine transformation matrix to represent the transformation between the camera coordinate system and the world coordinate system, which yields lower calibration precision and affects the accuracy of the impurity picking position information. In the embodiment of the disclosure, a perspective transformation matrix is used to represent this transformation instead, which effectively eliminates errors caused by the z-axis offset, makes the resulting coordinate conversion relationship more robust, and improves the precision of the picking position information.
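As a hedged sketch of this hand-eye calibration with OpenCV: lens distortion is removed with the predetermined camera parameters, a 3x3 perspective (homography) matrix is fitted to the point correspondences, and a pixel centroid is then mapped to the pen displacement. The function names and the exact data layout are assumptions, not the patent's implementation.

```python
import cv2
import numpy as np

def fit_perspective_transform(pixel_pts: np.ndarray, world_pts: np.ndarray,
                              K: np.ndarray, dist: np.ndarray) -> np.ndarray:
    """Fit f_world_cam from N >= 4 calibration correspondences.

    pixel_pts: (N, 2) detected feature-point pixel coordinates, one per point.
    world_pts: (N, 2) recorded world coordinates of the image acquisition device.
    K, dist:   predetermined camera matrix and distortion coefficients.
    """
    # Remove lens distortion so the remaining pixel->world mapping is projective.
    undistorted = cv2.undistortPoints(
        pixel_pts.reshape(-1, 1, 2).astype(np.float32), K, dist, P=K
    ).reshape(-1, 2)
    # Least-squares fit of the 3x3 perspective (homography) matrix.
    f_world_cam, _ = cv2.findHomography(undistorted, np.asarray(world_pts, dtype=np.float32))
    return f_world_cam

def pen_displacement(centroid_xy, f_world_cam, cam_to_pen_xy):
    """(x_o, y_o): displacement for the suction pen to reach the impurity centroid."""
    p = f_world_cam @ np.array([centroid_xy[0], centroid_xy[1], 1.0])
    cam_to_impurity = p[:2] / p[2]  # normalize the homogeneous coordinates
    # Subtract the camera-to-pen offset (x_p - x_q, y_p - y_q).
    return cam_to_impurity - np.asarray(cam_to_pen_xy, dtype=float)
```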
From the above description, it can be understood that in the food sorting scene, the internal parameters and distortion parameters of the image acquisition device are determined first, and hand-eye calibration is then performed to obtain the coordinate conversion relationship. During actual sorting, the image acquisition device acquires food images on a carrying tray, the electronic device obtains the picking position information and the suction pen model based on the food images, and the sorting indication information is formed from them, so that the sorting execution device operates based on the sorting indication information. In practical applications, the operation of the image acquisition device and the sorting execution device may be controlled by PLC (Programmable Logic Controller) signals.
In practical applications, after one picking action is completed, a new food image may be acquired and segmented again to confirm whether the picking process for the current food is complete. Specifically, in another embodiment of the present disclosure, the method may further include: after the sorting execution device picks the impurity in the food based on the picking position information, acquiring a new food image and returning to the step of segmenting at least one target region in the food image using the image segmentation model; and confirming that the picking process of the food is completed when the number of impurity regions to be picked determined based on the at least one target region is less than or equal to a preset number threshold.
Fig. 7 shows a schematic diagram of a control flow of the camera (image pickup device) and the sort execution device according to this embodiment. As shown in fig. 7, according to this embodiment, the control flow of the camera and the sorting performing apparatus includes the following steps:
and step S710, photographing by a camera to obtain a food image.
Step S720, model prediction and post-processing. Here, model prediction refers to segmenting at least one target region in the food image using the image segmentation model; post-processing refers to filtering and screening the at least one target region to obtain the impurity regions to be picked.
Step S730, the PLC sends the picking position information corresponding to the processing result of step S720 to the sorting execution device, and the sorting execution device performs the picking.
Step S740, photographing by the camera to obtain a new food image.
Step S750, model prediction and post-processing.
Step S760, determining, based on the processing result of step S750, whether the result satisfies the requirement. If not, the PLC sends the picking position information corresponding to the processing result of step S750 to the sorting execution device, and the flow returns to step S730. If the requirement is satisfied, step S770 is performed.
Step S770, camera movement. Specifically, the camera moves to the next carrying tray, and the flow returns to step S710.
It can be seen that according to the above embodiment, the impurity picking can be repeatedly performed on the food through multiple iterative processes until the number of impurity regions to be picked in the food meets the requirement, so that the accuracy and effect of the food sorting can be ensured through automatic control.
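A high-level sketch of the loop in fig. 7; the camera, model, and PLC interfaces are passed in as callables because their real APIs are not part of the disclosure.

```python
def sort_one_tray(capture_image, segment_and_postprocess, send_pick_to_plc,
                  max_remaining: int = 0) -> None:
    """Photograph -> predict/post-process -> pick, repeated until the tray passes."""
    while True:
        image = capture_image()                   # steps S710 / S740
        regions = segment_and_postprocess(image)  # steps S720 / S750
        if len(regions) <= max_remaining:         # step S760: requirement satisfied?
            return                                # picking of this tray is complete
        for region in regions:                    # step S730: trigger the picking
            send_pick_to_plc(region)
```

After the function returns, the camera would be moved to the next carrying tray (step S770) and the loop restarted.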
According to an embodiment of the present disclosure, the present disclosure also provides an image processing apparatus for food sorting. Fig. 8 shows a schematic block diagram of an image processing apparatus for food sorting provided in an embodiment of the present disclosure. As shown in fig. 8, the image processing apparatus for food sorting may include:
an image segmentation module 810 for segmenting at least one target region in the food image using the image segmentation model;
a region determination module 820 for determining an impurity region to be picked in the at least one target region;
a position determining module 830, configured to obtain picking position information of the impurity in the food based on the centroid coordinates of the impurity region to be picked; wherein the picking position information is used for instructing the sorting execution device to pick the impurity.
In some embodiments of the present disclosure, as shown in fig. 9, the image processing apparatus for food sorting further includes:
A model determining module 910, configured to determine a model of a suction pen for picking the impurity based on an area of the impurity region to be picked;
an information sending module 920, configured to send sorting indication information to the sorting execution device; wherein the sorting indication information is used for indicating the picking position information and the suction pen model.
In some embodiments of the present disclosure, as shown in fig. 10, the image segmentation module 810 includes:
a slicing unit 1010 for slicing the food item image into M image slices; wherein M is an integer greater than or equal to 2;
a model processing unit 1020, configured to input each of the M image slices to the image segmentation model, to obtain a target region that is output by the image segmentation model for each image slice, respectively;
and a segmentation result summarizing unit 1030, configured to obtain the at least one target region based on the target regions respectively output by the image segmentation model for each image slice.
In some embodiments of the present disclosure, the area determining module 820 is specifically configured to:
determining an impurity region to be picked in the at least one target region based on attribute information of each of the at least one target region; wherein the attribute information includes at least one of an area, a pitch, and a brightness.
In some embodiments of the present disclosure, as shown in fig. 11, the area determining module 820 includes:
a region screening unit 1110, configured to determine, in a case where the at least one target region includes N regions with a pitch smaller than a preset distance threshold, K regions with a largest area among the N regions; wherein N is an integer greater than or equal to 2, and K is a positive integer less than N;
and a region determining unit 1120 configured to determine the impurity region to be picked among the K regions.
In some embodiments of the present disclosure, as shown in fig. 12, the area determining module 820 includes:
the area filtering unit 1210 is configured to filter, from the at least one target area, an area with an area smaller than a preset area threshold and/or an area with a brightness smaller than a preset brightness threshold, so as to obtain the impurity area to be picked.
In some embodiments of the present disclosure, the location determination module 830 is specifically configured to:
acquiring picking position information of impurities in the food based on a predetermined coordinate conversion relation and centroid coordinates of the impurity region to be picked;
wherein, as shown in fig. 13, the device further comprises:
a calibration image acquisition module 1310, configured to acquire a plurality of images corresponding to the plurality of points, respectively; each image in the plurality of images is captured by the image acquisition device at the corresponding point, facing the calibration plate; the calibration plate is arranged on the food carrying tray;
A pixel coordinate determining module 1320, configured to detect a feature point on the calibration board in each image, so as to obtain a plurality of pixel coordinates of the feature point;
a transformation matrix determining module 1330, configured to obtain a perspective transformation matrix between the camera coordinate system of the image acquisition device and the world coordinate system, based on a plurality of world coordinates in the world coordinate system corresponding to the plurality of points and the plurality of pixel coordinates;
the conversion relation determining module 1340 is configured to obtain the coordinate conversion relation based on the perspective transformation matrix.
In some embodiments of the present disclosure, as shown in fig. 14, further comprising:
an iteration module 1410, configured to acquire a new food image after the sorting execution device picks the impurities in the food based on the picking position information, and return to the step of segmenting at least one target region in the food image using the image segmentation model;
and an end confirmation module 1420, configured to confirm that the picking process of the food is completed, in a case that the number of the impurity regions to be picked determined based on the at least one target region is less than or equal to a preset number threshold.
Descriptions of specific functions and examples of each module, unit, and subunit of the apparatus in the embodiments of the present disclosure may refer to the related descriptions of the corresponding steps in the foregoing method embodiments, which are not repeated here.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 15 shows a schematic block diagram of an electronic device that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 15, the device 1500 includes a computing unit 1501, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 1502 or a computer program loaded from a storage unit 1508 into a random access memory (RAM) 1503. In the RAM 1503, various programs and data required for the operation of the device 1500 can also be stored. The computing unit 1501, the ROM 1502, and the RAM 1503 are connected to each other through a bus 1504. An input/output (I/O) interface 1505 is also connected to the bus 1504.
Various components in device 1500 are connected to I/O interface 1505, including: an input unit 1506 such as a keyboard, mouse, etc.; an output unit 1507 such as various types of displays, speakers, and the like; a storage unit 1508 such as a magnetic disk, an optical disk, or the like; and a communication unit 1509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 1509 allows the device 1500 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks.
The computing unit 1501 may be any of various general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The computing unit 1501 performs the respective methods and processes described above, for example, the image processing method for food sorting. For example, in some embodiments, the image processing method for food sorting may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1500 via the ROM 1502 and/or the communication unit 1509. When the computer program is loaded into the RAM 1503 and executed by the computing unit 1501, one or more steps of the image processing method for food sorting described above may be performed. Alternatively, in other embodiments, the computing unit 1501 may be configured to perform the image processing method for food sorting by any other suitable means (e.g., by means of firmware).
In accordance with an embodiment of the present disclosure, the present disclosure also provides a food sorting system. Fig. 16 is a schematic block diagram of a food sorting system according to an embodiment of the present disclosure. As shown in fig. 16, the food sorting system includes an image acquisition device 1610, a sorting execution device 1630, and the electronic device 1620 of the above-described embodiments of the present disclosure. The image acquisition device 1610 is configured to acquire food images and send them to the electronic device 1620; the sorting execution device 1630 is configured to receive sorting indication information from the electronic device 1620 and sort impurities in the food based on the sorting indication information.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that the various forms of flow shown above may be used, with steps reordered, added, or deleted. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions disclosed herein can be achieved; no limitation is imposed here.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations, and substitutions are possible according to design requirements and other factors. Any modification, equivalent substitution, improvement, or the like made within the principles of the present disclosure shall be included within its scope.

Claims (20)

1. An image processing method for food sorting, comprising:
segmenting at least one target region in a food image by using an image segmentation model;
determining an impurity region to be picked in the at least one target region; and
obtaining picking position information of an impurity in the food based on centroid coordinates of the impurity region to be picked; wherein the picking position information is used for instructing a sorting execution device to pick the impurity.
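For illustration only, here is a minimal Python sketch of the centroid step in claim 1, assuming OpenCV-style binary masks (one per segmented target region); the `pick_positions` helper and the mask format are assumptions, not part of the claimed method:

```python
import cv2
import numpy as np

def pick_positions(masks):
    """Return centroid pixel coordinates, one per impurity region mask."""
    positions = []
    for mask in masks:
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] == 0:
            continue  # empty mask: nothing to pick
        cx = m["m10"] / m["m00"]  # centroid x = first moment / area
        cy = m["m01"] / m["m00"]  # centroid y
        positions.append((cx, cy))
    return positions
```

In practice these pixel centroids would still be mapped into the sorting execution device's coordinate frame, as claim 7 describes.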
2. The method of claim 1, further comprising:
determining a model of a suction pen for picking the impurity based on an area of the impurity region to be picked; and
sending sorting indication information to the sorting execution device, wherein the sorting indication information is used for indicating the picking position information and the model of the suction pen.
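A hedged sketch of the suction-pen selection in claim 2; the claim fixes no concrete thresholds, so the area bands and model names below are purely illustrative assumptions:

```python
def suction_pen_model(area_px: int) -> str:
    """Choose a suction-pen model from the impurity region's pixel area;
    larger debris plausibly needs a wider nozzle (bands are hypothetical)."""
    if area_px < 200:
        return "pen-small"
    if area_px < 1000:
        return "pen-medium"
    return "pen-large"
```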
3. The method of claim 1 or 2, wherein the segmenting at least one target region in the food image by using the image segmentation model comprises:
slicing the food image into M image slices, wherein M is an integer greater than or equal to 2;
inputting each of the M image slices into the image segmentation model to obtain a target region output by the image segmentation model for each image slice; and
obtaining the at least one target region according to the target regions output by the image segmentation model for the respective image slices.
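The slicing strategy in claim 3 might be implemented along these lines, assuming a rows-by-cols grid (M = rows * cols) and a `segment` callable standing in for the image segmentation model, returning binary masks in slice coordinates:

```python
import numpy as np

def segment_by_slices(image, rows, cols, segment):
    """Run the segmentation model per slice and paste each mask back at
    its slice offset, aggregating full-image target regions."""
    h, w = image.shape[:2]
    sh, sw = h // rows, w // cols  # remainder pixels at the edges are ignored here
    regions = []
    for r in range(rows):
        for c in range(cols):
            y0, x0 = r * sh, c * sw
            for mask in segment(image[y0:y0 + sh, x0:x0 + sw]):
                full = np.zeros((h, w), dtype=np.uint8)
                full[y0:y0 + sh, x0:x0 + sw] = mask  # paste back at the slice offset
                regions.append(full)
    return regions
```

Slicing keeps each model input small, which suits high-resolution images of a food tray; overlapping slices (not shown) would avoid cutting an impurity in two.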
4. The method of any of claims 1-3, wherein the determining an impurity region to be picked in the at least one target region comprises:
determining the impurity region to be picked in the at least one target region based on attribute information of each of the at least one target region, wherein the attribute information includes at least one of an area, a spacing, and a brightness.
5. The method of claim 4, wherein the determining the impurity region to be picked in the at least one target region based on the attribute information of each of the at least one target region comprises:
in a case where the at least one target region includes N regions whose spacing is smaller than a preset distance threshold, determining K regions with the largest areas among the N regions, wherein N is an integer greater than or equal to 2 and K is a positive integer less than N; and
determining the impurity region to be picked among the K regions.
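One plausible reading of claim 5 in code, assuming each candidate region is a dict carrying centroid fields `cx`, `cy` and an `area`; the field names and the pairwise-distance test are assumptions:

```python
import numpy as np

def keep_k_largest_close(regions, dist_thresh, k):
    """Among regions whose pairwise centroid spacing is below dist_thresh,
    keep only the K with the largest area; isolated regions pass through."""
    close = set()
    for i in range(len(regions)):
        for j in range(i + 1, len(regions)):
            d = np.hypot(regions[i]["cx"] - regions[j]["cx"],
                         regions[i]["cy"] - regions[j]["cy"])
            if d < dist_thresh:
                close.update((i, j))
    ranked = sorted(close, key=lambda i: regions[i]["area"], reverse=True)
    dropped = set(ranked[k:])  # everything past the K largest is discarded
    return [r for i, r in enumerate(regions) if i not in dropped]
```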
6. The method of claim 4, wherein the determining the impurity region to be picked in the at least one target region based on the attribute information of each of the at least one target region comprises:
filtering out, from the at least one target region, regions whose area is smaller than a preset area threshold and/or whose brightness is smaller than a preset brightness threshold, to obtain the impurity region to be picked.
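Claim 6's filter could look like the following sketch, assuming binary masks over a grayscale food image; both thresholds are hypothetical values:

```python
import numpy as np

def filter_regions(masks, gray, min_area=50, min_brightness=40):
    """Drop regions whose area or mean brightness is below its threshold;
    the survivors are treated as impurity regions to be picked."""
    kept = []
    for mask in masks:
        area = int(np.count_nonzero(mask))
        if area < min_area:
            continue  # too small to be a pickable impurity
        if float(gray[mask > 0].mean()) < min_brightness:
            continue  # too dark, likely shadow or background
        kept.append(mask)
    return kept
```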
7. The method of any of claims 1-6, wherein the obtaining picking position information of the impurity in the food based on the centroid coordinates of the impurity region to be picked comprises:
obtaining the picking position information of the impurity in the food based on a predetermined coordinate conversion relation and the centroid coordinates of the impurity region to be picked;
wherein the coordinate conversion relation is determined by:
acquiring a plurality of images respectively corresponding to a plurality of point positions, wherein each of the plurality of images is acquired by an image acquisition device at the corresponding point position facing a calibration plate, and the calibration plate is arranged on a food carrying tray;
detecting feature points on the calibration plate in each image to obtain a plurality of pixel coordinates of the feature points;
obtaining a perspective transformation matrix between a camera coordinate system of the image acquisition device and a world coordinate system based on the plurality of pixel coordinates and a plurality of world coordinates, in the world coordinate system, corresponding to the plurality of point positions; and
obtaining the coordinate conversion relation based on the perspective transformation matrix.
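A minimal calibration sketch consistent with claim 7, assuming a chessboard-style calibration plate laid on the food carrying tray; the board size, square pitch, and OpenCV homography fit are assumptions standing in for the claimed perspective transformation matrix:

```python
import cv2
import numpy as np

def fit_pixel_to_world(image, board_size=(7, 5), square_mm=10.0):
    """Detect the plate's inner corners and fit a homography mapping
    pixel coordinates to tray (world) coordinates in millimetres."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if not found:
        raise RuntimeError("calibration plate not detected")
    pixel_pts = corners.reshape(-1, 2).astype(np.float32)
    # Known world coordinates of the inner corners, row-major like OpenCV's ordering.
    xs, ys = np.meshgrid(np.arange(board_size[0]), np.arange(board_size[1]))
    world_pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32) * square_mm
    H, _ = cv2.findHomography(pixel_pts, world_pts)
    return H

def pixel_to_world(H, cx, cy):
    """Apply the fitted coordinate conversion relation to a centroid."""
    return cv2.perspectiveTransform(np.float32([[[cx, cy]]]), H)[0, 0]
```

A homography suffices when the tray surface is planar; a full camera calibration would be needed if impurity height mattered.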
8. The method of any of claims 1-7, further comprising:
after the sorting execution device picks the impurity in the food based on the picking position information, acquiring a new food image and returning to the step of segmenting at least one target region in the food image by using the image segmentation model; and
confirming that the picking process for the food is completed in a case where the number of impurity regions to be picked determined based on the at least one target region is less than or equal to a preset number threshold.
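The closed loop of claim 8 reduces to a re-image-and-re-segment cycle; `capture`, `segment_regions`, `select_impurities`, and `execute_pick` below are hypothetical stand-ins for the camera, model, screening, and sorting execution device interfaces:

```python
def pick_until_clean(capture, segment_regions, select_impurities,
                     execute_pick, max_remaining=0):
    """Repeat pick passes until the count of impurity regions drops to
    the preset number threshold."""
    while True:
        image = capture()                        # acquire a (new) food image
        regions = segment_regions(image)         # claim 1: segmentation
        impurities = select_impurities(regions)  # claims 4-6: screening
        if len(impurities) <= max_remaining:
            return                               # picking process completed
        for region in impurities:
            execute_pick(region)                 # pick at the region centroid
```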
9. An image processing apparatus for food sorting, comprising:
an image segmentation module for segmenting at least one target region in a food image by using an image segmentation model;
a region determination module for determining an impurity region to be picked in the at least one target region;
a position determining module for obtaining picking position information of an impurity in the food based on centroid coordinates of the impurity region to be picked, wherein the picking position information is used for instructing a sorting execution device to pick the impurity.
10. The apparatus of claim 9, further comprising:
a model determining module for determining a model of a suction pen for picking the impurity based on an area of the impurity region to be picked; and
an information sending module for sending sorting indication information to the sorting execution device, wherein the sorting indication information is used for indicating the picking position information and the model of the suction pen.
11. The apparatus of claim 9 or 10, wherein the image segmentation module comprises:
a slicing unit for slicing the food image into M image slices; wherein M is an integer greater than or equal to 2;
a model processing unit for inputting each of the M image slices into the image segmentation model to obtain a target region output by the image segmentation model for each image slice; and
a segmentation result summarizing unit for obtaining the at least one target region based on the target regions output by the image segmentation model for the respective image slices.
12. The apparatus according to any one of claims 9-11, wherein the region determination module is specifically configured to:
determine the impurity region to be picked in the at least one target region based on attribute information of each of the at least one target region, wherein the attribute information includes at least one of an area, a spacing, and a brightness.
13. The apparatus of claim 12, wherein the region determination module comprises:
a region screening unit for determining, in a case where the at least one target region includes N regions whose spacing is smaller than a preset distance threshold, K regions with the largest areas among the N regions, wherein N is an integer greater than or equal to 2 and K is a positive integer less than N; and
a region determination unit for determining the impurity region to be picked among the K regions.
14. The apparatus of claim 12, wherein the region determination module comprises:
and the region filtering unit is used for filtering out regions with areas smaller than a preset area threshold and/or brightness smaller than a preset brightness threshold from the at least one target region to obtain the impurity region to be picked.
15. The apparatus according to any one of claims 9-14, wherein the position determination module is specifically configured to:
obtain the picking position information of the impurity in the food based on a predetermined coordinate conversion relation and the centroid coordinates of the impurity region to be picked;
wherein the apparatus further comprises:
a calibration image acquisition module for acquiring a plurality of images respectively corresponding to a plurality of point positions, wherein each of the plurality of images is captured by an image acquisition device at the corresponding point position facing the calibration plate, and the calibration plate is arranged on the food carrying tray;
a pixel coordinate determining module for detecting feature points on the calibration plate in each image to obtain a plurality of pixel coordinates of the feature points;
a transformation matrix determining module for obtaining a perspective transformation matrix between a camera coordinate system of the image acquisition device and a world coordinate system based on the plurality of pixel coordinates and a plurality of world coordinates, in the world coordinate system, corresponding to the plurality of point positions; and
a conversion relation determining module for obtaining the coordinate conversion relation based on the perspective transformation matrix.
16. The apparatus of any of claims 9-15, further comprising:
an iteration module for acquiring a new food image after the sorting execution device picks the impurities in the food based on the picking position information, and returning to the step of segmenting at least one target region in the food image by using the image segmentation model; and
an end confirmation module for confirming that the picking process for the food is completed in a case where the number of impurity regions to be picked determined based on the at least one target region is less than or equal to a preset number threshold.
17. An electronic device, comprising:
At least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-8.
19. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-8.
20. A food sorting system, comprising an image acquisition device, a sorting execution device, and the electronic device of claim 17;
wherein the image acquisition device is configured to acquire a food image and send the food image to the electronic device; and the sorting execution device is configured to receive sorting indication information from the electronic device and pick impurities in the food based on the sorting indication information.
CN202310234500.0A 2023-03-13 2023-03-13 Image processing method, device, equipment and storage medium for food sorting Pending CN116309835A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310234500.0A CN116309835A (en) 2023-03-13 2023-03-13 Image processing method, device, equipment and storage medium for food sorting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310234500.0A CN116309835A (en) 2023-03-13 2023-03-13 Image processing method, device, equipment and storage medium for food sorting

Publications (1)

Publication Number Publication Date
CN116309835A 2023-06-23

Family

ID=86781082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310234500.0A Pending CN116309835A (en) 2023-03-13 2023-03-13 Image processing method, device, equipment and storage medium for food sorting

Country Status (1)

Country Link
CN (1) CN116309835A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination