CN112241747A - Object sorting method, device, sorting equipment and storage medium


Info

Publication number
CN112241747A
CN112241747A
Authority
CN
China
Prior art keywords
image
mask
sorted
sample image
sorting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910638417.3A
Other languages
Chinese (zh)
Other versions
CN112241747B (en)
Inventor
林雨辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SF Technology Co Ltd
Original Assignee
SF Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SF Technology Co Ltd filed Critical SF Technology Co Ltd
Priority to CN201910638417.3A
Publication of CN112241747A
Application granted; publication of CN112241747B
Current legal status: Active


Classifications

    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/214 Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06Q 10/08 Logistics, e.g. warehousing, loading or distribution; inventory or stock management
    • G06V 10/20 Image preprocessing


Abstract

The embodiment of the application discloses an object sorting method, an object sorting device, sorting equipment and a storage medium, which can: acquire an object image of an object to be sorted; detect the object image to obtain an object mask and an object category of the object to be sorted; determine the gravity center position of the object to be sorted according to the object mask; and sort the object to be sorted according to the gravity center position and the object category. By detecting the gravity center position and the object category of the object to be sorted in the object image, and automatically sorting the object based on these, the scheme improves the precision and efficiency of object sorting compared with manual sorting.

Description

Object sorting method, device, sorting equipment and storage medium
Technical Field
The application relates to the technical field of object management, in particular to an object sorting method, an object sorting device, object sorting equipment and a storage medium.
Background
At present, many cities are calling for garbage classification; however, existing garbage classification mainly relies on manual sorting. In the prior art, dedicated personnel sort the garbage uniformly. On the one hand, this consumes a large amount of manpower and time, so the sorting efficiency is low; on the other hand, manual sorting is prone to errors caused by factors such as fatigue or reduced attention, so the sorting accuracy is low.
Disclosure of Invention
The embodiment of the application provides an object sorting method, an object sorting device, sorting equipment and a storage medium, and can improve the precision and efficiency of sorting objects.
In a first aspect, an embodiment of the present application provides an object sorting method, including:
acquiring an object image of an object to be sorted;
detecting the object image to obtain an object mask and an object category of the object to be sorted;
determining the gravity center position of the object to be sorted according to the object mask of the object to be sorted;
and sorting the objects to be sorted according to the gravity center position and the object types.
In some embodiments, the detecting the object image to obtain the object mask and the object class of the object to be sorted includes:
carrying out size normalization on the object image to obtain an object image with the size normalized;
carrying out pixel value normalization on the object image with the normalized size to obtain a target object image;
and detecting the target object image through the trained detection model to obtain an object mask and an object category of the object to be sorted.
In some embodiments, the performing size normalization on the object image to obtain a size-normalized object image includes:
if the long edge of the object image is larger than a preset length value, reducing the object image to enable the long edge to be the preset length value, and filling the short edge of the object image with a preset numerical value to obtain an object image with a normalized size; or,
and if the long edge of the object image is smaller than the preset length value, amplifying the object image so as to enable the long edge to be the preset length value, and filling the short edge of the object image with a preset numerical value to obtain the object image with the normalized size.
In some embodiments, the detecting the target object image through the trained detection model to obtain the object mask and the object class of the object to be sorted includes:
calculating a candidate object mask, a candidate object boundary box and a candidate object category of the object to be sorted based on the target object image through the trained detection model;
screening out a predicted object mask, a predicted object boundary box and a predicted object category from the candidate object mask, the candidate object boundary box and the candidate object category through a non-maximum suppression algorithm;
and determining the object mask and the object category of the object to be sorted according to the predicted object mask, the predicted object boundary box and the predicted object category.
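The non-maximum suppression screening in the steps above can be illustrated with a minimal greedy IoU filter. This is a generic sketch, not the patent's implementation; the helper name `nms` and the 0.5 IoU threshold are assumptions:

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5) -> list:
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes.
    Returns indices of the kept candidates, highest score first."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # intersection of the current best box with the remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        # drop candidates that overlap the kept box too much
        order = rest[iou <= iou_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)  # the second box overlaps the first heavily and is suppressed
```

In the described pipeline the same index filter would be applied jointly to the candidate masks, bounding boxes, and categories, so all three stay aligned.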
In some embodiments, the determining the object mask and the object class of the object to be sorted according to the predicted object mask, the predicted object bounding box, and the predicted object class includes:
scaling the prediction bounding box to the size proportion of the object image to obtain a bounding box of the prediction bounding box on the object image;
obtaining a mask image according to the mask prototype and the mask coefficient in the predicted object mask;
zooming the mask image to the size ratio of the object image through interpolation to obtain an object mask arranged in the boundary frame;
and determining the object type of the object to be sorted according to the predicted object type.
In some embodiments, before the detection of the target object image by the trained detection model obtains the object mask and the object class of the object to be sorted, the method further includes:
acquiring a sample image containing an object;
adjusting the chromaticity, the brightness and the saturation of the sample image to obtain an adjusted sample image;
randomly zooming the adjusted sample image to obtain a zoomed sample image;
cutting the zoomed sample image to obtain a cut sample image;
generating a random number, and flipping the cut sample image according to the random number to obtain a flipped sample image;
and training a detection model according to the flipped sample image to obtain the trained detection model.
In some embodiments, the training of a detection model according to the flipped sample image to obtain the trained detection model includes:
carrying out size normalization on the flipped sample image to obtain a sample image with the size normalized;
carrying out pixel value normalization on the sample image with the normalized size to obtain a target sample image;
intercepting a sample mask image of an object from the sample image, and preprocessing the sample mask image to obtain a preprocessed mask image;
acquiring the category and the bounding box of the object in the sample image;
and training a detection model according to the preprocessed mask image, the class of the object and the bounding box to obtain the trained detection model.
In some embodiments, the cropping the scaled sample image, and obtaining the cropped sample image includes:
randomly cutting the zoomed sample image to obtain a candidate sample image after cutting;
acquiring the central position of a bounding box of an object in the candidate sample image;
and screening out an image which is larger than a preset value in area and contains the center position of the boundary frame of the object from the candidate sample image to obtain a cut sample image.
In some embodiments, the determining the position of the center of gravity of the object to be sorted according to the object mask of the object to be sorted includes:
acquiring a mapping relation between an object in the object image and an object in a three-dimensional space;
and determining the gravity center position of the object to be sorted in the three-dimensional space according to the mapping relation.
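Before mapping into three-dimensional space, the gravity center position in the image plane is commonly taken as the centroid of the mask's foreground pixels. A minimal sketch, assuming uniform density; `mask_centroid` is an illustrative name, not taken from the patent:

```python
import numpy as np

def mask_centroid(mask: np.ndarray) -> tuple:
    """Gravity center of a binary object mask: the mean of the
    foreground pixel coordinates, returned as (row, col)."""
    ys, xs = np.nonzero(mask)
    return float(ys.mean()), float(xs.mean())

mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 10:30] = True          # a 20 x 20 rectangular object
cy, cx = mask_centroid(mask)
```

The resulting image-plane point would then be projected into the three-dimensional workspace via the camera-to-world mapping mentioned above.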
In some embodiments, the sorting the objects to be sorted according to the gravity center position and the object class includes:
and grabbing the object to be sorted based on the gravity center position of the object to be sorted in the three-dimensional space, and sorting the object to be sorted to an area corresponding to the object category.
In a second aspect, an embodiment of the present application further provides an object sorting apparatus, including:
the acquisition module is used for acquiring an object image of an object to be sorted;
the detection module is used for detecting the object image to obtain an object mask and an object type of the object to be sorted;
the determining module is used for determining the gravity center position of the object to be sorted according to the object mask of the object to be sorted;
and the sorting module is used for sorting the objects to be sorted according to the gravity center position and the object types.
In some embodiments, the detection module comprises:
the first normalization unit is used for carrying out size normalization on the object image to obtain the object image with the normalized size;
the second normalization unit is used for carrying out pixel value normalization on the object image with the normalized size to obtain a target object image;
and the detection unit is used for detecting the target object image through the trained detection model to obtain an object mask and an object category of the object to be sorted.
In some embodiments, the first normalization unit is specifically configured to: if the long edge of the object image is larger than a preset length value, reducing the object image to enable the long edge to be the preset length value, and filling the short edge of the object image with a preset numerical value to obtain an object image with a normalized size; or if the long edge of the object image is smaller than the preset length value, amplifying the object image to enable the long edge to be the preset length value, and filling the short edge of the object image with a preset numerical value to obtain the object image with the normalized size.
In some embodiments, the detection unit comprises:
the calculating subunit is used for calculating a candidate object mask, a candidate object boundary box and a candidate object category of the object to be sorted based on the target object image through the trained detection model;
the screening subunit is used for screening out a predicted object mask, a predicted object boundary box and a predicted object category from the candidate object mask, the candidate object boundary box and the candidate object category through a non-maximum suppression algorithm;
and the determining subunit is used for determining the object mask and the object category of the object to be sorted according to the predicted object mask, the predicted object boundary box and the predicted object category.
In some embodiments, the determining subunit is specifically configured to:
scaling the prediction bounding box to the size proportion of the object image to obtain a bounding box of the prediction bounding box on the object image;
obtaining a mask image according to the mask prototype and the mask coefficient in the predicted object mask;
zooming the mask image to the size ratio of the object image through interpolation to obtain an object mask arranged in the boundary frame;
and determining the object type of the object to be sorted according to the predicted object type.
In some embodiments, the object sorting apparatus further comprises:
the sample image acquisition module is used for acquiring a sample image containing an object;
the adjusting module is used for adjusting the chromaticity, the brightness and the saturation of the sample image to obtain an adjusted sample image;
the scaling module is used for randomly scaling the adjusted sample image to obtain a scaled sample image;
the cutting module is used for cutting the zoomed sample image to obtain a cut sample image;
the flipping module is used for generating a random number and flipping the cut sample image according to the random number to obtain a flipped sample image;
and the training module is used for training the detection model according to the flipped sample image to obtain the trained detection model.
In some embodiments, the training module comprises:
the third normalization unit is used for carrying out size normalization on the flipped sample image to obtain a sample image with a normalized size;
the fourth normalization unit is used for carrying out pixel value normalization on the sample image with the normalized size to obtain a target sample image;
the processing unit is used for intercepting a sample mask image of an object from the sample image and preprocessing the sample mask image to obtain a preprocessed mask image;
the acquisition unit is used for acquiring the category and the bounding box of the object in the sample image;
and the training unit is used for training the detection model according to the preprocessed mask image, the class of the object and the boundary box to obtain the trained detection model.
In some embodiments, the cropping module is specifically configured to:
randomly cutting the zoomed sample image to obtain a candidate sample image after cutting;
acquiring the central position of a bounding box of an object in the candidate sample image;
and screening out an image which is larger than a preset value in area and contains the center position of the boundary frame of the object from the candidate sample image to obtain a cut sample image.
In some embodiments, the determining module is specifically configured to:
acquiring a mapping relation between an object in the object image and an object in a three-dimensional space;
and determining the gravity center position of the object to be sorted in the three-dimensional space according to the mapping relation.
In some embodiments, the sorting module is specifically configured to:
and grabbing the object to be sorted based on the gravity center position of the object to be sorted in the three-dimensional space, and sorting the object to be sorted to an area corresponding to the object category.
In a third aspect, an embodiment of the present application further provides a sorting device, which includes a memory and a processor, where the memory stores a computer program, and the processor executes any one of the object sorting methods provided in the embodiments of the present application when calling the computer program in the memory.
In a fourth aspect, the present application further provides a storage medium for storing a computer program, where the computer program is suitable for being loaded by a processor to execute any one of the object sorting methods provided in the embodiments of the present application.
The method and the device for sorting objects can acquire an object image of an object to be sorted, detect the object image to obtain an object mask and an object category of the object to be sorted, determine the gravity center position of the object to be sorted according to the object mask, and sort the object to be sorted according to the gravity center position and the object category. By detecting the gravity center position and the object category of the object to be sorted in the object image, and automatically sorting the object based on these, the scheme improves the precision and efficiency of object sorting compared with manual sorting.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 is a schematic flow chart of an object sorting method according to an embodiment of the present application;
fig. 2 is another schematic flow chart of an object sorting method provided in an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a detection model provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of an object sorting apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a sorting device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a schematic flow chart illustrating an object sorting method according to an embodiment of the present application. The object sorting method may be executed by the object sorting device provided in the embodiment of the present application, or by sorting equipment integrated with the object sorting device. The object sorting device may be implemented in hardware or software, and the sorting equipment may be flexibly configured according to actual needs; for example, it may include a camera and a mechanical arm. The object sorting method may include:
s101, acquiring an object image of an object to be sorted.
The objects to be sorted can be flexibly set according to actual needs, and specific contents are not limited here. For example, the object to be sorted may be garbage, which may be sorted into recyclable garbage, which may include beverage bottles (mineral water bottles, fruit juice bottles, milk tea bottles, etc.), waste paper (newspapers, periodicals, books, various kinds of wrapping paper, office paper, advertising paper, paper boxes, etc.), textiles (clothes, bed sheets, curtains, tablecloths, towels, etc.), or metal products, etc., and non-recyclable garbage, which may include kitchen garbage (leftovers, meals, fruit peels, etc.), and harmful garbage (batteries, paints, waste lamps, waste silver thermometers, overdue medicines, etc.).
For another example, the objects to be sorted may be fruits, and the mixed fruits may be sorted into fruits of the same variety and placed in the same area; or, the same variety of fruits can be sorted into different grades according to size, color and the like, and the fruits of the same grade are placed in the same area of the corresponding grade.
For another example, the objects to be sorted can be medicinal materials, and mixed medicinal materials of different varieties can be sorted into medicinal materials of the same variety and placed in the same area; or, the same variety of medicinal materials can be sorted into different grades according to size, shape, weight, color and the like, and the medicinal materials of the same grade are placed in the same area of the corresponding grade; or sorting out weeds, soil, rotten leaves, damaged or rotten medicinal materials and the like in the medicinal materials.
In some embodiments, the object sorting apparatus may collect an object image of an object to be sorted in the area to be sorted through a preset camera or video camera, so as to perform visual recognition on the object image and sort the object based on the recognition result.
S102, detecting the object image to obtain an object mask and an object type of the object to be sorted.
In some embodiments, detecting the object image and obtaining the object mask and the object category of the object to be sorted may include: carrying out size normalization on the object image to obtain the object image with the normalized size; carrying out pixel value normalization on the object image with the normalized size to obtain a target object image; and detecting the target object image through the trained detection model to obtain an object mask and an object category of the object to be sorted.
After the object image is acquired, it can be detected. In order to improve detection accuracy, reliability and efficiency, the object sorting device can first preprocess the object image, for example by size normalization and pixel value normalization.
Specifically, size normalization may be performed on the object image first to obtain a size-normalized object image. In some embodiments, this may include: if the long edge of the object image is larger than a preset length value, reducing the object image so that the long edge equals the preset length value, and filling the short edge of the object image with a preset numerical value so that the short edge also reaches the preset length value, thereby obtaining the size-normalized object image; or, if the long edge of the object image is smaller than the preset length value, enlarging the object image so that the long edge equals the preset length value, and filling the short edge with the preset numerical value so that the short edge also reaches the preset length value, thereby obtaining the size-normalized object image. The preset length value and the preset numerical value can be flexibly set according to actual needs; for example, the preset length value may be 550 or 600 (in pixels), and the preset numerical value may be 0 or 1.
For example, if the long side (i.e., the longest side) of the object image is larger than 550, the object image is reduced so that the long side equals 550, and the short side is filled with the value 0 until its length is 550, so that a size-normalized object image can be obtained. If the long side of the object image is smaller than 550, the object image is enlarged so that the long side equals 550, and the short side is filled with 0 until its length is 550. In both cases the length and width of the size-normalized object image are 550 pixels, i.e., the size-normalized object image is 550 × 550.
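The resize-and-pad step above can be sketched as follows. This is a minimal numpy-only illustration; the patent does not name a resampling method, so nearest-neighbour resampling is assumed, and `normalize_size` is a hypothetical helper name:

```python
import numpy as np

def normalize_size(img: np.ndarray, target: int = 550) -> np.ndarray:
    """Scale the image so its long side equals `target`
    (nearest-neighbour), then zero-pad the short side to `target`."""
    h, w = img.shape[:2]
    scale = target / max(h, w)                   # <1 shrinks, >1 enlarges
    new_h = max(1, round(h * scale))
    new_w = max(1, round(w * scale))
    # nearest-neighbour resample via index maps (no OpenCV dependency)
    rows = (np.arange(new_h) * h / new_h).astype(int)
    cols = (np.arange(new_w) * w / new_w).astype(int)
    resized = img[rows][:, cols]
    # place the resized image in a target x target canvas filled with 0
    out = np.zeros((target, target) + img.shape[2:], dtype=img.dtype)
    out[:new_h, :new_w] = resized
    return out

img = np.random.randint(0, 256, (300, 600, 3), dtype=np.uint8)
norm = normalize_size(img)   # 550 x 550, long side scaled down to 550
```

The same function handles both the shrink and the enlarge cases, since the scale factor is simply `target / max(h, w)`.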
After the size-normalized object image is obtained, pixel value normalization may be performed on it to obtain the target object image. For example, the pixel values may be normalized to between -1 and 1: the pixel mean and pixel variance of the size-normalized object image may be calculated, and then the mean subtracted from each pixel value and the result divided by the variance; alternatively, each pixel value may be computed as pixel value / 255 × 2 - 1.0, which also normalizes the pixel values to between -1 and 1.
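The second formula (pixel / 255 × 2 - 1.0) is a one-liner; a minimal sketch, with `normalize_pixels` as an illustrative helper name:

```python
import numpy as np

def normalize_pixels(img: np.ndarray) -> np.ndarray:
    """Map uint8 pixel values from [0, 255] to [-1, 1] using the
    pixel / 255 * 2 - 1.0 formula described above."""
    return img.astype(np.float32) / 255.0 * 2.0 - 1.0

normed = normalize_pixels(np.array([[0, 127, 255]], dtype=np.uint8))
```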
At this time, in order to improve the detection accuracy, the target object image may be input into the trained detection model, and the target object image is detected by the trained detection model, so as to obtain the object mask and the object type of the object to be sorted.
The detection model may be an instance segmentation model (YOLACT, You Only Look At CoefficienTs), which is taken as an example below; of course, the detection model may also be another type of detection model, and the specific content is not limited here.
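YOLACT produces a set of prototype masks plus per-instance mask coefficients, and an instance mask is their linear combination passed through a sigmoid. A schematic numpy sketch of that assembly step; the shapes (138 × 138, 32 prototypes) and the helper name are illustrative, not taken from the patent:

```python
import numpy as np

def assemble_mask(prototypes: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
    """YOLACT-style mask assembly: k prototype masks (H x W x k)
    linearly combined with per-instance coefficients (k,), then
    squashed with a sigmoid into a soft mask."""
    lin = prototypes @ coeffs                # (H, W) linear combination
    return 1.0 / (1.0 + np.exp(-lin))        # sigmoid -> values in (0, 1)

rng = np.random.default_rng(0)
protos = rng.standard_normal((138, 138, 32))
coeffs = rng.standard_normal(32)
mask = assemble_mask(protos, coeffs)
binary = mask > 0.5                          # thresholded object mask
```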
First, the YOLACT model needs to be trained. In some embodiments, before the target object image is detected by the trained detection model to obtain the object mask and the object type of the object to be sorted, the object sorting method may further include: acquiring a sample image containing an object; adjusting the chromaticity, the brightness and the saturation of the sample image to obtain an adjusted sample image; randomly zooming the adjusted sample image to obtain a zoomed sample image; cutting the zoomed sample image to obtain a cut sample image; generating a random number, and flipping the cut sample image according to the random number to obtain a flipped sample image; and training the detection model according to the flipped sample image to obtain the trained detection model.
Specifically, the object sorting device may obtain a sample image containing an object, the type of the object may be flexibly set according to actual needs, the sample image may be acquired by a camera or a video camera, or the sample image may be downloaded from a server, and the like.
In order to enrich the training samples and improve the training precision, a series of preprocessing steps can be performed on the acquired sample images. First, the chromaticity, brightness and saturation of the sample images are adjusted; for example, each of the chromaticity, brightness and saturation of a sample image can be scaled by a random factor in the range 0.6 to 1.4, so that an adjusted sample image can be obtained. Alternatively, only the chromaticity may be adjusted, or only the brightness, or only the saturation, or both the brightness and the saturation, and so on.
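The adjustment can be sketched as a random jitter factor drawn from 0.6 to 1.4. For brevity only the brightness factor is shown below (a full version would convert to a color space such as HSV and scale the chroma and saturation channels too); `jitter` is an illustrative name:

```python
import numpy as np

def jitter(img: np.ndarray, rng: np.random.Generator,
           low: float = 0.6, high: float = 1.4):
    """Scale pixel brightness by a random factor in [low, high],
    clipping back into the valid uint8 range."""
    factor = rng.uniform(low, high)
    out = np.clip(img.astype(np.float32) * factor, 0, 255).astype(np.uint8)
    return out, factor

rng = np.random.default_rng(0)
img = np.full((4, 4, 3), 100, dtype=np.uint8)
out, f = jitter(img, rng)
```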
The adjusted sample image may then be randomly scaled to obtain a scaled sample image, for example, a random number of 0 to 1 may be generated, and if the random number is between 0 and 0.5, the adjusted sample image may be scaled down, and if the random number is between 0.5 and 1, the adjusted sample image may be scaled up. For example, mapping relationships between different value intervals and the reduction scale may be set, after it is determined that the adjusted sample image needs to be reduced, a random number from 0 to 1 may be generated, and a value interval in which the random number is located may be determined, at this time, a current reduction scale may be determined according to the mapping relationship between the value interval and the reduction scale, and the adjusted sample image may be reduced based on the reduction scale. For another example, a mapping relationship between different numerical value sections and an enlargement scale may be set, after it is determined that the adjusted sample image needs to be enlarged, a random number from 0 to 1 may be generated, and the numerical value section in which the random number is located may be determined, in this case, a current enlargement scale may be determined from the mapping relationship between the numerical value sections and the enlargement scale, and the adjusted sample image may be enlarged based on the enlargement scale.
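The random-number-driven shrink/enlarge decision described above might look like the sketch below. The concrete factor sub-ranges are illustrative assumptions; the patent only fixes the split at 0.5:

```python
import random

def random_scale_factor(rng: random.Random) -> float:
    """Draw r in [0, 1): r < 0.5 selects a shrink factor, otherwise
    an enlarge factor. The sub-ranges [0.5, 1.0) and [1.0, 1.5) are
    illustrative stand-ins for the patent's interval-to-scale mapping."""
    r = rng.random()
    if r < 0.5:
        return 0.5 + r          # maps [0, 0.5) -> shrink factor [0.5, 1.0)
    return 1.0 + (r - 0.5)      # maps [0.5, 1) -> enlarge factor [1.0, 1.5)

rng = random.Random(42)
factors = [random_scale_factor(rng) for _ in range(100)]
```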
Secondly, the zoomed sample image may be cropped to obtain a cropped sample image, and in some embodiments, the cropping the zoomed sample image to obtain the cropped sample image may include: randomly cutting the zoomed sample image to obtain a candidate sample image after cutting; acquiring the central position of a bounding box of an object in a candidate sample image; and screening out an image which is larger than a preset value in area and contains the center position of the boundary frame of the object from the candidate sample image to obtain a cut sample image.
In order to ensure that the cropped sample image includes an object region and to improve the reliability of the sample images used for model training, the candidate images may be screened. For example, the zoomed sample image may be randomly cropped to obtain a cropped candidate sample image, and then whether the area of the candidate sample image is greater than a preset value is checked. The preset value can be flexibly set according to actual needs; for example, it may be 80% of the area of the sample image, i.e., it is checked whether the area of the candidate sample image is greater than 80% of the area of the sample image. If the area of the candidate sample image is smaller than or equal to the preset value, the candidate sample image is discarded; if it is larger, the bounding box of the object in the candidate sample image is detected. The size and shape of the bounding box can be flexibly set according to actual needs; for example, it may be a quadrilateral box circumscribing the object. It may then be determined whether the center of the bounding box of the object lies in the candidate sample image: if not, the candidate sample image is discarded; if so, the candidate sample image is taken as the cropped sample image.
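The two acceptance tests for a candidate crop (area above the preset fraction, and bounding-box center inside the crop) can be sketched as a small predicate; `crop_is_valid` and the coordinate convention are illustrative assumptions:

```python
def crop_is_valid(crop: tuple, bbox: tuple, img_area: float,
                  min_frac: float = 0.8) -> bool:
    """crop and bbox are (x1, y1, x2, y2). Keep the crop only if its
    area exceeds min_frac of the original image area AND the center
    of the object's bounding box falls inside the crop."""
    cx1, cy1, cx2, cy2 = crop
    bx1, by1, bx2, by2 = bbox
    crop_area = (cx2 - cx1) * (cy2 - cy1)
    if crop_area <= min_frac * img_area:
        return False                      # crop too small: discard
    mx, my = (bx1 + bx2) / 2, (by1 + by2) / 2
    return cx1 <= mx <= cx2 and cy1 <= my <= cy2

ok = crop_is_valid((0, 0, 90, 100), (40, 40, 60, 60), 100 * 100)   # kept
bad = crop_is_valid((0, 0, 50, 50), (40, 40, 60, 60), 100 * 100)   # too small
```

In a training loop, random crops would be redrawn until this predicate accepts one.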
After the cropped sample image is obtained, a random number may be generated, which may be a number in the range of 0 to 1 (inclusive), and the cropped sample image is flipped according to the random number to obtain a flipped sample image. For example, after a random number between 0 and 1 is generated, if the random number is between 0 and 0.5, the cropped sample image may be flipped left-right, flipped up-down, or flipped diagonally to obtain the flipped sample image; if the random number is between 0.5 and 1, the cropped sample image is not flipped.
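The random-number flipping rule can be illustrated as follows (a sketch; the function name and the list-of-rows image representation are assumptions, and only the left-right flip is shown):

```python
import random

def random_flip(image, p=0.5):
    """Flip a 2-D image (list of rows) left-right with probability p,
    mirroring the rule above: a random draw in [0, p) triggers the
    flip, otherwise the image is returned unchanged."""
    if random.random() < p:
        return [row[::-1] for row in image]
    return image
```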
Finally, the detection model is trained on the flipped sample image to obtain the trained detection model. In some embodiments, training the detection model according to the flipped sample image to obtain the trained detection model may include: performing size normalization on the flipped sample image to obtain a size-normalized sample image; performing pixel value normalization on the size-normalized sample image to obtain a target sample image; cropping a sample mask image of the object from the sample image and preprocessing the sample mask image to obtain a preprocessed mask image; acquiring the category and the bounding box of the object in the sample image; and training the detection model according to the preprocessed mask image and the category and bounding box of the object to obtain the trained detection model.
In order to improve the accuracy and efficiency of model training, normalization may be performed on the flipped sample image. Specifically, size normalization may be performed first to obtain a size-normalized sample image: if the long side of the flipped sample image is larger than a preset length value, the image is reduced so that the long side equals the preset length value, and the short side is padded with a preset numerical value until it also reaches the preset length value, yielding the size-normalized sample image; or, if the long side of the flipped sample image is smaller than the preset length value, the image is enlarged so that the long side equals the preset length value, and the short side is padded with the preset numerical value until it also reaches the preset length value. The preset length value and the preset numerical value may be set flexibly according to actual needs. For example, if the long side of the flipped sample image is larger than 550, the image is reduced so that the long side equals 550, and the short side is padded with the value 0 until its length is 550; if the long side is smaller than 550, the image is enlarged so that the long side equals 550, and the short side is padded with 0 until its length is 550. In either case a 550 × 550 size-normalized sample image is obtained.
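The long-side scaling and zero-padding step may look like the following sketch. Nearest-neighbour resampling is used here for brevity (a real pipeline would typically use bilinear interpolation), and the function name is an assumption:

```python
import numpy as np

def normalize_size(image, target=550):
    """Scale the image so its long side equals `target`, then zero-pad
    the short side to `target`, yielding a target x target array."""
    h, w = image.shape[:2]
    scale = target / max(h, w)
    new_h, new_w = max(1, round(h * scale)), max(1, round(w * scale))
    # nearest-neighbour index maps for the resize
    ys = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = image[ys[:, None], xs]
    # pad the short side with zeros up to the target length
    canvas = np.zeros((target, target) + image.shape[2:], dtype=image.dtype)
    canvas[:new_h, :new_w] = resized
    return canvas
```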
Then, pixel value normalization may be performed on the size-normalized sample image to obtain the target sample image. For example, the pixel values may be normalized to between -1 and 1: the pixel mean and pixel variance of the size-normalized sample image may be calculated, and the mean subtracted from each pixel value and the result divided by the variance; alternatively, each pixel value may be mapped as pixel value / 255 × 2 - 1.0, which likewise normalizes the pixel values to between -1 and 1.
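The second variant above is a direct arithmetic mapping of 0-255 pixel values into [-1, 1]:

```python
import numpy as np

def normalize_pixels(image):
    """Map 0-255 pixel values into [-1, 1] via value / 255 * 2 - 1.0."""
    return image.astype(np.float32) / 255.0 * 2.0 - 1.0
```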
A mask of the object is cropped from the sample image to obtain a sample mask image; the mask of the object may include the contour of the object, and the sample mask image may be an image obtained by cutting the object out of the sample image along that contour. To increase training speed, the sample mask image may be preprocessed to obtain a preprocessed mask image; for example, the sample mask image may be scaled to 16 × 16 pixels to obtain a scaled mask image (i.e., the preprocessed mask image). The object sorting device may also obtain the bounding box and category annotated for each object on the sample image; the annotated category, bounding box, and so on are ground-truth values. The YOLACT model may then be trained according to the preprocessed mask image and the category and bounding box of the object to obtain the trained YOLACT model. For example, the sample image, the bounding box of the object, the category of the object, the mask image, and the like are input into the YOLACT model; the YOLACT model computes the bounding box, category and mask of the object in the sample image, and these computed values are the predicted values.
Then, the annotated object category and the computed object category are converged through a classification loss function, the annotated bounding box and the computed bounding box are converged through a bounding-box localization loss function, and the extracted mask image and the computed object mask are converged through a mask loss function. By adjusting the parameters of the YOLACT model to suitable values, the errors between the ground-truth values and the predicted values are reduced until the loss is low and no longer decreases, at which point the trained YOLACT model is obtained.
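The three-part objective described above can be sketched as follows. This only illustrates the structure (classification loss + box localization loss + mask loss summed with weights); the specific loss forms and weight values are assumptions, not the patent's definitions:

```python
import numpy as np

def yolact_style_loss(cls_pred, cls_true, box_pred, box_true,
                      mask_pred, mask_true, w_box=1.5, w_mask=6.125):
    """Illustrative three-term objective: classification + box + mask."""
    # classification: cross-entropy on softmax scores
    exp = np.exp(cls_pred - cls_pred.max())
    probs = exp / exp.sum()
    cls_loss = -np.log(probs[cls_true] + 1e-9)
    # bounding box: absolute localization error (stand-in for smooth L1)
    box_loss = np.abs(np.asarray(box_pred) - np.asarray(box_true)).sum()
    # mask: per-pixel binary cross-entropy
    p = np.clip(mask_pred, 1e-6, 1 - 1e-6)
    mask_loss = -(mask_true * np.log(p) + (1 - mask_true) * np.log(1 - p)).mean()
    return cls_loss + w_box * box_loss + w_mask * mask_loss
```

During training, gradient descent would drive this combined loss down until it stops improving.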
After the trained YOLACT model is obtained, the target object image can be detected through the trained YOLACT model, and an object mask and an object type of the object to be sorted are obtained.
In some embodiments, detecting the target object image through the trained detection model, and obtaining the object mask and the object class of the object to be sorted may include: calculating a candidate object mask, a candidate object boundary box and a candidate object category of the object to be sorted based on the target object image through the trained detection model; screening out a predicted object mask, a predicted object boundary box and a predicted object category from the candidate object mask, the candidate object boundary box and the candidate object category through a non-maximum suppression algorithm; and determining the object mask and the object type of the object to be sorted according to the predicted object mask, the predicted object boundary box and the predicted object type.
Specifically, the candidate object masks, candidate object bounding boxes and candidate object categories of the object to be sorted may be calculated from the target object image through the trained YOLACT model. Since there may be one or more candidate object masks, one or more candidate object bounding boxes and one or more candidate object categories, a non-maximum suppression (NMS) algorithm may be used to screen the object mask with the highest probability from the candidate object masks to obtain the predicted object mask, to screen the bounding box with the highest probability from the candidate object bounding boxes to obtain the predicted object bounding box, and to screen the category with the highest probability from the candidate object categories to obtain the predicted object category. Then, the object mask and object category of the object to be sorted are determined according to the predicted object mask, predicted object bounding box and predicted object category.
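Plain non-maximum suppression as used above can be sketched in NumPy (the 0.5 IoU threshold is an assumption; YOLACT itself uses a faster batched variant called Fast NMS, which is not shown here):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box, drop boxes overlapping it above
    iou_thresh, and repeat. boxes: (N, 4) array of (x1, y1, x2, y2);
    returns the indices of the kept boxes."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # intersection-over-union of box i against the remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = rest[iou <= iou_thresh]
    return keep
```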
In some embodiments, determining the object mask and the object class of the object to be sorted based on the predicted object mask, the predicted object bounding box, and the predicted object class may include: scaling the predicted bounding box to the size proportion of the object image to obtain the bounding box of the predicted bounding box on the object image; obtaining a mask image according to a mask prototype and a mask coefficient in the predicted object mask; the mask image is scaled to the size ratio of the object image through interpolation to obtain an object mask arranged in the boundary frame; and determining the object type of the object to be sorted according to the predicted object type.
Since normalization may have been performed on the object image during detection, the detected bounding box, mask and the like may now be mapped back into the original object image. For example, the predicted bounding box may be scaled to the size ratio of the object image to obtain the true bounding box of the predicted bounding box on the object image: if the object image was enlarged 2:1 during size normalization, the predicted bounding box may now be reduced 1:2 and the reduced predicted bounding box mapped onto the object image.
The predicted object mask may include mask prototypes and mask coefficients, and the mask image may be obtained by matrix multiplication of the mask prototypes (e.g., 138 × 138 × k) and the mask coefficients (e.g., W × H × (a·k)) in the predicted object mask. The mask image may then be scaled to the size ratio of the object image through interpolation and placed within the bounding box on the object image to obtain the object mask inside the bounding box, that is, the true object mask of the object to be sorted; the object category of the object to be sorted may be determined according to the predicted object category.
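The prototype-times-coefficient assembly can be shown in a few lines; here a single instance's coefficient vector of length k is combined with H × W × k prototypes (the sigmoid and 0.5 threshold follow YOLACT's usual practice, and are assumptions relative to this text):

```python
import numpy as np

def assemble_mask(prototypes, coeffs):
    """Combine mask prototypes (H x W x k) with one instance's mask
    coefficients (k,) by matrix multiplication plus a sigmoid;
    thresholding at 0.5 gives the binary instance mask."""
    lin = prototypes @ coeffs          # (H, W): linear combination
    mask = 1.0 / (1.0 + np.exp(-lin))  # sigmoid to [0, 1]
    return (mask > 0.5).astype(np.uint8)
```

The resulting mask would then be interpolated up to the image's size ratio and cropped to the predicted bounding box.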
S103, determining the gravity center position of the object to be sorted according to the object mask of the object to be sorted.
After the object mask of the object to be sorted is obtained, in order to conveniently determine the gravity center position of the object to be sorted, binarization processing can be performed on the object mask to obtain a binarized mask image, and the gravity center position of the object to be sorted is determined according to the binarized mask image.
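Computing the center of gravity from the binarized mask amounts to taking first-order image moments, i.e., the mean coordinates of the foreground pixels:

```python
import numpy as np

def mask_centroid(binary_mask):
    """Centre of gravity of a binarized mask via first-order image
    moments: the mean row and column index of the foreground pixels."""
    ys, xs = np.nonzero(binary_mask)
    if ys.size == 0:
        return None  # empty mask: no centre of gravity
    return float(ys.mean()), float(xs.mean())
```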
In some embodiments, determining the position of the center of gravity of the object to be sorted according to the object mask of the object to be sorted may include: acquiring a mapping relation between an object in an object image and an object in a three-dimensional space; and determining the gravity center position of the object to be sorted in the three-dimensional space according to the mapping relation.
In order to improve the accuracy of determining the gravity center position of an object to be sorted, the mapping relation between the object in the object image and the object in the three-dimensional space can be obtained, the mapping relation can be flexibly set according to actual needs, the object in the two-dimensional plane on the object image can be restored to a real object in the three-dimensional space according to the mapping relation, and for example, the box in the two-dimensional plane can be restored to a box in the three-dimensional space. Then, the gravity center position of the object to be sorted in the three-dimensional space can be determined according to the mapping relation.
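The patent leaves the image-to-space mapping unspecified; one common choice is pinhole back-projection with known depth and camera intrinsics, sketched below purely as an illustration (all parameter names and values are assumptions):

```python
def pixel_to_world(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with known depth into camera-frame
    coordinates using pinhole intrinsics (fx, fy, cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth
```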
And S104, sorting the objects to be sorted according to the gravity center position and the object types.
After the gravity center position and the object type of the object to be sorted are obtained, the object to be sorted can be sorted according to the gravity center position and the object type of the object to be sorted and the like.
In some embodiments, sorting the objects to be sorted according to the gravity center position and the object category may include: controlling a mechanical arm or other grabbing equipment to grab the object to be sorted based on the gravity center position of the object in three-dimensional space, and sorting the object to the area corresponding to its object category. Since the object can be grabbed at a suitable position via its center of gravity, the stability of grabbing is improved, repeated detection after the object drops is avoided, and the efficiency of object sorting is greatly improved.
According to the method and device of the embodiments of the present application, the object image of the object to be sorted can be acquired and detected to obtain the object mask and object category of the object; the gravity center position of the object is determined according to its object mask; and the object is sorted according to the gravity center position and the object category. This scheme detects the gravity center position and object category of the object to be sorted in the object image and automatically sorts the object based on them, improving the precision and efficiency of object sorting compared with manual sorting.
The object sorting method according to the above embodiment will be described in further detail below.
Referring to fig. 2, fig. 2 is another schematic flow chart of an object sorting method according to an embodiment of the present application. The object sorting method can be applied to sorting equipment, and the following takes sorting equipment as an example to sort garbage, as shown in fig. 2, the flow of the object sorting method can be as follows:
S201, the sorting equipment acquires images of the garbage to be sorted.
For example, the garbage to be sorted may be classified into recyclable garbage, which may include beverage bottles, waste paper, textile or metal products, etc., and non-recyclable garbage, which may include kitchen garbage, harmful garbage, etc.
The sorting equipment can acquire images of garbage to be sorted from different angles through one or more cameras to obtain a plurality of images.
S202, size normalization is carried out on the image by the sorting equipment, and the image with the normalized size is obtained.
In order to improve detection accuracy and efficiency, the sorting equipment may preprocess the image, for example by size normalization: if the long side of the image is larger than a preset length value, the image is reduced so that the long side equals the preset length value, and the short side is padded with a preset numerical value until it also reaches the preset length value, yielding the size-normalized image; or, if the long side of the image is smaller than the preset length value, the image is enlarged so that the long side equals the preset length value, and the short side is padded with the preset numerical value. The preset length value and the preset numerical value may be set flexibly according to actual needs. For example, if the long side of the image is larger than 600, the image is reduced so that the long side equals 600, and the short side is padded with the value 0 until its length is 600; if the long side is smaller than 600, the image is enlarged so that the long side equals 600, and the short side is padded with 0 until its length is 600. In either case a 600 × 600 size-normalized image is obtained.
S203, the sorting equipment normalizes the pixel values of the image with the normalized size to obtain a target image.
After obtaining the size-normalized image, the sorting device may perform pixel value normalization on it to obtain a target image. For example, the sorting device may normalize the pixel values of the size-normalized image to between -1 and 1: the pixel mean and pixel variance of the image may be calculated, and the mean subtracted from each pixel value and the result divided by the variance; alternatively, each pixel value may be mapped as pixel value / 255 × 2 - 1.0, to obtain the target image.
S204, detecting the target image by the sorting equipment through the trained YOLACT model to obtain a mask and a category of the garbage to be sorted.
The sorting equipment can train the YOLACT model in advance according to the training method above to obtain a trained YOLACT model; after the trained YOLACT model is obtained, the sorting equipment can detect the target image through the trained YOLACT model to obtain the mask and category of the garbage to be sorted.
For example, candidate masks, candidate bounding boxes and candidate categories of the garbage to be sorted can be calculated from the target image through the trained YOLACT model. Since there may be one or more candidate masks, candidate bounding boxes or candidate categories, NMS can be used to screen out the highest-probability candidates to obtain the prediction mask, prediction bounding box and prediction category, respectively. Then, the mask and category of the garbage to be sorted are determined according to the prediction mask, prediction bounding box and prediction category.
Since the image may have been normalized during detection, the sorting device can now map the detected bounding box, mask and the like back into the image; for example, the predicted bounding box can be scaled to the size of the image to obtain the true bounding box of the predicted bounding box on the image. The mask prototypes (e.g., 138 × 138 × k) in the predicted mask are matrix-multiplied with the mask coefficients (e.g., W × H × (a·k)) to obtain the mask image. The mask image can then be scaled to the size ratio of the image through interpolation and placed within the bounding box on the image to obtain the mask inside the bounding box, that is, the true mask of the garbage to be sorted.
For example, as shown in fig. 3, the YOLACT model may include a backbone + FPN architecture with two branches: one branch detects the mask prototypes (e.g., 138 × 138 × k) corresponding to the garbage image, and the other branch detects the mask coefficients (i.e., Mask coefficient, e.g., W × H × (a·k)), the bounding box (i.e., Box), the class (i.e., Class), and so on; the two matrices of mask prototypes and mask coefficients are then multiplied to obtain the final mask result. In fig. 3, the backbone may use ResNet-101, conv refers to a convolution layer, a refers to the number of anchors used, c refers to the number of categories, and k refers to the dimension of the mask coefficients; specific values of parameters such as W, H, a, c and k may be set flexibly according to actual needs.
S205, the sorting equipment determines the gravity center position of the garbage to be sorted according to the mask of the garbage to be sorted.
After the mask of the garbage to be sorted is obtained, the mask can be subjected to binarization processing to obtain a binarized mask image in order to conveniently determine the gravity center position of the garbage to be sorted, and the gravity center position of the garbage to be sorted is determined according to the binarized mask image.
In order to improve the accuracy of determining the gravity center position of the garbage to be sorted, the mapping relation between the garbage in the image and the garbage in the three-dimensional space can be obtained, the mapping relation can be flexibly set according to actual needs, and the garbage of a two-dimensional plane on the image can be reduced to the real garbage in the three-dimensional space according to the mapping relation. Then, the gravity center position of the garbage to be sorted in the three-dimensional space can be determined according to the mapping relation.
S206, the sorting equipment grabs the proper position of the garbage to be sorted based on the gravity center position and sorts the garbage to be sorted to the area corresponding to the category.
The sorting equipment can control a mechanical arm or other grabbing equipment to grab a suitable position of the garbage to be sorted based on the gravity center position of the garbage in three-dimensional space, and sort the garbage to the area corresponding to its category. Since the garbage can be grabbed at a suitable position via its center of gravity, the stability of grabbing is improved, repeated detection after the garbage drops is avoided, and sorting efficiency is greatly improved. This embodiment realizes automatic garbage classification: the image obtained by the camera is used to identify the category of the garbage and its position in the image, and, to grab the garbage better, not only the position of the garbage in the image is identified but also a pixel-granularity mask of the garbage is obtained, so that the gravity center position of the garbage is determined accurately and the garbage is grabbed stably. Automatic garbage classification can save a great deal of labor.
In this embodiment, the sorting equipment can acquire the image of the garbage to be sorted, detect the image through the YOLACT model to obtain the mask and category of the garbage, determine the gravity center position of the garbage according to its mask, and sort the garbage according to the gravity center position and category. This scheme can quickly and accurately detect the gravity center position and category of the garbage to be sorted in the image and automatically sort the garbage based on them, greatly reducing manual labor and improving the accuracy and efficiency of garbage sorting.
In order to better implement the object sorting method provided by the embodiment of the application, the embodiment of the application also provides a device based on the object sorting method. Wherein the meaning of the noun is the same as that in the above object sorting method, and the specific implementation details can refer to the description in the method embodiment.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an object sorting apparatus according to an embodiment of the present disclosure, wherein the object sorting apparatus 300 may include an obtaining module 301, a detecting module 302, a determining module 303, a sorting module 304, and the like.
The acquiring module 301 is configured to acquire an object image of an object to be sorted.
The detecting module 302 is configured to detect an object image to obtain an object mask and an object type of an object to be sorted.
The determining module 303 is configured to determine a gravity center position of the object to be sorted according to the object mask of the object to be sorted.
And the sorting module 304 is used for sorting the objects to be sorted according to the gravity center position and the object types.
In some embodiments, the detection module 302 may include a first normalization unit, a second normalization unit, a detection unit, and the like, which may be specifically as follows:
the first normalization unit is used for carrying out size normalization on the object image to obtain the object image with the normalized size;
the second normalization unit is used for carrying out pixel value normalization on the object image with the normalized size to obtain a target object image;
and the detection unit is used for detecting the target object image through the trained detection model to obtain an object mask and an object category of the object to be sorted.
In some embodiments, the first normalization unit is specifically configured to: if the long side of the object image is larger than the preset length value, reducing the object image to enable the long side to be the preset length value, and filling the short side of the object image with a preset numerical value to obtain the object image with the normalized size; or if the long edge of the object image is smaller than the preset length value, amplifying the object image to enable the long edge to be the preset length value, and filling the short edge of the object image with a preset numerical value to obtain the object image with the normalized size.
In some embodiments, the detection unit may include a calculation subunit, a screening subunit, a determination subunit, and the like, and specifically may be as follows:
the calculating subunit is used for calculating a candidate object mask, a candidate object boundary frame and a candidate object category of the object to be sorted based on the target object image through the trained detection model;
the screening subunit is used for screening out a predicted object mask, a predicted object boundary box and a predicted object category from the candidate object mask, the candidate object boundary box and the candidate object category through a non-maximum suppression algorithm;
and the determining subunit is used for determining the object mask and the object type of the object to be sorted according to the predicted object mask, the predicted object boundary box and the predicted object type.
In some embodiments, the determining subunit is specifically configured to: scaling the predicted bounding box to the size proportion of the object image to obtain the bounding box of the predicted bounding box on the object image; obtaining a mask image according to a mask prototype and a mask coefficient in the predicted object mask; the mask image is scaled to the size ratio of the object image through interpolation to obtain an object mask arranged in the boundary frame; and determining the object type of the object to be sorted according to the predicted object type.
In some embodiments, the object sorting apparatus may further include a sample image obtaining module, an adjusting module, a scaling module, a cropping module, a flipping module, a training module, and the like, which may specifically be as follows:
the sample image acquisition module is used for acquiring a sample image containing an object;
the adjusting module is used for adjusting the chromaticity, the brightness and the saturation of the sample image to obtain an adjusted sample image;
the scaling module is used for randomly scaling the adjusted sample image to obtain a scaled sample image;
the cropping module is used for cropping the scaled sample image to obtain a cropped sample image;
the flipping module is used for generating a random number and flipping the cropped sample image according to the random number to obtain a flipped sample image;
and the training module is used for training the detection model according to the flipped sample image to obtain the trained detection model.
In some embodiments, the training module specifically includes a third normalization unit, a fourth normalization unit, a processing unit, an obtaining unit, a training unit, and the like, and specifically may be as follows:
the third normalization unit is used for performing size normalization on the flipped sample image to obtain a size-normalized sample image;
the fourth normalization unit is used for carrying out pixel value normalization on the sample image with the normalized size to obtain a target sample image;
the processing unit is used for cropping a sample mask image of the object from the sample image and preprocessing the sample mask image to obtain a preprocessed mask image;
the acquisition unit is used for acquiring the category and the bounding box of the object in the sample image;
and the training unit is used for training the detection model according to the preprocessed mask image, the class of the object and the bounding box to obtain the trained detection model.
In some embodiments, the cropping module is specifically configured to:
randomly cropping the scaled sample image to obtain cropped candidate sample images;
acquiring the central position of a bounding box of an object in a candidate sample image;
and screening out, from the candidate sample images, an image whose area is larger than a preset value and which contains the center position of the bounding box of the object, to obtain the cropped sample image.
In some embodiments, the determining module 303 is specifically configured to:
acquiring a mapping relation between an object in an object image and an object in a three-dimensional space;
and determining the gravity center position of the object to be sorted in the three-dimensional space according to the mapping relation.
In some embodiments, sorting module 304 is specifically configured to:
and grabbing the objects to be sorted based on the gravity center positions of the objects to be sorted in the three-dimensional space, and sorting the objects to be sorted to the areas corresponding to the object categories.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
According to the embodiment of the application, the obtaining module 301 can obtain the object image of the object to be sorted, the detecting module 302 can detect the object image to obtain the object mask and the object type of the object to be sorted, the determining module 303 can determine the gravity center position of the object to be sorted according to the object mask of the object to be sorted, and the sorting module 304 can sort the object to be sorted according to the gravity center position and the object type. According to the scheme, the gravity center position and the object type of the object to be sorted in the object image are detected, the object to be sorted is automatically sorted based on the gravity center position and the object type, and the sorting precision and efficiency of the object are improved.
Accordingly, an embodiment of the present invention further provides a sorting apparatus, as shown in fig. 5, the sorting apparatus may include Radio Frequency (RF) circuits 401, a memory 402 including one or more computer-readable storage media, an input unit 403, a display unit 404, a sensor 405, an audio circuit 406, a Wireless Fidelity (WiFi) module 407, a processor 408 including one or more processing cores, and a power supply 409. Those skilled in the art will appreciate that the sorting device configuration shown in fig. 5 does not constitute a limitation of the sorting device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the RF circuit 401 may be used for receiving and transmitting signals during a message transmission or communication process, and in particular, for receiving downlink information of a base station and then sending the received downlink information to the one or more processors 408 for processing; in addition, data relating to uplink is transmitted to the base station. In general, the RF circuitry 401 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuitry 401 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
The memory 402 may be used to store software programs and modules, and the processor 408 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the sorting apparatus (such as audio data, a phone book, etc.), and the like. Further, the memory 402 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 408 and the input unit 403 with access to the memory 402.
The input unit 403 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. In particular, in a specific embodiment, the input unit 403 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, may collect touch operations by a user on or near it (e.g., operations performed by the user on or near the touch-sensitive surface using a finger, a stylus, or any other suitable object or attachment) and drive the corresponding connection device according to a predetermined program. Alternatively, the touch-sensitive surface may comprise two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 408, and can receive and execute commands from the processor 408. In addition, the touch-sensitive surface may be implemented using various types, such as resistive, capacitive, infrared, and surface acoustic wave types. Besides the touch-sensitive surface, the input unit 403 may include other input devices. In particular, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 404 may be used to display information input by or provided to the user as well as various graphical user interfaces of the sorting apparatus, which may be made up of graphics, text, icons, video, and any combination thereof. The Display unit 404 may include a Display panel, and optionally, the Display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch-sensitive surface may overlay the display panel, and when a touch operation is detected on or near the touch-sensitive surface, the touch operation is transmitted to the processor 408 to determine the type of touch event, and then the processor 408 provides a corresponding visual output on the display panel according to the type of touch event. Although in FIG. 5 the touch-sensitive surface and the display panel are two separate components to implement input and output functions, in some embodiments the touch-sensitive surface may be integrated with the display panel to implement input and output functions.
The sorting apparatus may also include at least one sensor 405, such as a light sensor, a motion sensor, and other sensors. In particular, the light sensor may include an ambient light sensor that can adjust the brightness of the display panel according to the brightness of ambient light, and a proximity sensor that can turn off the display panel and/or the backlight when the sorting apparatus is moved close to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes) and can detect the magnitude and direction of gravity when stationary; it can be used for applications that recognize the attitude of the sorting device (such as switching between horizontal and vertical screens, related games, and magnetometer attitude calibration), vibration-recognition related functions (such as a pedometer and tapping), and the like. As for the other sensors that can be configured in the sorting equipment, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, a detailed description is omitted here.
The audio circuit 406, a speaker, and a microphone may provide an audio interface between the user and the sorting equipment. The audio circuit 406 may transmit the electrical signal converted from the received audio data to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts the collected sound signal into an electrical signal, which is received by the audio circuit 406 and converted into audio data; the audio data is then output to the processor 408 for processing and sent via the RF circuit 401 to, for example, another sorting apparatus, or output to the memory 402 for further processing. The audio circuit 406 may also include an earphone jack to allow peripheral headphones to communicate with the sorting equipment.
WiFi is a short-range wireless transmission technology, and through the WiFi module 407 the sorting device can help the user send and receive e-mail, browse web pages, access streaming media, and the like; it provides the user with wireless broadband internet access. Although fig. 5 shows the WiFi module 407, it is understood that it is not an essential component of the sorting equipment and can be omitted entirely as required within a scope that does not change the essence of the invention.
The processor 408 is the control center of the sorting equipment. It connects the various parts of the entire sorting equipment using various interfaces and lines, and performs the various functions of the sorting equipment and processes data by running or executing the software programs and/or modules stored in the memory 402 and invoking the data stored in the memory 402, thereby monitoring the sorting equipment as a whole. Optionally, the processor 408 may include one or more processing cores; preferably, the processor 408 may integrate an application processor, which mainly handles the operating system, user interface, applications, etc., and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor described above may also not be integrated into the processor 408.
The sorting apparatus also includes a power supply 409 (e.g., a battery) for powering the various components. Preferably, the power supply may be logically connected to the processor 408 via a power management system, so that functions such as managing charging, discharging, and power consumption are performed through the power management system. The power supply 409 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other such components.
Although not shown, the sorting apparatus may further include a camera, a bluetooth module, etc., which will not be described herein. Specifically, in this embodiment, the processor 408 in the sorting apparatus loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 408 runs the application program stored in the memory 402, so as to perform the following functions:
acquiring an object image of an object to be sorted; detecting the object image to obtain an object mask and an object category of the object to be sorted; determining the gravity center position of the object to be sorted according to the object mask of the object to be sorted; and sorting the objects to be sorted according to the gravity center position and the object types.
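The gravity-center step above is stated but not spelled out in this passage. As a minimal sketch, assuming the object mask is a binary image and every foreground pixel carries equal weight, the gravity center can be taken as the centroid of the mask pixels:

```python
import numpy as np

def mask_centroid(mask):
    """Compute the gravity center (centroid) of a binary object mask.

    Treats every foreground pixel as unit mass, so the centroid is the
    mean of the foreground pixel coordinates.
    """
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        raise ValueError("empty mask")
    return float(xs.mean()), float(ys.mean())

# A 5x5 mask whose 3x3 foreground square is centered at (2, 2)
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True
cx, cy = mask_centroid(mask)  # pixel coordinates of the gravity center
```

For a mask of uniform density this equals the physical center of gravity of the object's silhouette; a real system would map these pixel coordinates into the robot's workspace.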
In some embodiments, when the object image is detected to obtain the object mask and the object category of the object to be sorted, the processor 408 is further configured to perform: carrying out size normalization on the object image to obtain the object image with the normalized size; carrying out pixel value normalization on the object image with the normalized size to obtain a target object image; and detecting the target object image through the trained detection model to obtain an object mask and an object category of the object to be sorted.
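One hedged reading of the two normalization steps is sketched below; the target size and the mean/std constants are illustrative assumptions, since the passage does not fix them:

```python
import numpy as np

TARGET_SIZE = (64, 64)  # assumed network input size; not specified in the patent

def preprocess(image, mean=127.5, std=127.5):
    """Size-normalize, then pixel-value-normalize an object image.

    Nearest-neighbour resampling keeps the sketch dependency-free;
    a real pipeline would typically use bilinear interpolation.
    """
    h, w = image.shape[:2]
    th, tw = TARGET_SIZE
    rows = np.arange(th) * h // th
    cols = np.arange(tw) * w // tw
    resized = image[rows][:, cols]
    # map pixel values from [0, 255] to roughly [-1, 1]
    return (resized.astype(np.float32) - mean) / std

img = np.full((128, 96), 255, dtype=np.uint8)
target = preprocess(img)
```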
In some embodiments, when the trained detection model is used to detect the target object image, and an object mask and an object category of the object to be sorted are obtained, the processor 408 is further configured to perform: calculating a candidate object mask, a candidate object boundary box and a candidate object category of the object to be sorted based on the target object image through the trained detection model; screening out a predicted object mask, a predicted object boundary box and a predicted object category from the candidate object mask, the candidate object boundary box and the candidate object category through a non-maximum suppression algorithm; and determining the object mask and the object type of the object to be sorted according to the predicted object mask, the predicted object boundary box and the predicted object type.
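Non-maximum suppression itself is standard; a minimal box-only sketch (the mask and class entries would simply be kept or discarded alongside their boxes) might look like:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Keep the highest-scoring box, drop candidates that overlap it, repeat."""
    order = list(np.argsort(scores)[::-1])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
kept = nms(boxes, np.array([0.9, 0.8, 0.7]))  # the two 10x10 boxes overlap
```

The overlap threshold of 0.5 is a common default, not a value taken from the patent.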
In some embodiments, before the trained detection model detects the target object image and obtains the object mask and the object class of the object to be sorted, the processor 408 is further configured to: acquiring a sample image containing an object; adjusting the chromaticity, the brightness and the saturation of the sample image to obtain an adjusted sample image; randomly zooming the adjusted sample image to obtain a zoomed sample image; cutting the zoomed sample image to obtain a cut sample image; generating a random number, and turning the cut sample image according to the random number to obtain a turned sample image; and training the detection model according to the overturned sample image to obtain the trained detection model.
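A hedged sketch of this augmentation chain follows; the colour jitter is collapsed to a single brightness gain, and all ranges are illustrative assumptions rather than values from the patent:

```python
import random
import numpy as np

def augment(img, rng):
    """One augmentation pass: jitter, random zoom, crop, random flip."""
    # 1. colour jitter, simplified here to a single brightness gain
    img = np.clip(img * rng.uniform(0.8, 1.2), 0, 255)
    # 2. random zoom by nearest-neighbour index repetition
    s = rng.uniform(1.0, 1.5)
    h, w = img.shape[:2]
    rows = (np.arange(int(h * s)) / s).astype(int)
    cols = (np.arange(int(w * s)) / s).astype(int)
    img = img[rows][:, cols]
    # 3. crop back to the original size at a random offset
    y0 = rng.randrange(img.shape[0] - h + 1)
    x0 = rng.randrange(img.shape[1] - w + 1)
    img = img[y0:y0 + h, x0:x0 + w]
    # 4. flip (or not) according to a generated random number
    if rng.random() < 0.5:
        img = img[:, ::-1]
    return img

out = augment(np.full((8, 8), 100.0), random.Random(0))
```

A real pipeline would also transform the masks and bounding boxes by the same zoom, crop, and flip so the annotations stay aligned with the image.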
In some embodiments, when the detection model is trained according to the flipped sample image to obtain the trained detection model, the processor 408 is further configured to perform: carrying out size normalization on the flipped sample image to obtain a sample image with the size normalized; carrying out pixel value normalization on the sample image with the normalized size to obtain a target sample image; intercepting a sample mask image of an object from the sample image, and preprocessing the sample mask image to obtain a preprocessed mask image; acquiring the category and the bounding box of an object in the sample image; and training the detection model according to the preprocessed mask image, the category of the object, and the bounding box to obtain the trained detection model.
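The per-object training target described above might be bundled as follows. This is a hypothetical helper: the patent does not specify the tensor layout of its detection model, and binarization stands in for the unspecified mask preprocessing:

```python
import numpy as np

def make_training_target(sample_mask, bbox, label):
    """Cut the object's mask patch out at its bounding box and pair it
    with the class label and box, yielding one training target."""
    x1, y1, x2, y2 = bbox
    patch = sample_mask[y1:y2, x1:x2]
    # a simple stand-in preprocessing step: binarize the mask patch
    patch = (patch > 0).astype(np.float32)
    return {"mask": patch, "bbox": bbox, "label": label}

mask = np.zeros((20, 20), dtype=np.uint8)
mask[5:10, 5:15] = 255  # a 10x5 object region
target = make_training_target(mask, (5, 5, 15, 10), label=2)
```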
In some embodiments, when cropping the scaled sample image, resulting in a cropped sample image, processor 408 is further configured to perform: randomly cutting the zoomed sample image to obtain a candidate sample image after cutting; acquiring the central position of a bounding box of an object in a candidate sample image; and screening out an image which is larger than a preset value in area and contains the center position of the boundary frame of the object from the candidate sample image to obtain a cut sample image.
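The crop-screening rule above — keep a random crop only if its area exceeds a preset value and it contains the center of the object's bounding box — can be sketched as follows, with the area threshold as an assumed parameter:

```python
def keep_crop(crop_box, obj_box, min_area):
    """Accept a candidate crop (x1, y1, x2, y2) only if it is large
    enough and contains the center of the object's bounding box."""
    cx = (obj_box[0] + obj_box[2]) / 2.0
    cy = (obj_box[1] + obj_box[3]) / 2.0
    x1, y1, x2, y2 = crop_box
    area = (x2 - x1) * (y2 - y1)
    return area > min_area and x1 <= cx <= x2 and y1 <= cy <= y2

# A large crop containing the object's box center passes the screen;
# a small crop far from the object does not.
ok = keep_crop((0, 0, 100, 100), (40, 40, 60, 60), min_area=1000)
bad = keep_crop((0, 0, 10, 10), (40, 40, 60, 60), min_area=1000)
```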
In some embodiments, when the objects to be sorted are sorted according to the position of the center of gravity and the object class, the processor 408 is further configured to perform: and grabbing the object to be sorted based on the gravity center position, and sorting the object to be sorted to an area corresponding to the object category.
In the above embodiments, the descriptions of the embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description of the object sorting method, and are not described herein again.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, the present application provides a storage medium, in which a computer program is stored, where the computer program can be loaded by a processor to execute the steps in any one of the object sorting methods provided in the present application. For example, the computer program may perform the steps of:
acquiring an object image of an object to be sorted; detecting the object image to obtain an object mask and an object category of the object to be sorted; determining the gravity center position of the object to be sorted according to the object mask of the object to be sorted; and sorting the objects to be sorted according to the gravity center position and the object types.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any object sorting method provided in the embodiments of the present application, the beneficial effects that can be achieved by any object sorting method provided in the embodiments of the present application can likewise be achieved; for details, see the foregoing embodiments, which are not repeated here.
The object sorting method, apparatus, sorting device, and storage medium provided in the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the method and core ideas of the present application. Meanwhile, for those skilled in the art, there may be variations in the specific implementation and application scope according to the ideas of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A method of sorting objects, comprising:
acquiring an object image of an object to be sorted;
detecting the object image to obtain an object mask and an object category of the object to be sorted;
determining the gravity center position of the object to be sorted according to the object mask of the object to be sorted;
and sorting the objects to be sorted according to the gravity center position and the object types.
2. The object sorting method according to claim 1, wherein the detecting the object image to obtain the object mask and the object class of the object to be sorted comprises:
carrying out size normalization on the object image to obtain an object image with the size normalized;
carrying out pixel value normalization on the object image with the normalized size to obtain a target object image;
and detecting the target object image through the trained detection model to obtain an object mask and an object category of the object to be sorted.
3. The object sorting method according to claim 2, wherein the detecting the target object image by the trained detection model to obtain the object mask and the object class of the object to be sorted comprises:
calculating a candidate object mask, a candidate object boundary box and a candidate object category of the object to be sorted based on the target object image through the trained detection model;
screening out a predicted object mask, a predicted object boundary box and a predicted object category from the candidate object mask, the candidate object boundary box and the candidate object category through a non-maximum suppression algorithm;
and determining the object mask and the object category of the object to be sorted according to the predicted object mask, the predicted object boundary box and the predicted object category.
4. The object sorting method according to claim 3, wherein the determining the object mask and the object class of the object to be sorted based on the predicted object mask, the predicted object bounding box, and the predicted object class comprises:
scaling the prediction bounding box to the size proportion of the object image to obtain a bounding box of the prediction bounding box on the object image;
obtaining a mask image according to the mask prototype and the mask coefficient in the predicted object mask;
zooming the mask image to the size ratio of the object image through interpolation to obtain an object mask arranged in the boundary frame;
and determining the object type of the object to be sorted according to the predicted object type.
5. The object sorting method according to claim 2, wherein before the trained detection model detects the target object image and obtains the object mask and the object class of the object to be sorted, the method further comprises:
acquiring a sample image containing an object;
adjusting the chromaticity, the brightness and the saturation of the sample image to obtain an adjusted sample image;
randomly zooming the adjusted sample image to obtain a zoomed sample image;
cutting the zoomed sample image to obtain a cut sample image;
generating a random number, and turning the cut sample image according to the random number to obtain a turned sample image;
and training a detection model according to the overturned sample image to obtain the trained detection model.
6. The object sorting method according to claim 5, wherein the training of the detection model according to the flipped sample image, and obtaining the trained detection model comprises:
carrying out size normalization on the turned sample image to obtain a sample image with the size normalized;
carrying out pixel value normalization on the sample image with the normalized size to obtain a target sample image;
intercepting a sample mask image of an object from the sample image, and preprocessing the sample mask image to obtain a preprocessed mask image;
acquiring the category and the bounding box of the object in the sample image;
and training a detection model according to the preprocessed mask image, the class of the object and the bounding box to obtain the trained detection model.
7. The method for sorting objects according to claim 5, wherein the cropping the scaled sample image to obtain a cropped sample image comprises:
randomly cutting the zoomed sample image to obtain a candidate sample image after cutting;
acquiring the central position of a bounding box of an object in the candidate sample image;
and screening out an image which is larger than a preset value in area and contains the center position of the boundary frame of the object from the candidate sample image to obtain a cut sample image.
8. An object sorting apparatus, comprising:
the acquisition module is used for acquiring an object image of an object to be sorted;
the detection module is used for detecting the object image to obtain an object mask and an object type of the object to be sorted;
the determining module is used for determining the gravity center position of the object to be sorted according to the object mask of the object to be sorted;
and the sorting module is used for sorting the objects to be sorted according to the gravity center position and the object types.
9. A sorting device, characterized in that it comprises a processor and a memory, in which a computer program is stored, which when called by the processor performs an object sorting method according to any one of claims 1 to 7.
10. A storage medium for storing a computer program adapted to be loaded by a processor for performing the method of sorting objects according to any of claims 1 to 7.
CN201910638417.3A 2019-07-16 2019-07-16 Object sorting method, device, sorting equipment and storage medium Active CN112241747B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910638417.3A CN112241747B (en) 2019-07-16 2019-07-16 Object sorting method, device, sorting equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910638417.3A CN112241747B (en) 2019-07-16 2019-07-16 Object sorting method, device, sorting equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112241747A true CN112241747A (en) 2021-01-19
CN112241747B CN112241747B (en) 2024-08-20

Family

ID=74166551

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910638417.3A Active CN112241747B (en) 2019-07-16 2019-07-16 Object sorting method, device, sorting equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112241747B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102867176A (en) * 2012-09-11 2013-01-09 清华大学深圳研究生院 Face image normalizing method
CN104058260A (en) * 2013-09-27 2014-09-24 沈阳工业大学 Robot automatic stacking method based on visual processing
US20170349385A1 (en) * 2014-10-29 2017-12-07 Fives Intralogistics S.P.A. Con Socio Unico A device for feeding items to a sorting machine and sorting machine
CN107961990A (en) * 2017-12-27 2018-04-27 华侨大学 A kind of building waste sorting system and method for sorting
CN108491892A (en) * 2018-04-05 2018-09-04 聊城大学 fruit sorting system based on machine vision
CN109389074A (en) * 2018-09-29 2019-02-26 东北大学 A kind of expression recognition method extracted based on human face characteristic point


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494884A (en) * 2022-02-10 2022-05-13 北京工业大学 Automatic garbage sorting multi-target detection method
CN114494884B (en) * 2022-02-10 2024-06-07 北京工业大学 Multi-target detection method for automatic garbage sorting
CN114444622A (en) * 2022-04-11 2022-05-06 中国科学院微电子研究所 Fruit detection system and method based on neural network model
CN115520786A (en) * 2022-08-25 2022-12-27 江苏广坤铝业有限公司 Aluminum material discharging auxiliary transfer equipment and transfer control method
CN115520786B (en) * 2022-08-25 2024-02-27 江苏广坤铝业有限公司 Aluminum product discharging auxiliary transfer equipment and transfer control method

Also Published As

Publication number Publication date
CN112241747B (en) 2024-08-20

Similar Documents

Publication Publication Date Title
CN105867751B (en) Operation information processing method and device
CN111143015B (en) Screen capturing method and electronic equipment
CN112241747B (en) Object sorting method, device, sorting equipment and storage medium
CN109769065B (en) Message display method and device, mobile terminal and storage medium
CN105989572B (en) Picture processing method and device
CN109002759A (en) text recognition method, device, mobile terminal and storage medium
CN106296634B (en) A kind of method and apparatus detecting similar image
CN107193518A (en) The method and terminal device of a kind of presentation of information
CN110070129B (en) Image detection method, device and storage medium
CN110443171B (en) Video file classification method and device, storage medium and terminal
WO2016173350A1 (en) Picture processing method and device
CN109388456B (en) Head portrait selection method and mobile terminal
CN112488914A (en) Image splicing method, device, terminal and computer readable storage medium
CN114706895A (en) Emergency event plan recommendation method and device, storage medium and electronic equipment
CN108600544A (en) A kind of Single-hand control method and terminal
CN103501487A (en) Method, device, terminal, server and system for updating classifier
CN109658198B (en) Commodity recommendation method and mobile terminal
CN109688611B (en) Frequency band parameter configuration method, device, terminal and storage medium
CN108510266A (en) A kind of Digital Object Unique Identifier recognition methods and mobile terminal
CN104834655B (en) A kind of method and apparatus for the mass parameter for showing Internet resources
CN106302101B (en) Message reminding method, terminal and server
CN111007980A (en) Information input method and terminal equipment
CN108520760B (en) Voice signal processing method and terminal
CN108366167B (en) Message reminding method and mobile terminal
CN110490272B (en) Image content similarity analysis method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant