CN117563960A - Automatic appearance detection method and device - Google Patents


Info

Publication number
CN117563960A
CN117563960A
Authority
CN
China
Prior art keywords
sorting
objects
sorting container
container
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311640799.6A
Other languages
Chinese (zh)
Inventor
熊云飞
吴振威
张聪
陈绪兵
杨业成
冯建龙
罗灿
李海涛
刘海军
童庆武
赵崇禧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei Academy Of Scientific And Technical Information
Wuhan Institute of Technology
Wuhan Fiberhome Technical Services Co Ltd
Original Assignee
Hubei Academy Of Scientific And Technical Information
Wuhan Institute of Technology
Wuhan Fiberhome Technical Services Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei Academy Of Scientific And Technical Information, Wuhan Institute of Technology, and Wuhan Fiberhome Technical Services Co Ltd
Priority to CN202311640799.6A
Publication of CN117563960A
Legal status: Pending

Classifications

    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B07 — SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07C — POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00 — Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/04 — Sorting according to size
    • B07C5/10 — Sorting according to size measured by light-responsive means
    • B07C5/34 — Sorting according to other particular properties
    • B07C5/3404 — Sorting according to properties of containers or receptacles, e.g. rigidity, leaks, fill-level
    • B07C5/36 — Sorting apparatus characterised by the means used for distribution
    • B07C5/361 — Processing or control devices therefor, e.g. escort memory
    • B07C5/362 — Separating or distributor mechanisms

Landscapes

  • Sorting Of Articles (AREA)

Abstract

The invention relates to the technical field of automatic sorting, and provides an automatic appearance detection method and device. The method comprises the following steps: acquiring image data of a logistics packaging box by using an image acquisition device; performing image recognition on the image data by using an object recognition model to obtain the objects contained in the logistics packaging box; extracting each object from the image data and analyzing it to obtain its size information; detecting the filling state of each sorting container using a sensor; and determining a sorting strategy for the objects by using a preset algorithm according to the size information of the objects and the filling state of each sorting container. By analyzing the size information of the objects together with the filling state of the sorting containers, the invention can determine a sorting strategy that avoids overflow (bin explosion) of the sorting containers during large-scale sorting.

Description

Automatic appearance detection method and device
Technical Field
The invention relates to the technical field of automatic sorting, in particular to an automatic appearance detection method and device.
Background
Intelligent sorting means that goods are conveyed automatically, according to instructions, from the moment they enter the sorting equipment until they reach a designated position. Sorting equipment of this kind offers high throughput and can handle a wide range of goods in large quantities, which is why it is favored by present-day logistics sorting centers.
Intelligent sorting relies on automatic appearance detection robots, which play an important role in the process. Using advanced vision techniques and image processing algorithms, these robots can quickly and accurately detect the appearance characteristics of goods, such as shape, color, and labels. Through its connection with the sorting system, an appearance detection robot acquires the appearance information of goods in real time and compares it with a preset standard. If the appearance of the goods does not conform to the standard, the robot automatically sorts the goods into the corresponding exception-handling area for subsequent processing.
However, conventional rule engines and traditional computer vision methods are often unable to handle complex scenes and varying appearances, so object recognition accuracy is limited. Moreover, conventional sorting strategies tend to be based on simple rules or experience and lack intelligence and adaptivity, so that in a large-scale sorting process the sorting containers used to store boxes of the corresponding sizes may overflow (a so-called bin explosion).
In view of this, overcoming the drawbacks of the prior art is a problem to be solved in the art.
Disclosure of Invention
The technical problem the invention aims to solve is that existing rule engines and traditional computer vision methods have difficulty handling complex scenes and varying appearances, so that object recognition accuracy is limited; and that existing sorting strategies are based on simple rules or experience and lack intelligence and adaptivity, so that the sorting containers used to store boxes of the corresponding sizes may overflow during large-scale sorting.
The invention adopts the following technical scheme:
in a first aspect, the present invention provides an automated appearance inspection method comprising:
acquiring image data of the logistics packaging box by using an image acquisition device;
performing image recognition on the image data by using an object recognition model to obtain objects contained in the logistics packaging box;
extracting the object from the image data for analysis to obtain the size information of the object;
detecting a filling state of each sorting container using a sensor;
and determining the sorting strategy of the objects by using a preset algorithm according to the size information of the objects and the filling state of each sorting container.
Preferably, a plurality of image acquisition devices are used for acquiring image data of the logistics packaging box at a plurality of different acquisition angles to obtain objects contained in the plurality of image data;
the step of extracting the object from the image data for analysis to obtain the size information of the object specifically comprises the following steps: and calculating the consistency of the objects contained in the image data, judging whether the objects contained in the image data are the same object according to the consistency, if so, matching the same object contained in the image data to obtain a three-dimensional model of the object, and analyzing the three-dimensional model to obtain the size information of the object.
Preferably, the object recognition model is a deep learning model;
the deep learning model is trained in advance using an image dataset to facilitate image recognition using the trained deep learning model.
Preferably, a laser range finder is also arranged at the position of the image acquisition equipment;
and measuring the distance information between the image acquisition equipment and the object in the logistics packaging box by using the laser range finder so as to obtain the size information of the object by using the distance information and the image data of the object together.
Preferably, the detecting the filling state of each sorting container by using the sensor specifically includes:
detecting weight information of the sorting container by using a weight sensor, and comparing the weight information with the maximum bearing weight of the sorting container to obtain the filling state of the sorting container; or,
and scanning the occupied space size in the sorting container by using a laser scanner, and comparing the occupied space size with the space size of the sorting container to obtain the filling state of the sorting container.
Preferably, the determining, according to the size information of the objects and the filling state of each sorting container, a sorting strategy of the objects by using a preset algorithm specifically includes:
Identifying each first sorting container for accommodating the object of the category according to the category of the object, and judging whether the first sorting container capable of accommodating the object exists or not according to the filling state of each first sorting container and the size information of the object;
sorting the objects into a first sorting container capable of holding the objects if there is a first sorting container capable of holding the objects;
if the first sorting container capable of accommodating the objects does not exist, determining the sorting strategy of the objects together based on the self-safety coefficient of the objects, the influence coefficient of the objects on the outside, the self-safety coefficient of the objects existing in each second sorting container and the influence coefficient of the objects existing in each second sorting container on the outside; wherein the second sorting container is a sorting container for accommodating other kinds of objects.
Preferably, the determining the sorting policy of the object based on the self-safety coefficient of the object, the influence coefficient of the object to the outside, the self-safety coefficient of the object existing in each second sorting container, and the influence coefficient of the object existing in each second sorting container to the outside includes:
taking the highest self-safety coefficient among the objects already present in a second sorting container as the self-safety coefficient of that second sorting container;
taking the lowest influence coefficient on the outside among the objects already present in a second sorting container as the influence coefficient of that second sorting container;
taking, as a screening condition, that the influence coefficient of a second sorting container is not lower than the self-safety coefficient of the object and that the self-safety coefficient of the second sorting container is not higher than the influence coefficient of the object on the outside, and screening each second sorting container to obtain the third sorting containers meeting the screening condition;
judging whether a third sorting container capable of accommodating the objects exists or not according to the filling state of each third sorting container and the size information of the objects;
if there is a third sorting container capable of holding the objects, the objects are sorted into the third sorting container capable of holding the objects.
Preferably, when there are a plurality of first sorting containers capable of containing the objects, said sorting the objects into the first sorting containers capable of containing the objects, comprises in particular:
calculating the accommodation coefficient of each first sorting container capable of accommodating the objects, and sorting the objects into the first sorting container with the highest accommodation coefficient;
Wherein the accommodation coefficient is a function of k1 and k2 (preset coefficients), V_o (the size information of the object), V_c (the filling state of the first sorting container), and Dis(o, c) (the distance between the object and the first sorting container).
In a second aspect, the present invention provides an automated appearance inspection device comprising an image acquisition module, an object recognition module, a size measurement module, a sorting container management module, and a control and decision module;
the image acquisition module is used for acquiring image data of the logistics packaging box by using image acquisition equipment;
the dimension measuring module is used for carrying out image recognition on the image data by using an object recognition model to obtain objects contained in the logistics packaging box; extracting the object from the image data for analysis to obtain the size information of the object;
the sorting container management module is used for detecting the filling state of each sorting container by using a sensor;
the control and decision module is used for determining the sorting strategy of the objects by using a preset algorithm according to the size information of the objects and the filling state of each sorting container.
In a third aspect, the present invention further provides an automated appearance detection device for implementing the automated appearance detection method of the first aspect, where the device includes:
At least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor for performing the automated appearance detection method of the first aspect.
In a fourth aspect, the present invention also provides a non-volatile computer storage medium storing computer-executable instructions for execution by one or more processors to perform the automated appearance detection method of the first aspect.
The invention can determine the sorting strategy by analyzing the size information of the objects and the filling state of the sorting containers, and avoid the possibility of bin explosion of the sorting containers in large-scale sorting.
Drawings
In order to more clearly illustrate the technical solution of the embodiments of the present invention, the drawings that are required to be used in the embodiments of the present invention will be briefly described below. It is evident that the drawings described below are only some embodiments of the present invention and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
Fig. 1 is a schematic flow chart of an automatic appearance detection method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of an automatic appearance detection method according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of an automated appearance inspection method according to an embodiment of the present invention;
fig. 4 is a flow chart of an automatic appearance detection method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an automated appearance inspection method according to an embodiment of the present invention;
fig. 6 is a schematic diagram of an architecture of an automatic appearance detecting device according to an embodiment of the present invention;
fig. 7 is a schematic flow chart of an automatic appearance detection method according to an embodiment of the present invention;
fig. 8 is a schematic diagram of an architecture of an automatic appearance detection device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The terms "first," "second," and the like herein are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first", "a second", etc. may explicitly or implicitly include one or more such feature. In the description of the present application, unless otherwise indicated, the meaning of "a plurality" is two or more.
In addition, the technical features of the embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
Example 1:
In order to solve the problem that existing rule engines and traditional computer vision methods have difficulty handling complex scenes and varying appearances, that object recognition accuracy is limited, and that existing sorting strategies are based on simple rules or experience and lack intelligence and adaptivity, so that the sorting containers used to store boxes of the corresponding sizes may overflow during large-scale sorting, embodiment 1 of the invention provides an automated appearance detection method which, as shown in fig. 1, comprises the following steps:
in step 201, acquiring image data of a logistics package using an image acquisition device; the image capture device may be a camera sensor such as a complementary metal Oxide Semiconductor (Complementary Metal-Oxide-Semiconductor, referred to as CMOS) or a charge coupled device (charge coupled device, referred to as CCD) sensor. The camera acquires high resolution image data of the logistics package through proper lens arrangement and light source adjustment, and in a preferred embodiment, proper image processing technology such as denoising, contrast enhancement and the like can be used to improve the image quality. The image data of the logistics package is actually obtained by obtaining an image of an object contained in the logistics package.
In step 202, image recognition is performed on the image data using an object recognition model to obtain the objects contained in the logistics packaging box. The object recognition model may be a deep learning model such as a convolutional neural network; image recognition yields, for each region of the image, the category label of the object it contains together with a confidence score.
In step 203, the object is extracted from the image data and analyzed, so as to obtain size information of the object.
In step 204, the filling status of each sorting container is detected using a sensor; wherein the detecting of the filling state of each sorting container using the sensor may be: detecting weight information of the sorting containers by using a weight sensor, and comparing the weight information with the maximum bearing weight of the sorting containers to obtain the filling state of the sorting containers; or, scanning the occupied space size in the sorting container by using a laser scanner, and comparing the occupied space size with the space size of the sorting container to obtain the filling state of the sorting container. Wherein the maximum load weight and the size of the space of the sorting container itself are obtained by a person skilled in the art from an analysis of the characteristics of the sorting container.
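The two sensing options above can be sketched as a pair of helper functions. The function names, the idea of reporting the filling state as a fraction of capacity, and the acceptance test are illustrative assumptions, not the patent's implementation:

```python
def fill_state_by_weight(current_weight: float, max_weight: float) -> float:
    """Filling state as the fraction of the container's maximum
    bearing weight already in use (weight-sensor variant)."""
    if max_weight <= 0:
        raise ValueError("max_weight must be positive")
    return min(current_weight / max_weight, 1.0)


def fill_state_by_volume(occupied_volume: float, container_volume: float) -> float:
    """Filling state as the fraction of the container's interior
    space already occupied (laser-scanner variant)."""
    if container_volume <= 0:
        raise ValueError("container_volume must be positive")
    return min(occupied_volume / container_volume, 1.0)


def can_accept(fill_state: float, object_fraction: float) -> bool:
    """A container can still accept an object if the object's share
    of the container fits within the remaining capacity."""
    return fill_state + object_fraction <= 1.0
```

Either variant yields a comparable 0-to-1 filling state, so the downstream sorting strategy does not need to know which sensor produced it.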
In step 205, a sorting strategy for the objects is determined using a preset algorithm based on the size information of the objects and the filling state of each sorting container. The preset algorithm is obtained by a person skilled in the art through empirical analysis; one alternative implementation is to sort each object into a sorting container whose filling state shows it can still hold the object.
The method is suitable for an automatic appearance detection robot, and can determine a sorting strategy by analyzing size information of objects and filling states of sorting containers, so that the possibility of sorting container bin explosion in large-scale sorting is avoided.
In a preferred embodiment, a plurality of image acquisition devices can be used for acquiring image data of the logistics package box at a plurality of different acquisition angles to obtain objects contained in the plurality of image data; the extracting the object from the image data for analysis to obtain the size information of the object, as shown in fig. 2, specifically includes:
in step 301, the consistency of the objects included in the image data is calculated, and whether or not the objects included in the image data are identical is determined based on the consistency. The consistency may be obtained by a plurality of feature analyses of the object, such as an outer contour of the object, feature points, etc.
In step 302, if it is determined that the object is the same object, the same object included in the plurality of image data is matched to obtain a three-dimensional model of the object, and the three-dimensional model is analyzed to obtain size information of the object. The three-dimensional model is obtained by analyzing and synthesizing contour information or shape information of the same object contained in the plurality of image data, and aims to restore the three-dimensional shape and the spatial dimension of the object.
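A heavily simplified way to picture the multi-view size estimation is a sketch under the assumption of two calibrated, orthogonal views sharing a known scale; the patent itself builds a full three-dimensional model from contour or shape information, which this toy function does not attempt:

```python
def size_from_orthogonal_views(front_view_px, side_view_px, mm_per_px):
    """Estimate a bounding box (width, height, depth) in millimetres
    from the pixel extents of an object's bounding box in a front
    view (width, height) and a side view (depth, height), assuming
    both views share the same known scale mm_per_px."""
    front_w, front_h = front_view_px
    side_d, side_h = side_view_px
    height_px = (front_h + side_h) / 2  # two estimates of the same height
    return (front_w * mm_per_px, height_px * mm_per_px, side_d * mm_per_px)
```

Averaging the two height estimates is one crude form of the consistency check described in step 301: if the heights disagree badly, the two views likely show different objects.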
In an alternative embodiment, the object recognition model is a deep learning model; the deep learning model is trained in advance using an image dataset to facilitate image recognition using the trained deep learning model.
In a preferred embodiment, a laser range finder is also arranged at the position of the image acquisition device and is used to measure the distance between the image acquisition device and an object in the logistics packaging box, so that the distance information and the image data of the object can be analyzed together to obtain the size information of the object. Analyzing them together may mean feeding the measured distance into the synthesis of the three-dimensional model in step 302, so that the generated model is more accurate and restores the real form of the object more faithfully.
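The joint use of range and image data can be illustrated with the standard pinhole-camera relation; the focal length in pixels is an assumed calibration parameter, and the patent does not specify this particular formula:

```python
def extent_from_distance(pixel_extent: float, distance_mm: float,
                         focal_length_px: float) -> float:
    """Pinhole-camera estimate: an object spanning pixel_extent
    pixels, at distance_mm from the camera (as measured by the laser
    range finder), has a real-world extent of
    pixel_extent * distance_mm / focal_length_px millimetres."""
    return pixel_extent * distance_mm / focal_length_px
```

For example, with an assumed focal length of 800 px, an object 200 px wide at 1 m range works out to 250 mm wide.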
Considering that in practical use, when sorting on a large scale, there are often multiple types of objects to be sorted, and for convenience of management, similar objects are often contained in the same sorting container, this embodiment provides a preferred implementation manner, namely, determining a sorting strategy of the objects by using a preset algorithm according to size information of the objects and filling states of the sorting containers, as shown in fig. 3, specifically including:
In step 401, each first sorting container used to accommodate objects of the object's category is identified according to that category, and whether there is a first sorting container capable of accommodating the object is judged according to the filling state of each first sorting container and the size information of the object. Which category of objects each sorting container accommodates can be preset by a person skilled in the art.
In step 402, if there is a first sorting container capable of holding the objects, the objects are sorted into the first sorting container capable of holding the objects.
In step 403, if there is no first sorting container capable of accommodating the object, determining a sorting strategy of the object together based on the self-safety coefficient of the object, the influence coefficient of the object on the outside, the self-safety coefficient of the object existing in each second sorting container, and the influence coefficient of the object existing in each second sorting container on the outside; wherein the second sorting container is a sorting container for accommodating other kinds of objects.
The self-safety coefficient of an object can be understood as the loss that would be incurred if the object were damaged: the greater that loss, the higher the object's self-safety coefficient; conversely, the lower the loss on damage, the lower the coefficient. The loss is usually economic; for example, the higher the price of the object, the higher its self-safety coefficient.
The influence coefficient of an object on the outside reflects both how likely the object is to be damaged (by outside conditions or otherwise) and how strongly its damage would affect its surroundings; for example, if an object containing liquid is damaged, the liquid may seep out and affect the other objects in the same sorting container. To make this coefficient comparable with the self-safety coefficient, it is defined so that the more likely the object is to be damaged and the larger the impact of its damage on the outside, the smaller its influence coefficient; the less likely the damage and the smaller the impact, the larger the coefficient. For ease of understanding, the influence coefficient can thus be read as the safety level the object presents to its surroundings. The self-safety coefficient of an object and its influence coefficient on the outside are obtained by a person skilled in the art from an analysis of the object's category.
With this method, objects are stored preferentially in the sorting containers designated for their own category. If an object cannot be sorted into a container of its category, the method weighs the object's own safety, its influence on the outside, the safety of the objects already in each container, and their influence on the outside, and determines the sorting strategy accordingly. This ensures the safety of every object after sorting, reduces the loss caused by damage, and guarantees object safety while still sorting as many objects as possible.
In practical use, the determining a sorting strategy of the object based on the self-safety coefficient of the object, the influence coefficient of the object on the outside, the self-safety coefficient of the existing object in each second sorting container, and the influence coefficient of the existing object in each second sorting container on the outside together, as shown in fig. 4, specifically includes:
In step 501, the highest self-safety coefficient among the objects already present in a second sorting container is taken as the self-safety coefficient of that second sorting container.
In step 502, the lowest influence coefficient on the outside among the objects already present in a second sorting container is taken as the influence coefficient of that second sorting container.
In step 503, each second sorting container is screened with the condition that its influence coefficient is not lower than the self-safety coefficient of the object and its self-safety coefficient is not higher than the object's influence coefficient on the outside; the second sorting containers meeting the condition are the third sorting containers. If a second sorting container holds an object with a low influence coefficient, the objects already in it present a low safety level to their surroundings: they are easily damaged and their damage has a large impact, so placing an object to be sorted with a high self-safety coefficient in that container could expose it to a large loss. Conversely, if the object to be sorted has a low influence coefficient on the outside, it is itself easily damaged and has a large impact when damaged, so an existing object with a high self-safety coefficient in the container could suffer a loss. This embodiment therefore screens in both directions at once, protecting both the object to be sorted and the objects already in the sorting container.
In step 504, it is determined whether there are third sorting containers capable of accommodating the objects based on the filling state of each third sorting container and the size information of the objects.
In step 505, if there is a third sorting container capable of holding the objects, the objects are sorted into the third sorting container capable of holding the objects.
For example, as shown in fig. 5, each small square represents an object, the lower left corner of the small square represents the self-safety coefficient of the object, and the upper right corner of the small square represents the influence coefficient of the object to the outside.
In fig. 5 there are four sorting containers, container_1 through container_4. The highest self-safety coefficient among the objects in container_1 is 3, and the lowest influence coefficient of its objects on the outside is 2; for container_2 these values are 2 and 3; for container_3, 3 and 1; and for container_4, 4 and 3. The object to be sorted has a self-safety coefficient of 3 and an influence coefficient on the outside of 3. Only container_2 satisfies both conditions: its influence coefficient (3 in fig. 5) is not lower than the object's self-safety coefficient (3), and its self-safety coefficient (2) is not higher than the object's influence coefficient on the outside (3). Container_2 is therefore the third sorting container.
In practical use, when there are a plurality of first sorting containers capable of containing the objects, the sorting of the objects into the first sorting containers capable of containing the objects specifically comprises:
calculating the accommodation coefficient of each first sorting container capable of accommodating the objects, and sorting the objects into the first sorting container with the highest accommodation coefficient; wherein the accommodation coefficient is computed from the preset coefficients k1 and k2, the size information V_o of the object, the filling state V_c of the first sorting container, and the distance Dis(o, c) between the object and the first sorting container.
Similarly, when there are a plurality of third sorting containers capable of holding the objects, the sorting of the objects into the third sorting containers capable of holding the objects specifically includes:
calculating the accommodation coefficient of each third sorting container capable of accommodating the objects, and sorting the objects into the third sorting container with the highest accommodation coefficient; wherein the accommodation coefficient is computed in the same way from the preset coefficients k1 and k2, the size information V_o of the object, the filling state V_c of the third sorting container, and the distance Dis(o, c) between the object and the third sorting container.
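A minimal sketch of the selection step. The exact formula for the accommodation coefficient is given in the specification; the form used here, k1·(V_c − V_o) + k2/Dis(o, c), is a hypothetical stand-in that uses the same variables (a fill-fit term plus a proximity term) and is not the patent's actual equation.

```python
def accommodation_coefficient(v_o, v_c, dist, k1=1.0, k2=1.0):
    """Hypothetical accommodation coefficient: rewards remaining capacity
    (v_c taken here as free volume) and penalizes distance.
    The real formula in the specification uses the same variables."""
    return k1 * (v_c - v_o) + k2 / (dist + 1e-6)

def pick_container(obj_volume, candidates):
    """candidates: list of (name, free_volume, distance);
    all candidates are assumed able to hold the object."""
    return max(
        candidates,
        key=lambda c: accommodation_coefficient(obj_volume, c[1], c[2]),
    )[0]

candidates = [("A", 10.0, 2.0), ("B", 12.0, 5.0), ("C", 10.0, 1.0)]
print(pick_container(4.0, candidates))  # → B
```

Whatever the concrete formula, the selection logic is the same: score every feasible container and take the argmax.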
Embodiment 2:
On the basis of the automated appearance detection method described in embodiment 1, this embodiment further provides an automated appearance detection device for implementing that method. The device includes an image acquisition module, an object identification module, a dimension measurement module, a sorting container management module, and a control and decision module.
The image acquisition module is used for acquiring image data of the logistics packaging box by using image acquisition equipment. The object identification module is used for performing image recognition on the image data by using an object recognition model to obtain the objects contained in the logistics packaging box. The dimension measurement module is used for extracting the objects from the image data for analysis to obtain the size information of the objects. The sorting container management module is used for detecting the filling state of each sorting container by using sensors. The control and decision module is used for determining the sorting strategy of the objects by using a preset algorithm according to the size information of the objects and the filling state of each sorting container.
In practical use, as shown in fig. 6, the apparatus further includes a human-machine interface module and a data storage and analysis module, specifically:
The image acquisition module employs a high resolution camera sensor, such as a CMOS or CCD sensor. The camera obtains high-resolution image data of the logistics packaging box through proper lens setting and light source adjustment. Image quality is improved using suitable image processing techniques such as denoising, contrast enhancement, etc. The high resolution image provides more detailed information, thereby improving the accuracy of object recognition and dimensional measurement.
The object recognition module accurately recognizes objects based on a trained deep learning model, such as a convolutional neural network. Model training and inference are performed using an open-source deep learning framework, such as TensorFlow or PyTorch. During training, the model is iteratively trained on a large-scale image data set to improve the accuracy and robustness of object identification. The object recognition module outputs the class labels and confidence levels of objects, and model performance is evaluated using standard object recognition metrics such as precision and recall.
The dimension measurement module extracts the size information of the object using computer vision techniques, based on the object recognition result. For example, an edge detection algorithm such as the Canny algorithm finds the boundary of the object, from which the length, width, and height are calculated. Shape matching algorithms, such as the Hough transform, are also used to fit the object's shape and obtain more accurate dimensional measurements. In addition, non-contact dimension measurement is performed in combination with a laser rangefinder or a depth sensor, such as a structured-light or time-of-flight (TOF) sensor, to improve measurement accuracy.
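A simplified stand-in for the boundary-based measurement described above: a binary threshold replaces Canny edge detection, and a bounding box replaces contour fitting. The pixel-to-millimeter scale is a hypothetical calibration value.

```python
import numpy as np

def measure_bounding_box(image, threshold=128, mm_per_pixel=0.5):
    """Return (width_mm, height_mm) of the object's bounding box.

    A real pipeline would use Canny edges and contour extraction;
    here a binary threshold locates the object's pixels."""
    mask = image > threshold
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return (0.0, 0.0)
    width_px = xs.max() - xs.min() + 1
    height_px = ys.max() - ys.min() + 1
    return (float(width_px * mm_per_pixel), float(height_px * mm_per_pixel))

# Synthetic 100x100 image containing a bright 40 px wide, 20 px tall rectangle
img = np.zeros((100, 100), dtype=np.uint8)
img[30:50, 10:50] = 255
print(measure_bounding_box(img))  # → (20.0, 10.0)
```

The `mm_per_pixel` factor is exactly what the calibration step in a later paragraph determines, by imaging objects of known size.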
The sorting container management module is used for monitoring the filling state, weight and size requirements of the sorting containers in real time, and adjusting the container positions and selecting proper container types through intelligent algorithms. A load cell is used to measure the weight of the container and determine if the container is full by a weight threshold. For size requirements, minimum and maximum size constraints for the container are set, and the appropriate container type is automatically selected based on the results of the object identification and size measurement. The intelligent container management avoids overfilling or waste of the container, improves logistics efficiency and saves cost.
The control and decision module realizes intelligent sorting decisions based on a rule engine and machine learning algorithms, according to the object identification and size measurement results. The rule engine defines the rules and algorithms of the system, such as assigning objects to their respective sorting areas according to object category and size. In addition, machine learning algorithms, such as decision trees and support vector machines, are used to learn and optimize the sorting decisions. Through continuous training and optimization, the accuracy and efficiency of sorting decisions are improved.
The human-machine interface module provides an intuitive operation interface and real-time information display through a touch screen or voice interaction, making it convenient for operators to monitor and manage the sorting process. The operator interface displays the results of object identification and size measurement, the status and location of the containers, and the execution of sorting decisions. Operators perform parameter setting, exception handling, and similar operations through the touch screen or voice commands, realizing real-time monitoring and management of the sorting process.
The data storage and analysis module stores the acquired image data, object identification results, and dimension measurement data in a database, and provides data analysis tools for performance evaluation, system optimization, and improvement. By storing and analyzing the data, the system's behavior and performance can be understood, and potential problems and improvements discovered. Data analysis tools such as data mining and machine learning analyze large amounts of data to find hidden association rules, further optimizing system performance and algorithm accuracy.
According to the invention, the combination of the object identification module and the dimension measurement module allows objects to be accurately identified and their size information measured, avoiding the misjudgments and measurement errors of manual operation and improving detection accuracy and consistency. Through the sorting container management module and the control and decision module, the filling state, weight, and size requirements of containers can be monitored in real time; container positions are adjusted and suitable container types selected automatically according to the object identification and size measurement results, avoiding overfilling or waste of containers, improving sorting efficiency, and reducing logistics processing time and cost. By integrating the modules, the system realizes an automated appearance detection and sorting process, reducing manual operation and errors. This automated processing improves working efficiency, reduces labor cost, and provides a more reliable and efficient solution for the logistics industry.
As one embodiment of the invention, the image acquisition module comprises a plurality of camera sensors, and the accuracy of identification and measurement is improved by synchronously acquiring images of a plurality of angles.
In particular, the image acquisition module is mounted at different angles and positions using a plurality of camera sensors, e.g., two or more cameras. These cameras start image acquisition simultaneously by a synchronous trigger mechanism and have the same exposure time and frame rate to maintain consistency of the images. Each camera is provided with a suitable lens and light source to ensure image quality and detail capture of the object at different angles.
During object recognition and dimensional measurement, the multiple angle images provide more information and viewing angle, thereby enhancing the accuracy of recognition and measurement. For example, the appearance and the characteristics of the object are different due to the change of angles, and the accuracy of object identification is improved by collecting images of a plurality of angles and integrating information of different angles. Further, for the size measurement, more viewing angles and boundary information are obtained through images of a plurality of angles, thereby improving the accuracy of the size measurement.
The use of multiple camera sensors provides a stereoscopic effect, similar to binocular vision of the human eye. Depth information and a three-dimensional structure of an object are acquired by processing and analyzing images of a plurality of angles. This principle of stereo vision is used for object recognition and dimensional measurement.
In the aspect of object identification, the characteristics of the shape, texture, color and the like of an object are determined by comparing images of different angles. For example, by calculating differences and consistency between images, the shape profile of an object is detected and different object categories are distinguished. This improves the robustness and accuracy of object recognition.
In terms of dimensional measurement, a three-dimensional model or point cloud representation of an object is created by registering and matching images of multiple angles. The dimensions of the object are then measured by calculating geometrical properties of the model or point cloud, such as boundaries, volumes, etc. The dimensional measurement method of the stereoscopic vision provides more accurate measurement results and avoids measurement errors under a single visual angle.
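The stereo principle behind the multi-camera measurement above can be illustrated with the standard triangulation relation Z = f·B/d (focal length f in pixels, baseline B, disparity d between the two images). The camera parameters and pixel positions below are illustrative values, not from the patent.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classical stereo triangulation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A feature seen at x=420 px in the left image and x=400 px in the right
disparity = 420 - 400                 # 20 px
z = depth_from_disparity(800.0, 0.10, disparity)
print(f"{z:.2f} m")                   # → 4.00 m

# With depth known, a pixel extent converts to metric size: s = n_px * Z / f
object_width = 60 * z / 800.0         # an object 60 px wide at that depth
print(f"{object_width:.3f} m")        # → 0.300 m
```

This is why multiple viewpoints improve sizing: disparity recovers the missing depth that a single image cannot provide.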
In summary, by adopting a plurality of camera sensors to perform image acquisition, the accuracy of object identification and dimension measurement can be improved, and the comprehensive information understanding of objects can be enhanced.
As one embodiment of the invention, the object recognition module improves the recognition capability of different objects and scenes by pre-training on a large-scale data set based on the transfer learning of the deep learning model.
Specifically, the object recognition module uses a deep learning model, such as a Convolutional Neural Network (CNN) or a pre-trained image classification model. These models are pre-trained on large-scale image datasets, such as ImageNet datasets, to learn generic object features and visual representations.
In practical applications, the object recognition module applies the pre-trained model to specific scenes and tasks by means of transfer learning. Specifically, the weight parameters of the pre-training model are imported into the object recognition module, and fine adjustment or adjustment is performed according to actual requirements. The fine tuning process typically involves training on a small scale specific data set to accommodate a specific object class and scene.
In order to improve the recognition capability of different objects and scenes, data enhancement techniques such as random clipping, rotation, scaling, etc. are also employed to generate more diversified and rich training samples. This increases the model's ability to adapt to changes and diversity of objects.
The deep learning model learns advanced feature representations and abstract concepts from raw image data through a combination of multi-layer neural networks. The pre-trained model is trained on a large-scale dataset, enabling learning of generic object features and visual representations. These features include edges, textures, shapes, etc., which capture different visual properties of the object.
When the pre-trained model is applied to a specific object recognition task through transfer learning, the model rapidly adapts to new object types and scenes by utilizing the learned general features. The fine tuning process enables the model to better adapt to the requirements of specific tasks by adjusting parameters of the model, and accuracy and robustness of object identification are improved.
In summary, the transfer learning of the deep learning model effectively improves the performance of the object recognition module, reduces the data requirement, adapts to the recognition requirements of different objects and scenes, and brings remarkable beneficial effects for the control system of the automatic appearance detection robot.
As an embodiment of the present invention, the dimension measuring module combines with a laser rangefinder or a depth sensor to achieve accurate dimension measurement.
In particular, the dimension measurement module uses a laser rangefinder or a depth sensor as the measurement device. These devices measure the distance between an object and a sensor by emitting a laser beam or infrared light and using the principle of reflection or time difference of the light.
In practical applications, the size measurement module uses a laser rangefinder or depth sensor in combination with the image acquisition module. The image acquisition module acquires an image of the object, and the laser rangefinder or depth sensor provides distance information between the object and the sensor. By correlating the image with the distance information, accurate measurement of the object size is achieved.
During the measurement, the accuracy of the measurement is ensured by calibration and calibration. The calibration process determines parameters of the laser rangefinder or depth sensor, such as angle of view, distortion, etc., by measuring objects of known dimensions. The calibration process corrects and adjusts the measurement result and the actual size by comparing, so as to improve the measurement accuracy.
A laser rangefinder or depth sensor uses the propagation velocity of light to measure the distance between an object and the sensor. Laser rangefinders typically calculate distance by emitting a laser beam and measuring the time required for the laser beam from emission to reception. The depth sensor acquires distance information by measuring the reflection time of light or the projection mode of light using a technique such as infrared light or structured light.
And combining an image acquisition module to correlate the distance information with the image of the object. The actual size of the object is calculated by identifying the feature points or boundaries of the object in the image and combining the distance information. This measurement method combining the image and distance information provides a more accurate dimensional measurement.
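A toy calculation of the two relations just described: the time-of-flight distance (d = c·t/2, since the pulse travels out and back) and the pinhole conversion of a pixel extent to a metric size. All numbers are illustrative.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_s):
    """Distance from a time-of-flight reading: the pulse travels
    to the object and back, so halve the total path length."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m
z = tof_distance(10e-9)
print(f"{z:.3f} m")  # → 1.499 m

# Pinhole relation: an object spanning n pixels at depth z has size n * z / f
focal_px = 800.0                # hypothetical calibrated focal length, in pixels
width_m = 120 * z / focal_px    # an object 120 px wide at that depth
print(f"{width_m:.3f} m")
```

The nanosecond-scale timing is why TOF sensors need dedicated hardware: a 1 cm range error corresponds to only ~67 ps of timing error.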
In summary, the combination of the laser range finder or the depth sensor for dimensional measurement provides accurate measurement results, has the advantages of rapidness, strong adaptability, non-contact measurement and the like, and brings remarkable beneficial effects for the control system of the automatic appearance detection robot.
As one embodiment of the invention, the sorting container management module intelligently selects a suitable container type according to the weight attribute of the object, avoiding container damage and bin overflow.
Specifically, the sorting container management module obtains the weight information of the objects through a weight sensor or weighing device. These sensors or devices are installed on the sorting robotic arm or the conveyor line to monitor the weight of objects in real time.
In practical application, the sorting container management module intelligently selects a proper container type according to the weight attribute of the object. For example, the maximum load weights for the different container types are preset and these information are entered into the sorting container management module. When the sorting robot sorts objects into containers, the sorting container management module will select the appropriate container type based on the weight of the object compared to the maximum load weight of the container.
If the weight of the object exceeds the maximum load weight of the container, the sorting container management module automatically selects another container capable of carrying the object's weight, avoiding the risk of container damage and bin overflow. At the same time, the module records and counts the total weight of the objects in each container for subsequent management and processing.
The sorting container management module obtains the weight information of the object through a weight sensor or a weighing device and compares the weight information with the maximum bearing weight of the container. Based on the comparison, the module intelligently selects the appropriate container type to ensure that the object is safely placed in a container capable of carrying its weight.
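A minimal sketch of the weight check described above. The container types and load limits are illustrative; a "smallest feasible type" tie-break is one reasonable policy for avoiding waste, not the patent's mandated rule.

```python
def select_container(object_weight_kg, container_types):
    """Pick the smallest-capacity container type that can carry the object.

    container_types: dict mapping type name -> maximum load weight (kg).
    Returns None when no type can safely carry the object.
    """
    feasible = [
        (max_load, name)
        for name, max_load in container_types.items()
        if object_weight_kg <= max_load
    ]
    return min(feasible)[1] if feasible else None

types = {"small": 5.0, "medium": 20.0, "large": 50.0}
print(select_container(12.0, types))  # → medium
print(select_container(80.0, types))  # → None
```

The `None` branch corresponds to the overweight case in the text, where the module must divert the object rather than risk container damage.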
In summary, intelligent sorting container management based on the object weight attribute avoids container damage and bin overflow accidents, improves working efficiency, provides data statistics and management functions, and brings remarkable beneficial effects to the control system of the automated appearance detection robot.
As an implementation mode of the invention, the control and decision module is based on a reinforcement learning algorithm, and the sorting efficiency and accuracy are improved through continuous learning and optimization.
Specifically, the control and decision module uses reinforcement learning algorithms to make decisions and controls for sorting tasks. Reinforcement learning is a machine learning method that learns how to make optimal decisions to obtain maximum rewards through interactions of agents (e.g., robots) with the environment.
In practical applications, the control and decision module models the sorting task as a reinforcement learning problem. The agent observes the environmental conditions, such as properties of the objects to be sorted, location information, etc., and selects appropriate actions, such as sorting the objects into designated containers, based on the current conditions. After each execution of the action, the smart agent receives feedback from the environment, including the reward signal and the next state observations. By constantly interacting with the environment and feeding back in accordance with the reward signal, the agent learns how to make an optimal decision through reinforcement learning algorithms.
The control and decision module learns the optimal sorting strategy through interaction of the agent with the environment based on the reinforcement learning algorithm. The module uses reinforcement learning methods such as value functions, strategy gradients, etc. to make decisions and control.
In the sorting task, the agent selects an action according to the current environmental state and performs it. After performing the action, the agent receives feedback from the environment, including the reward signal and the observation of the next state. The control and decision module uses this feedback to update the policy and value functions, improving the performance of decisions and controls. By constantly interacting with the environment and learning, the agent gradually improves sorting efficiency and accuracy.
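The loop just described can be sketched as a toy tabular Q-learning problem: one state, two candidate containers, and a reward of +1 when the (by construction) correct container is chosen. All values are illustrative; a real sorting agent would have a far richer state and action space.

```python
import random

def train_sorting_policy(episodes=500, alpha=0.1, epsilon=0.2, seed=0):
    """Tabular Q-learning for a one-step task: choose container 0 or 1.
    Container 1 is (by construction) the correct bin and yields reward 1."""
    rng = random.Random(seed)
    q = [0.0, 0.0]  # one Q-value per action; single-state problem
    for _ in range(episodes):
        # epsilon-greedy action selection: explore sometimes, else exploit
        if rng.random() < epsilon:
            a = rng.randrange(2)
        else:
            a = max(range(2), key=lambda i: q[i])
        reward = 1.0 if a == 1 else 0.0
        # one-step episode: no successor state, so the target is the reward
        q[a] += alpha * (reward - q[a])
    return q

q = train_sorting_policy()
print(max(range(2), key=lambda i: q[i]))  # → 1 (the learned greedy action)
```

The update `q[a] += alpha * (reward - q[a])` is the value-function step; adding a discounted next-state term turns it into the full Q-learning rule for multi-step tasks.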
In conclusion, the control and decision module based on the reinforcement learning algorithm improves sorting efficiency and accuracy through continuous learning and optimization, has the characteristics of autonomous learning capability and adaptation to complex environments, and brings remarkable beneficial effects for the control system of the automatic appearance detection robot.
As one implementation mode of the invention, the human-computer interface module supports remote monitoring and operation, and distributed management and data sharing are realized through cloud service.
Specifically, the human-machine interface module provides a user-friendly interface for remotely monitoring and operating the sorting system. Through the interface, the user views information such as the state, data, video images and the like of the sorting system in real time, and remotely operates and controls the sorting system.
In practical application, the human-computer interface module is connected with the cloud service to realize distributed management and data sharing. Various parts of the sorting system (such as sorting robots, sensors, controllers, etc.) upload data to the cloud service for storage and processing. And the user accesses the cloud service through the human-computer interface module to acquire real-time data and state information.
Meanwhile, the man-machine interface module also supports remote operation and control of the sorting system. The user sends the instruction to the cloud service through the interface, and the cloud service transmits the instruction to corresponding equipment in the sorting system for execution. Thus, the user can remotely monitor and control the sorting system without directly contacting the sorting system.
The human-computer interface module is connected with the cloud service, so that functions of remote monitoring and operation of the sorting system are realized. Each device in the sorting system uploads the data to the cloud service, and the cloud service stores and processes the data and provides the data for a user to access through a human-computer interface.
And the human-computer interface module acquires information such as real-time state, data, video images and the like of the sorting system through cloud service. This information is presented to the user via an interface to allow the user to remotely monitor and understand the operation of the sorting system.
Meanwhile, the user sends the instruction to the cloud service through the human-computer interface module, and the cloud service transmits the instruction to corresponding equipment in the sorting system for execution. In this way, the user remotely operates and controls the sorting system, for example, to start or stop the sorting process, to adjust parameter settings, etc.
In summary, the human-computer interface module based on the cloud service supports remote monitoring and operation, and achieves the functions of distributed management and data sharing. The remote monitoring system can bring convenience for remote monitoring and operation, flexibility for distributed management and advantages of data sharing and analysis, and has obvious beneficial effects for a control system of an automatic appearance detection robot.
As one embodiment of the invention, the data storage and analysis module is used for mining implicit association and rules through big data analysis technology and providing predictive analysis and intelligent decision support.
Specifically, the data storage and analysis module collects, stores and processes a large amount of data generated by the sorting system, and performs data mining and analysis by using a large data analysis technology. Through deep mining and analysis of the data, association and rules are revealed, and predictive analysis and intelligent decision support are provided.
In practice, the data storage and analysis module processes and analyzes the data generated by the sorting system using various big data analysis techniques, such as machine learning, data mining, and statistical analysis. Through these techniques, features are extracted from the data, patterns are discovered, models are built, and predictions and decision support are provided.
The data storage and analysis module uses big data analysis techniques to mine and analyze the data generated by the sorting system. By collecting, storing, and processing large amounts of data and applying methods such as machine learning and data mining, the module reveals the associations and rules in the data, thereby providing predictive analysis and intelligent decision support.
The workflow of the data storage and analysis module comprises the steps of data collection and storage, data cleaning and preprocessing, feature extraction and selection, model establishment and training, prediction analysis, intelligent decision making and the like. Through a combination of these steps, the module extracts valuable information from the data for prediction and decision support.
In summary, the data storage and analysis module based on the big data analysis technology provides predictive analysis and intelligent decision support by mining implicit associations and rules. The method can bring the capability of predictive analysis and intelligent decision, the insight of finding association and rules and the advantages of data driving optimization, and has obvious beneficial effects for the control system of the automatic appearance detection robot.
The automated appearance detection method described in embodiment 1 is applicable to this embodiment.
The automatic appearance detection method performed by using the automatic appearance detection device according to the present embodiment, as shown in fig. 7, specifically includes:
in step 601, an image acquisition module is configured and started, specifically: the image acquisition equipment on the robot is configured, and the module is started to acquire image data of the logistics package. This is achieved by a camera, sensor or other suitable device.
In step 602, object recognition is performed using the trained deep learning model, specifically: the acquired image data is input into a trained deep learning model, and objects in the image are identified. The model uses deep learning algorithms in the computer vision field, such as convolutional neural networks, target detection algorithms, and the like. The output results include class labels and confidence levels for the objects.
In step 603, size information of the object is extracted, specifically: the size information of the identified object is extracted based on various computer vision techniques. This includes using image processing and computational geometry techniques such as edge detection, contour extraction, shape analysis, etc., to obtain features such as size, area, perimeter, etc. of the object.
In step 604, the fill status, weight and size requirements of the sorting containers are monitored in real time, specifically: the fill status, weight and size requirements of the sorting containers are monitored in real time by sensors, weighing sensors and other related equipment. This information is used to determine the location and type of container to meet sorting requirements.
In step 605, a sorting decision is made using an intelligent algorithm, specifically: and according to the object identification and the size measurement result, performing sorting decision by using an intelligent algorithm. This includes optimization algorithms, rule engines, machine learning algorithms, etc. to determine the best sorting strategy for high efficiency and accuracy.
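The rule-engine part of this decision step can be sketched as an ordered list of predicates where the first match wins. The categories, size thresholds, and area names are illustrative assumptions, not values from the patent.

```python
def decide_sorting_area(category, volume_l):
    """Apply ordered rules; the first matching rule wins, with a
    default fallback area when nothing matches."""
    rules = [
        (lambda c, v: c == "fragile",   "padded_area"),
        (lambda c, v: v > 50.0,         "oversize_area"),
        (lambda c, v: c == "document",  "flats_area"),
    ]
    for predicate, area in rules:
        if predicate(category, volume_l):
            return area
    return "standard_area"

print(decide_sorting_area("fragile", 10.0))  # → padded_area
print(decide_sorting_area("box", 80.0))      # → oversize_area
print(decide_sorting_area("box", 8.0))       # → standard_area
```

Rule order matters: a fragile oversize object is padded first because its rule appears earlier, which is the kind of policy choice the step 605 optimization would tune.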
In step 606, an intuitive human-machine interface is provided, specifically: an intuitive human-computer interface is provided for an operator, and information is displayed and operation instructions are received in real time through touch screens or voice interaction. This interface displays object recognition results, size information, sorting status, etc., while allowing the operator to control and interact with the robot.
In step 607, the collected data is stored, analyzed and optimized, specifically: and storing the acquired image data, object identification results, size information and other related data. These data are used for subsequent analysis and optimization, such as evaluation of system performance, anomaly detection, decision optimization, etc.
Through the steps, the automatic appearance detection method realizes automatic identification, size measurement and sorting decision of the logistics packaging boxes. The method combines the technologies of image processing, deep learning, intelligent algorithm, man-machine interaction and the like to improve sorting efficiency and accuracy, and provides real-time information display and operation instruction receiving capability. The collected data is stored and analyzed for further optimization of system performance and decision strategies.
As one embodiment of the invention, object recognition uses a multi-level neural network structure, thereby improving recognition capability for complex scenes and under changing illumination conditions.
In particular, object recognition uses a multi-level neural network structure, such as a deep Convolutional Neural Network (CNN). The network structure consists of a plurality of convolution layers, a pooling layer and a full connection layer, and features are effectively extracted from an input image, and object classification and recognition are performed.
In practical application, the multi-level neural network structure for object recognition has the following characteristics:
multi-level feature extraction: the multi-level neural network extracts multi-scale and multi-level features from the input image through layer-by-layer convolution and pooling operations. This allows the network to capture different levels of object features, from low-level edges and textures to high-level shapes and structures, improving recognition.
Context information utilization: the multi-level neural network utilizes local and global context information in processing the image. Through the convolution and pooling operations, the network obtains contextual information around the object at each level, thereby better understanding the background and environment of the object. This is of great importance for handling complex scenes and object recognition under varying lighting conditions.
Nonlinear mapping capability: the multi-level neural network maps the input signal to the high-dimensional feature space through a nonlinear activation function (such as a ReLU), so that the modeling capability is stronger. This allows the network to better adapt to complex object shapes, textures and variations, improving the adaptability to complex scenes and lighting conditions.
Parameter sharing and weight sharing: the convolution layers in the multi-level neural network use the modes of parameter sharing and weight sharing, so that the number of parameters and the computational complexity of the network are reduced. This makes the network lighter and more efficient, suitable for object recognition in resource-limited environments.
Pretraining and fine tuning: the multi-level neural network improves recognition performance through pre-training and fine tuning. Pretraining is performed on a large-scale dataset to learn generic image features. The network is then better adapted to the specific object recognition task by fine-tuning training on the data set of the specific task.
Through these characteristics, the multi-level neural network structure improves object recognition under complex scenes and changing illumination conditions. It extracts multi-level features from the input image and uses context information for classification and identification. Parameter sharing keeps the network lightweight and efficient, and pretraining and fine-tuning further improve its performance and adaptability.
In the automatic appearance detection method, using a multi-level neural network structure for object recognition effectively improves object recognition in complex scenes and under changing illumination. This helps to improve the accuracy and robustness of the sorting system, enabling more efficient sorting and handling of objects.
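The layer-by-layer convolution and pooling described above can be illustrated with a miniature pure-Python sketch (the input image, the edge kernel and the two-level depth are hypothetical toys; a production system would use a deep-learning framework):

```python
# Toy multi-level feature extraction: each level applies a convolution
# (feature detection) with a ReLU, then 2x2 max pooling (downsampling),
# so deeper levels see progressively larger regions of the input.

def conv2d(img, kernel):
    """Valid 2D convolution (no padding, stride 1) followed by ReLU."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            s = sum(img[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(max(0.0, s))  # ReLU nonlinearity
        out.append(row)
    return out

def max_pool2x2(img):
    """2x2 max pooling with stride 2."""
    return [[max(img[i][j], img[i][j + 1], img[i + 1][j], img[i + 1][j + 1])
             for j in range(0, len(img[0]) - 1, 2)]
            for i in range(0, len(img) - 1, 2)]

# 12x12 input with a vertical brightness edge at column 6.
image = [[1.0 if j >= 6 else 0.0 for j in range(12)] for _ in range(12)]

vertical_edge = [[-1.0, 0.0, 1.0]] * 3  # low-level feature: vertical edges

level1 = max_pool2x2(conv2d(image, vertical_edge))   # 12x12 -> 5x5
level2 = max_pool2x2(conv2d(level1, vertical_edge))  # 5x5  -> 1x1
# level1 localizes the edge; level2 summarizes it over a wider region.
```

Each pooling step halves the resolution, so the second level responds to the same edge over a larger receptive field, which is exactly the multi-scale behavior described above.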
Example 3:
fig. 8 is a schematic diagram of an automated appearance inspection device according to an embodiment of the invention. The automated appearance inspection device of the present embodiment includes one or more processors 21 and a memory 22. In fig. 8, a processor 21 is taken as an example.
The processor 21 and the memory 22 may be connected by a bus or by other means; a bus connection is taken as an example in fig. 8.
The memory 22, as a non-volatile computer-readable storage medium, is used for storing non-volatile software programs and non-volatile computer-executable programs, such as the program implementing the automated appearance detection method of embodiment 1. The processor 21 performs the automated appearance detection method by running the non-volatile software programs and instructions stored in the memory 22.
The memory 22 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, memory 22 may optionally include memory located remotely from processor 21, which may be connected to processor 21 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Program instructions/modules are stored in the memory 22; when executed by the one or more processors 21, they perform the automated appearance detection method of embodiment 1 described above.
It should be noted that, because the information interaction and execution processes between the modules and units in the above-mentioned device and system are based on the same concept as the method embodiments of the present invention, the specific content may be found in the description of the method embodiments and will not be repeated here.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the embodiments may be implemented by a program that instructs the associated hardware; the program may be stored on a computer-readable storage medium, and the storage medium may include: read-only memory (ROM), random access memory (RAM), a magnetic disk or an optical disk, and the like.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (10)

1. An automated appearance inspection method comprising:
acquiring image data of the logistics packaging box by using an image acquisition device;
performing image recognition on the image data by using an object recognition model to obtain objects contained in the logistics packaging box;
extracting the object from the image data for analysis to obtain the size information of the object;
detecting a filling state of each sorting container using a sensor;
and determining the sorting strategy of the objects by using a preset algorithm according to the size information of the objects and the filling state of each sorting container.
2. The automated appearance inspection method of claim 1, wherein image data of the logistics packaging box are acquired at a plurality of different acquisition angles using a plurality of image acquisition devices, so as to obtain the objects contained in each of the plurality of image data;
the step of extracting the object from the image data for analysis to obtain the size information of the object specifically comprises: calculating the consistency of the objects contained in the image data; judging, according to the consistency, whether the objects contained in the image data are the same object; and if so, matching the same object across the image data to obtain a three-dimensional model of the object, and analyzing the three-dimensional model to obtain the size information of the object.
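Outside the claim language, the consistency judgment of claim 2 can be sketched as a similarity comparison between per-view feature descriptors (the descriptor vectors and the 0.95 threshold are assumptions; a real system would additionally use epipolar geometry when matching views into the three-dimensional model):

```python
import math

def consistency(desc_a, desc_b):
    """Cosine similarity between two per-view feature descriptors."""
    dot = sum(a * b for a, b in zip(desc_a, desc_b))
    norm_a = math.sqrt(sum(a * a for a in desc_a))
    norm_b = math.sqrt(sum(b * b for b in desc_b))
    return dot / (norm_a * norm_b)

def same_object(desc_a, desc_b, threshold=0.95):
    """Judge two detections as the same object when their descriptors
    agree closely; the 0.95 cutoff is an assumed value."""
    return consistency(desc_a, desc_b) >= threshold
```

Detections judged to be the same object across views would then be matched to build the three-dimensional model from which the size information is measured.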
3. The automated appearance inspection method of claim 1, wherein the object recognition model is a deep learning model;
the deep learning model is trained in advance using an image dataset to facilitate image recognition using the trained deep learning model.
4. The automated appearance inspection method of claim 1, wherein a laser rangefinder is also mounted at the location of the image acquisition device;
and measuring, by using the laser rangefinder, the distance information between the image acquisition device and the object in the logistics packaging box, so as to obtain the size information of the object from the distance information together with the image data of the object.
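For illustration, combining the laser distance with image data as in claim 4 can follow the standard pinhole-camera relation, real extent = pixel extent × distance / focal length in pixels (all numbers below are hypothetical):

```python
def object_size_mm(pixel_extent, distance_mm, focal_px):
    """Pinhole-camera relation: real extent = pixel extent * distance
    / focal length expressed in pixels."""
    return pixel_extent * distance_mm / focal_px

# Hypothetical numbers: a box edge spans 300 px in the image, the laser
# rangefinder reads 1500 mm, and the camera focal length is 1000 px.
width_mm = object_size_mm(300, 1500.0, 1000.0)  # 450 mm
```

The same relation applied to each visible edge yields the object's size information without a full three-dimensional reconstruction.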
5. The automated appearance inspection method of claim 1, wherein the detecting the filling status of each sorting container using the sensor is specifically:
detecting weight information of the sorting container by using a weight sensor, and comparing the weight information with the maximum bearing weight of the sorting container to obtain the filling state of the sorting container; or,
and scanning the occupied space size in the sorting container by using a laser scanner, and comparing the occupied space size with the space size of the sorting container to obtain the filling state of the sorting container.
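Both sensing routes of claim 5 reduce to comparing a measured quantity against the container's capacity; a minimal sketch (the 0.9 fullness threshold and the units are assumptions, not specified by the claim):

```python
def filling_state_by_weight(weight_kg, max_weight_kg):
    """Weight-sensor route: ratio of measured weight to maximum bearing weight."""
    return weight_kg / max_weight_kg

def filling_state_by_volume(occupied_l, capacity_l):
    """Laser-scanner route: ratio of occupied space to container space."""
    return occupied_l / capacity_l

def is_full(state, threshold=0.9):
    """Treat the container as full above an assumed 90% cutoff."""
    return state >= threshold
```

Either ratio can then feed the sorting strategy of claim 1 as the container's filling state.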
6. The automated appearance inspection method of claim 1, wherein the determining the sorting strategy of the objects using a preset algorithm according to the size information of the objects and the filling state of each sorting container, specifically comprises:
identifying, according to the category of the object, each first sorting container for accommodating objects of that category, and judging whether a first sorting container capable of accommodating the object exists according to the filling state of each first sorting container and the size information of the object;
if a first sorting container capable of accommodating the object exists, sorting the object into that first sorting container;
if no first sorting container capable of accommodating the object exists, determining the sorting strategy of the object jointly based on the self-safety coefficient of the object, the influence coefficient of the object on the outside, the self-safety coefficients of the objects existing in each second sorting container, and the influence coefficients of the objects existing in each second sorting container on the outside; wherein a second sorting container is a sorting container for accommodating other categories of objects.
7. The automated appearance inspection method of claim 6, wherein the determining the sorting strategy for the object based on the self-safety coefficient of the object, the influence coefficient of the object on the outside, the self-safety coefficient of the object existing in each second sorting container, and the influence coefficient of the object existing in each second sorting container on the outside includes:
taking the highest self-safety coefficient among the objects already in a second sorting container as the self-safety coefficient of that second sorting container;
taking the lowest influence coefficient on the outside among the objects already in a second sorting container as the influence coefficient of that second sorting container on the outside;
screening the second sorting containers with the screening condition that the self-safety coefficient of the second sorting container is not lower than the influence coefficient of the object on the outside and the influence coefficient of the second sorting container on the outside is not higher than the self-safety coefficient of the object, to obtain third sorting containers meeting the screening condition;
judging whether a third sorting container capable of accommodating the object exists according to the filling state of each third sorting container and the size information of the object;
if a third sorting container capable of accommodating the object exists, sorting the object into that third sorting container.
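The screening of claim 7 can be sketched as follows, under one reading of the claim in which a second sorting container qualifies when it can tolerate the object's influence and the object can tolerate the container's (the field names and this compatibility reading are assumptions made for illustration):

```python
# Field names ("safety", "influence", "objects") and the compatibility
# reading of the claim are assumptions made for this sketch.

def container_coefficients(objects_inside):
    """Per claim 7: the container takes the highest self-safety coefficient
    and the lowest outside-influence coefficient of its objects."""
    safety = max(o["safety"] for o in objects_inside)
    influence = min(o["influence"] for o in objects_inside)
    return safety, influence

def screen_third_containers(second_containers, obj):
    """Keep second containers whose safety covers the object's influence
    and whose influence stays within the object's safety."""
    third = []
    for c in second_containers:
        safety, influence = container_coefficients(c["objects"])
        if safety >= obj["influence"] and influence <= obj["safety"]:
            third.append(c)
    return third

containers = [
    {"name": "A", "objects": [{"safety": 0.8, "influence": 0.3}]},
    {"name": "B", "objects": [{"safety": 0.4, "influence": 0.9}]},
]
obj = {"safety": 0.7, "influence": 0.5}
qualifying = screen_third_containers(containers, obj)  # container "A" only
```

The qualifying containers are then checked for fit against the object's size information, as the claim goes on to require.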
8. The automated appearance inspection method of claim 6, wherein, when there are a plurality of first sorting containers capable of accommodating the object, the sorting the object into the first sorting container capable of accommodating the object specifically comprises:
calculating the accommodation coefficient of each first sorting container capable of accommodating the object, and sorting the object into the first sorting container with the highest accommodation coefficient;
wherein the accommodation coefficient is calculated from k1, k2, V_o, V_c and Dis(o, c), wherein k1 and k2 are preset coefficients, V_o is the size information of the object, V_c is the filling state of the first sorting container, and Dis(o, c) is the distance between the object and the first sorting container.
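The published formula image for the accommodation coefficient is not reproduced in this text. Purely as an illustration of how the named quantities k1, k2, V_o, V_c and Dis(o, c) could combine, a hypothetical form that rewards spare capacity and penalizes travel distance is:

```python
def accommodation_coefficient(v_o, v_c, dis_oc, k1=1.0, k2=0.1):
    """HYPOTHETICAL form, not the patent's formula: k1 rewards the spare
    capacity left after adding the object (v_c is the filling state on a
    0-1 scale, v_o the object's normalized size); k2 penalizes the
    object-to-container distance Dis(o, c)."""
    capacity = 1.0  # assumed normalized container capacity
    return k1 * (capacity - v_c - v_o) - k2 * dis_oc

# Two candidate first sorting containers: (filling state, distance).
scores = [accommodation_coefficient(0.1, vc, d)
          for vc, d in [(0.2, 1.0), (0.5, 0.2)]]
# Here the emptier container wins despite being farther away.
```

A sorter would then pick the first sorting container with the highest coefficient, as claim 8 requires.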
9. An automatic appearance detection device is characterized by comprising an image acquisition module, an object identification module, a size measurement module, a sorting container management module and a control and decision module;
the image acquisition module is used for acquiring image data of the logistics packaging box by using image acquisition equipment;
the object identification module is used for performing image recognition on the image data by using an object recognition model to obtain the objects contained in the logistics packaging box;
the size measurement module is used for extracting the object from the image data for analysis to obtain the size information of the object;
the sorting container management module is used for detecting the filling state of each sorting container by using a sensor;
the control and decision module is used for determining the sorting strategy of the objects by using a preset algorithm according to the size information of the objects and the filling state of each sorting container.
10. An automated appearance inspection device, comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor for performing the automated appearance detection method of any one of claims 1-8.
CN202311640799.6A 2023-11-30 2023-11-30 Automatic appearance detection method and device Pending CN117563960A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311640799.6A CN117563960A (en) 2023-11-30 2023-11-30 Automatic appearance detection method and device

Publications (1)

Publication Number Publication Date
CN117563960A true CN117563960A (en) 2024-02-20

Family

ID=89864088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311640799.6A Pending CN117563960A (en) 2023-11-30 2023-11-30 Automatic appearance detection method and device

Country Status (1)

Country Link
CN (1) CN117563960A (en)

Similar Documents

Publication Publication Date Title
US11527072B2 (en) Systems and methods for detecting waste receptacles using convolutional neural networks
US9361702B2 (en) Image detection method and device
EP3340106A1 (en) Method for assigning particular classes of interest within measurement data
CN111415106A (en) Truck loading rate identification method, device, equipment and storage medium
CN113642474A (en) Hazardous area personnel monitoring method based on YOLOV5
CN116593479B (en) Method, device, equipment and storage medium for detecting appearance quality of battery cover plate
CN115797736A (en) Method, device, equipment and medium for training target detection model and target detection
CN115995058A (en) Power transmission channel safety on-line monitoring method based on artificial intelligence
CN117284663B (en) Garden garbage treatment system and method
CN117243539A (en) Artificial intelligence obstacle surmounting and escaping method, device and control system
CN117563960A (en) Automatic appearance detection method and device
CN113658274B (en) Automatic individual spacing calculation method for primate population behavior analysis
CN116245882A (en) Circuit board electronic element detection method and device and computer equipment
KR20230061612A (en) Object picking automation system using machine learning and method for controlling the same
CN115457494A (en) Object identification method and system based on infrared image and depth information fusion
US12006141B2 (en) Systems and methods for detecting waste receptacles using convolutional neural networks
CN112558554A (en) Task tracking method and system
CN112686162A (en) Method, device, equipment and storage medium for detecting clean state of warehouse environment
CN117152258B (en) Product positioning method and system for intelligent workshop of pipeline production
CN116756835B (en) Template combination design method, device, equipment and storage medium
CN117854211B (en) Target object identification method and device based on intelligent vision
CN116503406B (en) Hydraulic engineering information management system based on big data
CN116843831B (en) Agricultural product storage fresh-keeping warehouse twin data management method and system
US20210229292A1 (en) Confidence-Based Bounding Boxes For Three Dimensional Objects
WO2023091303A1 (en) Methods and systems for grading devices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination