WO2022241597A1 - Ai intelligent garbage identification and classification system and method - Google Patents
- Publication number
- WO2022241597A1 (PCT/CN2021/094012)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- garbage
- unit
- rcnn model
- travel
- faster
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
Definitions
- The present invention relates to the technical field of garbage classification, and in particular to an AI intelligent garbage identification and classification system and method.
- Embodiments of the present invention provide an AI intelligent garbage identification and classification system and method to solve the technical problem of low efficiency of manual garbage sorting.
- the present invention provides an AI intelligent garbage identification and classification method, comprising the following steps:
- building the pre-trained Faster-RCNN model includes the following steps: building Conv layers for extracting feature maps from pictures, where the Conv layers comprise conv, pooling, and relu layers; building a region proposal network layer that generates detection boxes and performs an initial extraction of target candidate regions in the picture; building a region pooling layer that obtains and analyzes the feature maps and target candidate regions and then extracts candidate feature maps; and building a classification layer that uses bounding-box regression to obtain the final precise position of the detection box and determines the target category from the candidate feature maps.
- the training of the pre-trained Faster-RCNN model includes: preprocessing the pictures in the image data set; testing the preprocessed pictures with the pre-trained Faster-RCNN model to obtain test results; and presetting an average precision (AP) threshold and evaluating the test results against it, modifying the pre-trained Faster-RCNN model parameters whenever the evaluation falls below the AP threshold, until the test results reach the AP threshold.
- the travel amounts include an X-axis travel amount, a Y-axis travel amount, and a Z-axis travel amount, where the Z-axis travel amount is a preset value and the X-axis and Y-axis travel amounts are calculated from the real-time coordinates.
- the X-axis travel amount and the Y-axis travel amount are calculated from the coordinates as follows:
- the present invention also provides an AI intelligent garbage identification and classification system, including: a robotic arm subsystem, a conveyor belt, an image acquisition unit, and a control module. The conveyor belt transports the garbage to be sorted; the image acquisition unit acquires on-site images of the garbage on the conveyor belt and inputs them to the control module; the control module includes an identification unit, a training unit, a coordinate unit, and a travel-amount unit;
- the identification unit hosts the pre-trained Faster-RCNN model, which identifies garbage categories;
- the training unit obtains image data sets covering multiple garbage categories, specifies the recognition objects, and inputs the image data sets into the pre-trained Faster-RCNN model for training to obtain the trained Faster-RCNN model;
- the coordinate unit obtains the real-time coordinates of the garbage in the on-site image;
- the travel-amount unit determines the travel amounts from the coordinates and transmits them to the robotic arm subsystem;
- the control module further includes a reset unit, which controls the robotic arm to return to its initial position after the grabbed garbage has been placed at the designated location.
- the travel-amount unit includes a preset subunit for obtaining the preset Z-axis travel amount and a calculation subunit for calculating the X-axis and Y-axis travel amounts from the real-time coordinates.
- the identification unit includes: a first building subunit for building Conv layers that extract feature maps from pictures, where the Conv layers comprise conv, pooling, and relu layers; a second building subunit for building a region proposal network layer that generates detection boxes and performs an initial extraction of target candidate regions in the picture; a third building subunit for building a region pooling layer that obtains and analyzes the feature maps and target candidate regions and then extracts candidate feature maps; and a fourth building subunit for building a classification layer that uses bounding-box regression to obtain the final precise position of the detection box and determines the target category from the candidate feature maps.
- the training unit includes: a preprocessing subunit for preprocessing the pictures in the image data set; a testing subunit for testing the preprocessed pictures with the pre-trained Faster-RCNN model to obtain test results; and an evaluation subunit for presetting an average precision (AP) threshold and evaluating the test results against it, modifying the pre-trained Faster-RCNN model parameters whenever the evaluation falls below the AP threshold, until the test results reach the AP threshold.
- An AI intelligent garbage recognition and classification method includes: training a Faster-RCNN model, acquiring a scene image, inputting the scene image into the trained Faster-RCNN model, using the trained Faster-RCNN model to identify the garbage categories in the garbage station, determining the real-time coordinates of the garbage in the on-site image, and determining the travel amounts according to the real-time coordinates, so as to control a mechanical arm or other grabbing device to grab the garbage according to the travel amounts. This realizes intelligent sorting of the garbage in the station without workers having to sort, which not only improves sorting efficiency but also protects workers from the harm of harsh environments, is applicable to different scenarios, and effectively solves the technical problem of low efficiency in manual garbage sorting.
- An AI intelligent garbage identification and classification system provided by an embodiment of the present invention has the same effect as the above-mentioned method, and details are not described here.
- Fig. 1 is a flow chart of an AI intelligent garbage identification and classification method provided by an embodiment of the present invention;
- Fig. 2 is a modular block diagram of an AI intelligent garbage identification and classification system provided by an embodiment of the present invention;
- Fig. 3 is a schematic structural diagram of an AI intelligent garbage identification and classification system provided by an embodiment of the present invention.
- An AI intelligent garbage identification and classification method according to an embodiment of the present invention comprises the following steps:
- Step 1 Build a pre-trained Faster-RCNN model.
- Faster RCNN was used in target detection tasks by Ross Girshick, Kaiming He, et al. in 2016. Compared with the traditional RCNN, Faster RCNN can efficiently complete the selection of candidate boxes by means of a Region Proposal Network (RPN).
- the Faster R-CNN network is divided into two parts, one is Region Proposal Network (RPN), the second is Fast R-CNN.
- RPN includes proposals and conv layers in the figure
- Fast R-CNN includes convolutional layers, ROI pooling and subsequent fully connected layers.
- Faster RCNN first inputs the entire image into a CNN to extract the image's feature maps, then inputs these features to the RPN to obtain the feature information of the candidate boxes.
- The RPN uses a classifier to judge whether a candidate box belongs to the target to be recognized, and a regressor further adjusts the position of candidate boxes assigned to a category. Finally, the feature vectors of the target boxes and the picture are input to the RoI pooling layer and then classified by the classifier to complete the target detection task, with the final position of the target obtained through bounding-box regression.
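The bounding-box regression step mentioned above can be sketched numerically. The following is a minimal, illustrative NumPy implementation of the standard (dx, dy, dw, dh) delta decoding used by Faster R-CNN-style detectors; it is our own sketch, not code from the patent.

```python
import numpy as np

def decode_boxes(anchors, deltas):
    """Apply (dx, dy, dw, dh) regression deltas to anchor boxes.

    anchors: (N, 4) array of [x1, y1, x2, y2] boxes.
    deltas:  (N, 4) array of predicted offsets.
    Returns the refined boxes in [x1, y1, x2, y2] form.
    """
    widths = anchors[:, 2] - anchors[:, 0]
    heights = anchors[:, 3] - anchors[:, 1]
    ctr_x = anchors[:, 0] + 0.5 * widths
    ctr_y = anchors[:, 1] + 0.5 * heights

    dx, dy, dw, dh = deltas.T
    # Shift the centre proportionally to the box size, scale the size
    # exponentially (keeps widths/heights positive).
    new_ctr_x = ctr_x + dx * widths
    new_ctr_y = ctr_y + dy * heights
    new_w = widths * np.exp(dw)
    new_h = heights * np.exp(dh)

    return np.stack([new_ctr_x - 0.5 * new_w,
                     new_ctr_y - 0.5 * new_h,
                     new_ctr_x + 0.5 * new_w,
                     new_ctr_y + 0.5 * new_h], axis=1)
```

With zero deltas the anchor is returned unchanged; a dx of 0.1 shifts a 10-pixel-wide box by 1 pixel.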
- Step 2 Obtain an image data set including various garbage categories, specify the recognition object, input the image data set to the pre-trained Faster-RCNN model for training, and obtain a trained Faster-RCNN model.
- The image data set of this embodiment includes groups of garbage images of several different types; each group is used to train the Faster-RCNN model to identify one type of garbage, and the operator designates the recognition objects, i.e., the operator names the categories the garbage image recognition model should recognize.
- the Faster-RCNN model can identify the garbage categories contained in the image data set.
- For example, the image data set may include three groups of garbage images: mineral water bottles, old clothes, and plastic bags. Each group is input into the pre-trained Faster-RCNN model in turn; after training, the model can recognize these three types of garbage, and the more data in a garbage image group, the higher the recognition accuracy.
- Step 3 Obtain on-site images, input the on-site images into the trained Faster-RCNN model, identify the types of garbage in the on-site images and determine the real-time coordinates of the garbage in the on-site images.
- the identification and classification method in this embodiment can be applied to a garbage station, a garbage recycling site, or a place where garbage is discarded and piled up, and the on-site image is a real-time picture of the place where garbage needs to be sorted.
- the Faster-RCNN model identifies the garbage category and determines the real-time coordinates of the garbage in the scene image.
- the Faster-RCNN model will generate a detection frame when recognizing the target, and the real-time coordinates are the coordinates of the center point of the detection frame when the recognition is completed.
- Step 4 Analyze the real-time coordinates to obtain the travel distance, grab the garbage according to the travel distance, and place the captured garbage at the designated location.
- the real-time coordinates are converted into the movement distance of the robot arm, so as to control the robot arm to grab the garbage, and put the garbage at a designated location according to the recognized garbage type. For example, if the identified garbage is a mineral water bottle, the robotic arm will move to the set position for placing the mineral water bottle after grabbing the garbage, so as to realize the sorting of the garbage and facilitate subsequent recycling.
- The embodiment of the present invention realizes intelligent sorting of garbage in the station without manual sorting operations, which not only improves sorting efficiency but also protects workers from the harm of harsh environments, is applicable to different scenarios, and effectively solves the technical problem of low efficiency in manual garbage sorting.
- Building a pre-trained Faster-RCNN model includes the following steps: building Conv layers for extracting feature maps from pictures, where the Conv layers comprise conv, pooling, and relu layers; building a region proposal network layer that generates detection boxes and performs an initial extraction of target candidate regions in the picture; building a region pooling layer that obtains and analyzes the feature maps and target candidate regions and then extracts candidate feature maps; and building a classification layer that uses bounding-box regression to obtain the final precise position of the detection box and determines the target category from the candidate feature maps.
- the Raspberry Pi is used to build the Faster-RCNN model.
- The Raspberry Pi is an ARM-based microcomputer motherboard that uses an SD/MicroSD card as its storage disk and provides all the basic functions of a PC.
- the parameters of the Conv layer are set as follows: the size of the convolution kernel is 3*3, the step size is 1, the pad filling is 1, and the image is converted into a matrix.
- The obtained matrices are the feature maps; a pooling layer is built in which the RoI pooling layer collects the input feature maps and proposals and extracts proposal feature maps, which are sent to the fully connected layer to determine the target type; and a classification layer is built that uses the proposal feature maps to compute each proposal's category, determines the final precise position of the detection box, and determines the target category from the candidate feature map.
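As an illustration of the RoI pooling step just described (our own sketch, not code from the patent), the following NumPy function max-pools an arbitrary region of a feature map down to a fixed grid, which is what lets proposals of different shapes feed a fixed-size fully connected layer:

```python
import numpy as np

def roi_max_pool(feature_map, roi, out_size=(7, 7)):
    """Max-pool the region `roi` of `feature_map` to a fixed output grid.

    feature_map: (H, W) array (a single channel, for simplicity).
    roi: (x1, y1, x2, y2) in feature-map coordinates; must span at least
         out_size cells so every bin is non-empty.
    """
    x1, y1, x2, y2 = roi
    region = feature_map[y1:y2, x1:x2]
    oh, ow = out_size
    # Split the region into oh x ow bins and take the max of each bin.
    h_edges = np.linspace(0, region.shape[0], oh + 1).astype(int)
    w_edges = np.linspace(0, region.shape[1], ow + 1).astype(int)
    out = np.empty(out_size)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = region[h_edges[i]:h_edges[i + 1],
                               w_edges[j]:w_edges[j + 1]].max()
    return out
```

A 14x14 region, for example, is reduced to a 7x7 grid by taking the maximum of each 2x2 bin.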
- The training of the pre-trained Faster-RCNN model includes: preprocessing the pictures in the image data set; testing the preprocessed pictures with the pre-trained Faster-RCNN model to obtain test results; and presetting an average precision (AP) threshold and evaluating the test results against it, modifying the pre-trained Faster-RCNN model parameters whenever the evaluation falls below the AP threshold, until the test results reach the AP threshold.
- The user can shoot the image data set, download it from the Internet, or select the existing PASCAL VOC data set for target detection, which contains about 10,000 pictures annotated with bounding boxes across 20 categories and is used to train the model and adjust the system parameters; preprocessing the image data set means resizing all pictures to a uniform size and labeling them manually.
- LabelImg is used to label the pictures and generate the corresponding XML files.
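LabelImg writes Pascal VOC-style XML annotations. As an illustration (the file name, label, and coordinates below are made-up examples, not data from the patent), such a file can be read with the standard library:

```python
import xml.etree.ElementTree as ET

# A minimal Pascal VOC-style annotation of the kind LabelImg produces
# (hypothetical example content).
xml_text = """
<annotation>
  <filename>bottle_001.jpg</filename>
  <object>
    <name>mineral_water_bottle</name>
    <bndbox><xmin>48</xmin><ymin>60</ymin><xmax>210</xmax><ymax>305</ymax></bndbox>
  </object>
</annotation>
"""

def parse_voc(xml_string):
    """Return a list of (label, (xmin, ymin, xmax, ymax)) tuples."""
    root = ET.fromstring(xml_string)
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        bb = obj.find("bndbox")
        box = tuple(int(bb.findtext(k)) for k in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((name, box))
    return boxes

print(parse_voc(xml_text))  # [('mineral_water_bottle', (48, 60, 210, 305))]
```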
- The AP threshold is a fraction less than 1; the closer the value is to 1, the better the detection performance, although the AP of current target detection models is typically between 40% and 50%.
- The average precision AP threshold is set to 42%. The Faster-RCNN model trained on the Raspberry Pi is tested with the data set; if the average precision is lower than 42%, the parameters are modified and the results are tested again after retraining. Once the average precision reaches or exceeds 42%, the pictures to be detected can be input and the trained Faster-RCNN model used for target recognition.
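The train-test-adjust cycle just described can be sketched as a short control loop. The model training, AP evaluation, and parameter update are stubbed out below (our own illustrative names); only the control flow around the 42% threshold mirrors the text.

```python
AP_THRESHOLD = 0.42  # the 42% average-precision threshold from the text

def train_until_threshold(train_step, evaluate_ap, max_rounds=50):
    """Repeat train/test rounds until the measured AP reaches the threshold.

    train_step:  callable that (re)trains the model with current parameters.
    evaluate_ap: callable returning the AP measured on the test pictures.
    """
    for round_no in range(1, max_rounds + 1):
        train_step()
        ap = evaluate_ap()
        if ap >= AP_THRESHOLD:
            return round_no, ap  # model is ready for real detections
        # Otherwise: the parameters are modified and training repeats.
    raise RuntimeError("AP threshold not reached within max_rounds")

# Toy stand-ins: each round of "training" improves the mock AP a little.
state = {"ap": 0.30}
def mock_train():
    state["ap"] += 0.05

rounds, final_ap = train_until_threshold(mock_train, lambda: state["ap"])
```

With the mock model starting at 30% AP and gaining 5 points per round, three rounds are needed to clear the 42% threshold.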
- The travel amounts include an X-axis travel amount, a Y-axis travel amount, and a Z-axis travel amount, where the Z-axis travel amount is a preset value and the X-axis and Y-axis travel amounts are calculated from the real-time coordinates.
- The robotic arm for grabbing garbage initially sits at a fixed height and returns to its initial position after each grab; therefore, the preset Z-axis travel amount can be set according to the heights of the robotic arm and the garbage placement point, with manual calibration.
- The X-axis travel amount and the Y-axis travel amount are calculated from the coordinates as follows: setting a reference coordinate point; calculating the X-axis offset and Y-axis offset between the reference coordinate point and the real-time coordinates; and multiplying the X-axis offset and the Y-axis offset by a transformation factor a to obtain the X-axis travel amount and the Y-axis travel amount.
- the reference coordinate point is the coordinate of the robot arm on the horizontal plane, which is also the origin coordinate. Subtract it from the real-time coordinate to obtain the X-axis offset and Y-axis offset.
- The transformation factor a is a constant, determined according to the actual grasping travel amounts.
- For example, in actual operation the value of the transformation factor a is first determined by adjustment: take the reference coordinates as (0,0) and the real-time coordinates as (x, y), so the X-axis and Y-axis offsets are x and y; manually input travel amounts to bring the robotic arm directly above the real-time coordinates, then divide the X-axis travel amount and the Y-axis travel amount by x and y respectively to obtain the transformation factor a in each direction.
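Under this scheme, calibration and the per-grab travel computation reduce to a few lines. The following is an illustrative sketch with our own variable names and assumed example numbers, not values from the patent:

```python
def calibrate_factor(known_travel, offset):
    """Transformation factor a = (manually found travel) / (pixel offset)."""
    return known_travel / offset

def travel_amounts(real_xy, ref_xy, ax, ay, z_preset):
    """Convert real-time image coordinates to X/Y/Z travel amounts."""
    dx = real_xy[0] - ref_xy[0]   # X-axis offset from the reference point
    dy = real_xy[1] - ref_xy[1]   # Y-axis offset from the reference point
    return ax * dx, ay * dy, z_preset  # Z travel is a fixed preset value

# Example calibration: moving the arm 120 mm covered an offset of 300 px
# in X, and 90 mm covered 300 px in Y (assumed numbers).
ax = calibrate_factor(120.0, 300.0)   # 0.4 mm per pixel
ay = calibrate_factor(90.0, 300.0)    # 0.3 mm per pixel
tx, ty, tz = travel_amounts((450, 200), (0, 0), ax, ay, z_preset=80.0)
```

Here a detection-box centre at pixel (450, 200) maps to travel amounts of about 180 mm in X and 60 mm in Y, plus the fixed Z descent.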
- the embodiment of the present application also proposes an AI intelligent garbage identification and classification system, including a robotic arm subsystem 100 , a conveyor belt 300 , an image acquisition unit 400 and a control module 200 .
- the image acquisition unit 400 is used to acquire the on-site image of the garbage transported on the conveyor belt 300, and input the on-site image to the control module 200;
- the control module 200 includes a recognition unit 220, a training unit 210, a coordinate unit 230 and a travel-amount unit 240; a pre-trained Faster-RCNN model is built on the recognition unit 220 to identify garbage categories;
- the training unit 210 is used to obtain image data sets covering multiple garbage categories, specify the recognition objects, and input the image data sets into the pre-trained Faster-RCNN model for training to obtain the trained Faster-RCNN model;
- the coordinate unit 230 is used to obtain the real-time coordinates of the garbage in the scene image;
- the travel-amount unit 240 is used to determine the travel amounts according to the real-time coordinates and transmit them to the robotic arm subsystem 100.
- the AI intelligent garbage identification and classification system of this embodiment uses the conveyor belt 300 to transport the garbage to be sorted.
- the image acquisition unit 400 and the robotic arm subsystem 100 are arranged above the conveyor belt 300 so that they are at the same position on the horizontal plane as much as possible.
- the conveyor belt 300 temporarily stops running until the sorting is completed, and the control module 200 then controls the conveyor belt 300 to move forward.
- The sorting process is as follows: after the conveyor belt 300 transports the garbage to the position of the image acquisition unit 400 and the robotic arm subsystem 100, the image acquisition unit 400 takes an on-site picture of the conveyor belt 300 and inputs it to the recognition unit 220 and the coordinate unit 230; the recognition unit 220 identifies the garbage type while the coordinate unit 230 obtains the real-time coordinates of the garbage; the coordinate unit 230 sends the real-time coordinates to the travel-amount unit 240, which determines the travel amounts from them and transmits them to the robotic arm subsystem 100; the robotic arm subsystem 100 then grabs the garbage according to the travel amounts and places it at the designated location. This realizes garbage sorting without manual sorting operations, which not only improves sorting efficiency but also protects workers from the hazards of harsh environments, and is applicable to different scenarios.
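The sorting cycle just described can be expressed as a short control loop. All hardware interactions below are stubs of our own devising; only the sequencing (stop belt, capture, recognize, locate, grab, place, reset, resume) follows the text.

```python
def sort_one_item(camera, recognizer, locator, arm, conveyor, drop_points):
    """One stop-sort-resume cycle of the conveyor, as described above."""
    conveyor.stop()                        # pause the belt while sorting
    image = camera.capture()               # on-site picture of the belt
    category = recognizer.identify(image)  # Faster-RCNN garbage category
    coords = locator.locate(image)         # detection-box centre coordinates
    arm.grab(coords)                       # move by the computed travel amounts
    arm.place(drop_points[category])       # drop at the category's set position
    arm.reset()                            # return to the initial position
    conveyor.resume()                      # belt moves forward again
    return category

class _Log:
    """Tiny stand-in that records which hardware call happened."""
    def __init__(self, log, name):
        self.log, self.name = log, name
    def __getattr__(self, op):
        def call(*args):
            self.log.append(f"{self.name}.{op}")
            return {"identify": "bottle", "locate": (10, 20),
                    "capture": "img"}.get(op)
        return call

log = []
cat = sort_one_item(_Log(log, "cam"), _Log(log, "net"), _Log(log, "loc"),
                    _Log(log, "arm"), _Log(log, "belt"), {"bottle": (0, 0)})
```

Running the cycle with the stubs records the calls in order, ending with the arm reset and the belt resuming.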
- the control module 200 is realized by the Raspberry Pi.
- a 3.7V 3800mAh mini battery is placed inside the Raspberry Pi to supply power to the Raspberry Pi system.
- Its maximum output current is 1.4 A, and it can support about 8 hours of continuous use.
- the image acquisition unit 400 adopts a high-definition camera, and the high-definition camera is fixed above the conveyor belt 300 through a fixing frame.
- The control module 200 also includes a reset unit, which is used to control the robotic arm to return to its initial position after the grabbed garbage has been placed at the designated location.
- The travel-amount unit 240 includes a preset subunit for obtaining the preset Z-axis travel amount and a calculation subunit for calculating the X-axis and Y-axis travel amounts from the real-time coordinates.
- The recognition unit 220 includes: a first building subunit for building Conv layers that extract feature maps from pictures, where the Conv layers comprise conv, pooling, and relu layers; a second building subunit for building a region proposal network layer that generates detection boxes and performs an initial extraction of target candidate regions in the picture; a third building subunit for building a region pooling layer that obtains and analyzes the feature maps and target candidate regions and then extracts candidate feature maps; and a fourth building subunit for building a classification layer that uses bounding-box regression to obtain the final precise position of the detection box and determines the target category from the candidate feature maps.
- The training unit 210 includes: a preprocessing subunit for preprocessing the pictures in the image data set; a testing subunit for testing the preprocessed pictures with the pre-trained Faster-RCNN model to obtain test results; and an evaluation subunit for presetting an average precision (AP) threshold and evaluating the test results against it, modifying the pre-trained Faster-RCNN model parameters whenever the evaluation falls below the AP threshold, until the test results reach the AP threshold.
Abstract
Disclosed in the present application are an AI intelligent garbage identification and classification system and method. The AI intelligent garbage identification and classification method comprises: training a Faster-RCNN model, acquiring a scene image, inputting the scene image into the trained Faster-RCNN model, identifying a garbage category in a garbage station by using the trained Faster-RCNN model, determining real-time coordinates of garbage in the scene image, and determining a travel amount according to the real-time coordinates, so as to control, according to the travel amount, a robotic arm or other grabbing device to grab the garbage, thereby implementing intelligent sorting of the garbage in the station without requiring a worker to perform a sorting operation. The present application not only improves the sorting efficiency, but also prevents the harm caused by a harsh environment to the worker, is suitable for different scenarios, and effectively solves the technical problem of low efficiency of manual sorting of the garbage.
Description
The present invention relates to the technical field of garbage classification, and in particular to an AI intelligent garbage identification and classification system and method.
Nowadays, with the rapid economic development and accelerating urbanization of China, the amount of household garbage "produced" per capita in Chinese cities keeps growing, and the total amount of garbage is very large.
At present, most garbage stations rely on manual sorting, which consumes a great deal of manpower in garbage disposal, poses various threats to workers' health in a harsh working environment, greatly compromises sorting efficiency, and results in a very low garbage classification rate.
Embodiments of the present invention provide an AI intelligent garbage identification and classification system and method to solve the technical problem of low efficiency in manual garbage sorting.
The technical solution is as follows:
In one aspect, the present invention provides an AI intelligent garbage identification and classification method, comprising the following steps:
building a pre-trained Faster-RCNN model; obtaining an image data set covering multiple garbage categories, specifying the recognition objects, and inputting the image data set into the pre-trained Faster-RCNN model for training to obtain a trained Faster-RCNN model; acquiring an on-site image, inputting the on-site image into the trained Faster-RCNN model, identifying the garbage category in the on-site image, and determining the real-time coordinates of the garbage in the on-site image; and analyzing the real-time coordinates to obtain travel amounts, grabbing the garbage according to the travel amounts, and placing the grabbed garbage at a designated location.
Preferably, building the pre-trained Faster-RCNN model includes the following steps: building Conv layers for extracting feature maps from pictures, where the Conv layers comprise conv, pooling, and relu layers; building a region proposal network layer that generates detection boxes and performs an initial extraction of target candidate regions in the picture; building a region pooling layer that obtains and analyzes the feature maps and target candidate regions and then extracts candidate feature maps; and building a classification layer that uses bounding-box regression to obtain the final precise position of the detection box and determines the target category from the candidate feature maps.
Preferably, the training of the pre-trained Faster-RCNN model includes: preprocessing the pictures in the image data set; testing the preprocessed pictures with the pre-trained Faster-RCNN model to obtain test results; and presetting an average precision (AP) threshold and evaluating the test results against it, modifying the pre-trained Faster-RCNN model parameters whenever the evaluation falls below the AP threshold, until the test results reach the AP threshold.
Preferably, the travel amounts include an X-axis travel amount, a Y-axis travel amount, and a Z-axis travel amount, where the Z-axis travel amount is a preset value and the X-axis and Y-axis travel amounts are calculated from the real-time coordinates.
Preferably, the X-axis travel amount and the Y-axis travel amount are calculated from the coordinates as follows:
setting a reference coordinate point; calculating the X-axis offset and Y-axis offset between the reference coordinate point and the real-time coordinates; and multiplying the X-axis offset and the Y-axis offset by a transformation factor a to obtain the X-axis travel amount and the Y-axis travel amount.
另一方面,本发明还提供了一种AI智能垃圾识别分类系统,包括:机械臂子系统、传送带、图像采集单元和控制模块,所述传送带用于运输待分类的垃圾;所述图像采集单元,用于获取传送带上运输垃圾的现场图像,并将所述现场图像输入到所述控制模块;所述控制模块包括识别单元、训练单元、坐标单元和行进量单元,所述识别单元上搭建有预训练的Faster-RCNN模型,用于识别垃圾类别;所述训练单元用于获取包括多种垃圾类别的图像数据集,并指定识别对象,将多种垃圾类别的图像数据集输入到所述预训练的Faster-RCNN模型进行训练,获得训练好的Faster-RCNN模型;所述坐标单元用于获取垃圾在所述现场图像中的实时坐标;所述行进量单元用于根据所述坐标确定行进量,并将所述行进量传输至所述机械臂子系统;机械臂子系统用于根据所述行进量抓取垃圾并将抓取到的垃圾放在指定位置。On the other hand, the present invention also provides an AI intelligent garbage identification and classification system, including: a robotic arm subsystem, a conveyor belt, an image acquisition unit and a control module, the conveyor belt is used to transport garbage to be sorted; the image acquisition unit , used to acquire on-site images of garbage transported on the conveyor belt, and input the on-site images to the control module; the control module includes an identification unit, a training unit, a coordinate unit and a travel unit, and the identification unit is built with The pre-trained Faster-RCNN model is used to identify garbage categories; the training unit is used to obtain image data sets that include multiple garbage categories, and specify recognition objects, and input the image data sets of multiple garbage categories to the pre-trained The trained Faster-RCNN model is trained to obtain the trained Faster-RCNN model; the coordinate unit is used to obtain the real-time coordinates of the garbage in the scene image; the travel distance unit is used to determine the travel distance according to the coordinates , and transmit the amount of travel to the subsystem of the robotic arm; the subsystem of the robotic arm is used to grab garbage according to the amount of travel and place the grabbed garbage at a designated location.
Preferably, the control module further includes a reset unit, and the reset unit is used to control the robotic arm subsystem to return to its initial position after the grabbed garbage has been placed at the designated position.
Preferably, the travel-amount unit includes a preset subunit and a calculation subunit; the preset subunit is used to obtain a preset Z-axis travel amount, and the calculation subunit is used to calculate the X-axis travel amount and the Y-axis travel amount according to the real-time coordinates.
Preferably, the identification unit includes: a first building subunit for building the Conv layers, which extract the feature map of a picture, where the Conv layers include conv, pooling and relu layers; a second building subunit for building the region proposal network layer, which generates detection frames and preliminarily extracts target candidate regions in the picture; a third building subunit for building the region pooling layer, which obtains and analyzes the feature map and the target candidate regions and then extracts candidate feature maps; and a fourth building subunit for building the classification layer, which uses bounding box regression to obtain the final precise position of the detection frame and determines the target category from the candidate feature maps.
Preferably, the training unit includes: a preprocessing subunit for preprocessing the pictures in the image data set; a testing subunit for testing the preprocessed pictures with the pre-trained Faster-RCNN model to obtain test results; and an evaluation subunit for presetting an average precision (AP) threshold and evaluating the test results, where, if the evaluation result is below the AP threshold, the parameters of the pre-trained Faster-RCNN model are modified until the test results reach the AP threshold.
It can be seen from the above technical solutions that the embodiments of the present application have the following advantages:
The AI intelligent garbage identification and classification method provided by an embodiment of the present invention includes: training a Faster-RCNN model; acquiring an on-site image; inputting the on-site image into the trained Faster-RCNN model; using the trained Faster-RCNN model to identify the garbage categories in the garbage station and determine the real-time coordinates of the garbage in the on-site image; and determining the travel amounts according to the real-time coordinates so as to control a robotic arm or other grabbing device to grab the garbage. This realizes intelligent sorting of garbage at the station without manual sorting, which not only improves sorting efficiency but also protects workers from harsh environments, is applicable to different scenarios, and effectively solves the technical problem of low efficiency in manual garbage sorting.
The AI intelligent garbage identification and classification system provided by an embodiment of the present invention has the same effects as the above method, and details are not repeated here.
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort. It should be understood that the specific implementations described in this specification are intended only to explain the present invention, not to limit it.
Fig. 1 is a flow chart of an AI intelligent garbage identification and classification method provided by an embodiment of the present invention;
Fig. 2 is a block diagram of an AI intelligent garbage identification and classification system provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of an AI intelligent garbage identification and classification system provided by an embodiment of the present invention.
To make the objectives, technical solutions and advantages of the present invention clearer, the implementations of the present invention are described in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, an AI intelligent garbage identification and classification method according to an embodiment of the present invention includes the following steps:
Step 1: Build a pre-trained Faster-RCNN model. Faster R-CNN, proposed by Ross Girshick, Kaiming He and colleagues and applied to object detection tasks in 2016, achieves efficient candidate box selection by using a Region Proposal Network (RPN), compared with the traditional R-CNN. The Faster R-CNN network is divided into two parts: the Region Proposal Network (RPN) and Fast R-CNN. The RPN comprises the proposals and conv layers in the figure, while Fast R-CNN comprises the convolutional layers, the ROI pooling layer and the subsequent fully connected layers. Faster R-CNN first feeds the entire picture into a CNN to extract its feature maps. The picture features are then input to the RPN to obtain the feature information of the candidate boxes. For the features extracted from the candidate boxes, the RPN uses a classifier to judge whether each candidate box contains the target to be recognized, and the positions of candidate boxes belonging to a given category are further adjusted with a regressor. Finally, the target boxes and the picture's feature vectors are input into the ROI pooling layer and then classified by the classifier to complete the object detection task, and the final position of the target is obtained through bounding box regression.
Step 2: Obtain an image data set including multiple garbage categories, specify the recognition objects, and input the image data set into the pre-trained Faster-RCNN model for training to obtain a trained Faster-RCNN model. The image data set of this embodiment includes image groups for several different types of garbage, and each garbage image group is used to train the Faster-RCNN model to recognize one type of garbage. The recognition objects are specified by the operator, i.e., the garbage image recognition model is named by the operator. Once training on the data in the image data set is complete, the Faster-RCNN model can identify the garbage categories contained in the data set. For example, if the image data set includes three garbage image groups (mineral water bottles, old clothes and plastic bags), each group is input into the pre-trained Faster-RCNN model in turn, and the Faster-RCNN model obtained after training can recognize these three types of garbage. The more data a garbage image group contains, the higher the recognition accuracy.
Step 3: Acquire an on-site image, input the on-site image into the trained Faster-RCNN model, identify the garbage categories in the on-site image and determine the real-time coordinates of the garbage in the on-site image. The identification and classification method of this embodiment can be applied to a garbage station, a garbage recycling site or any place where garbage is discarded and piled up; the on-site image is a real-time picture of the place where the garbage needs to be sorted. The Faster-RCNN model identifies the garbage category and determines the real-time coordinates of the garbage in the on-site image; the model generates a detection frame when recognizing a target, and the real-time coordinates are the coordinates of the center point of the detection frame when recognition is complete.
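The real-time coordinate described above, i.e. the center point of the detection frame, can be sketched in a few lines of Python. This is an illustrative helper, not code from the embodiment, and it assumes the common `(x_min, y_min, x_max, y_max)` box format in image pixel coordinates:

```python
def box_center(box):
    """Return the center point of a detection box.

    The box is assumed to be (x_min, y_min, x_max, y_max) in image
    pixel coordinates, as produced by a typical detector.
    """
    x_min, y_min, x_max, y_max = box
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

# Example: a hypothetical detection box around a bottle in the scene image.
center = box_center((120, 80, 200, 240))
print(center)  # (160.0, 160.0)
```

This center point is what the method passes on as the garbage's real-time coordinates for the travel-amount calculation.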
Step 4: Analyze the real-time coordinates to obtain the travel amounts, grab the garbage according to the travel amounts and place the grabbed garbage at the designated position. In this embodiment, the real-time coordinates are converted into travel amounts for the robotic arm's motion so as to control the robotic arm to grab the garbage and, according to the recognized garbage type, place it at the designated position. For example, if the recognized garbage is a mineral water bottle, the robotic arm moves to the position set aside for mineral water bottles after grabbing it, thereby sorting the garbage and facilitating subsequent recycling.
The embodiment of the present invention realizes intelligent sorting of garbage at the station without manual sorting, which not only improves sorting efficiency but also protects workers from harsh environments, is applicable to different scenarios, and effectively solves the technical problem of low efficiency in manual garbage sorting.
Further, in the AI intelligent garbage identification and classification method of this embodiment, building the pre-trained Faster-RCNN model includes the following steps: build the Conv layers, which extract the feature map of a picture, where the Conv layers include conv, pooling and relu layers; build the region proposal network layer, which generates detection frames and preliminarily extracts target candidate regions in the picture; build the region pooling layer, which obtains and analyzes the feature map and the target candidate regions and then extracts candidate feature maps; and build the classification layer, which uses bounding box regression to obtain the final precise position of the detection frame and determines the target category from the candidate feature maps. In this embodiment, a Raspberry Pi is used to build the Faster-RCNN model. The Raspberry Pi is an ARM-based microcomputer board that uses an SD/MicroSD card as its storage and has all the basic functions of a PC; this embodiment uses a Raspberry Pi 3B+ in particular, configured according to the characteristics of the Faster-RCNN model framework. In this embodiment, the parameters of the Conv layers are set as follows: the convolution kernel size is 3*3, the stride is 1, and the padding is 1. The picture is converted into a matrix, and the matrix obtained after the Conv layers are computed constitutes the feature maps. The pooling layer is built so that the ROI pooling layer collects the input feature maps and proposals, extracts the proposal feature maps, and sends them to the fully connected layer for target type determination. The classification layer is built to compute the category of each proposal from the proposal feature maps, determine the final precise position of the detection frame, and determine the target category from the candidate feature maps.
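The ROI pooling step described above can be illustrated with a minimal pure-Python sketch. This is a simplified stand-in for the actual ROI pooling layer (which operates on multi-channel tensors); it max-pools one rectangular region of a 2D feature map into a fixed-size output grid:

```python
def roi_max_pool(feature_map, roi, out_h=2, out_w=2):
    """Max-pool the ROI of a 2D feature map into an out_h x out_w grid.

    feature_map: list of lists (H x W); roi: (r0, c0, r1, c1), end-exclusive.
    A simplified sketch of the ROI pooling idea in Faster R-CNN: however
    large the ROI is, the output always has the same fixed size.
    """
    r0, c0, r1, c1 = roi
    h, w = r1 - r0, c1 - c0
    pooled = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Integer bin boundaries inside the ROI (at least one cell each).
            rs, re = r0 + i * h // out_h, r0 + (i + 1) * h // out_h
            cs, ce = c0 + j * w // out_w, c0 + (j + 1) * w // out_w
            row.append(max(feature_map[r][c]
                           for r in range(rs, max(re, rs + 1))
                           for c in range(cs, max(ce, cs + 1))))
        pooled.append(row)
    return pooled

fm = [[1, 2, 3, 4],
      [5, 6, 7, 8],
      [9, 10, 11, 12],
      [13, 14, 15, 16]]
print(roi_max_pool(fm, (0, 0, 4, 4)))  # [[6, 8], [14, 16]]
```

The fixed output size is what allows proposals of varying sizes to be fed into the same fully connected layers.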
Further, in the AI intelligent garbage identification and classification method of this embodiment, training the pre-trained Faster-RCNN model includes: preprocessing the pictures in the image data set; testing the preprocessed pictures with the pre-trained Faster-RCNN model to obtain test results; and presetting an average precision (AP) threshold and evaluating the test results, where, if the evaluation result is below the AP threshold, the parameters of the pre-trained Faster-RCNN model are modified until the test results reach the AP threshold. Specifically, in this embodiment the user may photograph images or download an image data set from the Internet, or may use the existing PASCAL VOC data set for object detection, which contains about 10,000 annotated pictures with bounding boxes across 20 categories, used to train the model and tune the system parameters. Preprocessing of the image data set resizes all pictures to a uniform size and annotates them manually; here labelimg is used for annotation, generating the corresponding xml files. The AP threshold is a percentage less than 1, and the closer its value is to 1 the better; however, the AP of current object detection models is roughly between 40% and 50%. In this embodiment the AP threshold is set to 42%. The Faster-RCNN model trained on the Raspberry Pi is tested with the data set; if the AP is below 42%, the parameters are modified and the model is retrained and retested. Once the AP is greater than or equal to 42%, the pictures to be detected can be input and the trained Faster-RCNN model used for target recognition.
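The train-test-adjust loop described above can be sketched as follows. `train_model`, `evaluate_ap` and the learning-rate adjustment are hypothetical placeholders standing in for the actual Faster-RCNN training, AP evaluation and parameter-modification steps, which the embodiment does not specify at code level:

```python
AP_THRESHOLD = 0.42  # average precision threshold from the embodiment

def tune_until_threshold(train_model, evaluate_ap, params, max_rounds=10):
    """Retrain with adjusted parameters until the test AP reaches the threshold.

    train_model(params) -> model and evaluate_ap(model) -> float are
    hypothetical callables standing in for the real training/testing code.
    """
    for _ in range(max_rounds):
        model = train_model(params)
        ap = evaluate_ap(model)
        if ap >= AP_THRESHOLD:
            return model, ap
        # Hypothetical parameter adjustment, e.g. halving the learning rate.
        params = {**params, "lr": params["lr"] * 0.5}
    raise RuntimeError("AP threshold not reached within max_rounds")

# Toy stand-ins: here AP happens to improve once the learning rate is small.
model, ap = tune_until_threshold(
    train_model=lambda p: p["lr"],
    evaluate_ap=lambda m: 0.45 if m < 0.0015 else 0.30,
    params={"lr": 0.004},
)
print(round(ap, 2))  # 0.45
```

In practice the inner calls would be the Faster-RCNN training run and an AP computation over the annotated test set; only the stopping logic is shown here.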
Further, in the AI intelligent garbage identification and classification method of this embodiment, the travel amounts include the X-axis travel amount, the Y-axis travel amount and the Z-axis travel amount, where the Z-axis travel amount is a preset value and the X-axis and Y-axis travel amounts are calculated from the real-time coordinates. In this embodiment, the robotic arm that grabs the garbage is initially at a fixed height and returns to its initial position after each grab; therefore, the preset value of the Z-axis travel amount can be set according to the height of the robotic arm above the garbage placement point and calibrated manually. Calculating the X-axis and Y-axis travel amounts from the coordinates includes: setting a reference coordinate point; calculating the X-axis offset and the Y-axis offset between the reference coordinate point and the real-time coordinates; and multiplying the X-axis offset and the Y-axis offset by a transformation factor a to obtain the X-axis travel amount and the Y-axis travel amount. The reference coordinate point is the coordinate of the robotic arm on the horizontal plane, which also serves as the origin; subtracting it from the real-time coordinates gives the X-axis and Y-axis offsets. The transformation factor a is a constant determined from the actual grabbing travel. For example, in actual operation the value of the transformation factor a is first calibrated: taking the reference coordinates as (0,0) and the real-time coordinates as (x,y), the X-axis and Y-axis offsets are x and y respectively; the travel amounts are input manually to move the robotic arm directly above the real-time coordinates, and the X-axis and Y-axis travel amounts at that moment are then divided by x and y respectively to obtain the transformation factor a for each direction.
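The calibration and travel computation described above reduce to a small amount of arithmetic. The sketch below uses illustrative numbers (the 50/75 mm travel and 100/150 pixel offsets are hypothetical, not values from the embodiment):

```python
def calibrate_factor(known_travel, offset):
    """Derive the per-axis transformation factor a from one manual calibration.

    known_travel: (tx, ty), the manually found travel that positions the arm
    directly above the object; offset: (x, y), the pixel offset of the object
    from the reference coordinate point.
    """
    return (known_travel[0] / offset[0], known_travel[1] / offset[1])

def travel_from_coords(coord, reference, factors, z_travel):
    """Convert a real-time image coordinate into (X, Y, Z) travel amounts."""
    dx = coord[0] - reference[0]  # X-axis offset
    dy = coord[1] - reference[1]  # Y-axis offset
    return (dx * factors[0], dy * factors[1], z_travel)

# One-time calibration: travel (50, 75) mm corresponded to pixel offset (100, 150).
a = calibrate_factor((50.0, 75.0), (100.0, 150.0))
print(a)  # (0.5, 0.5)
# Runtime: detection center at (200, 300) pixels, preset Z travel of 120 mm.
print(travel_from_coords((200.0, 300.0), (0.0, 0.0), a, z_travel=120.0))
# (100.0, 150.0, 120.0)
```

Because a is constant, one calibration pass suffices as long as the camera and arm geometry do not change.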
Referring to Fig. 2 and Fig. 3, an embodiment of the present application further provides an AI intelligent garbage identification and classification system, including a robotic arm subsystem 100, a conveyor belt 300, an image acquisition unit 400 and a control module 200. The image acquisition unit 400 acquires on-site images of the garbage transported on the conveyor belt 300 and inputs them to the control module 200. The control module 200 includes an identification unit 220, a training unit 210, a coordinate unit 230 and a travel-amount unit 240. A pre-trained Faster-RCNN model is built on the identification unit 220 and is used to identify garbage categories. The training unit 210 obtains an image data set including multiple garbage categories, specifies the recognition objects, and inputs the image data set into the pre-trained Faster-RCNN model for training to obtain a trained Faster-RCNN model. The coordinate unit 230 obtains the real-time coordinates of the garbage in the on-site image. The travel-amount unit 240 determines the travel amounts according to the coordinates and transmits them to the robotic arm subsystem 100, which grabs the garbage according to the travel amounts and places the grabbed garbage at the designated position. The AI intelligent garbage identification and classification system of this embodiment uses the conveyor belt 300 to transport the garbage to be sorted; the image acquisition unit 400 and the robotic arm subsystem 100 are arranged above the conveyor belt 300 and positioned as close as possible to the same location on the horizontal plane. When garbage is transported to the position of the image acquisition unit 400 and the robotic arm subsystem 100, the conveyor belt 300 temporarily stops until sorting is complete, after which the control module 200 controls the conveyor belt 300 to advance. The sorting process is as follows: after the conveyor belt 300 transports the garbage to the position of the image acquisition unit 400 and the robotic arm subsystem 100, the image acquisition unit 400 takes an on-site picture of the conveyor belt 300 and inputs it to the identification unit 220 and the coordinate unit 230. The identification unit 220 then identifies the garbage type, the coordinate unit 230 obtains the garbage's real-time coordinates and sends them to the travel-amount unit 240, and the travel-amount unit 240 determines the travel amounts from the real-time coordinates and transmits them to the robotic arm subsystem 100. The robotic arm subsystem 100 grabs the garbage according to the travel amounts and places it at the designated position, realizing garbage sorting without manual operation. This not only improves sorting efficiency but also protects workers from harsh environments, and is applicable to different scenarios.
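The sorting cycle described above can be sketched as a small control loop. All of the unit objects below are hypothetical stand-ins for the hardware interfaces of the embodiment, which are not specified at code level; only the ordering of operations follows the described process:

```python
def sorting_cycle(conveyor, camera, recognizer, travel_unit, arm):
    """One sorting cycle: stop the belt, identify, grab, place, reset, restart.

    All five arguments are hypothetical objects standing in for the conveyor
    belt, image acquisition unit, recognition/coordinate units, travel-amount
    unit and robotic arm subsystem of the embodiment.
    """
    conveyor.stop()
    image = camera.capture()
    category, coord = recognizer(image)   # garbage type + real-time coordinate
    travel = travel_unit(coord)           # (X, Y, Z) travel amounts
    arm.grab(travel)
    arm.place(category)                   # drop at the category's position
    arm.reset()                           # back to the initial position
    conveyor.start()
    return category

# Minimal stubs that log the sequence of actions for illustration.
log = []

class Stub:
    def __init__(self, name):
        self.name = name
    def __getattr__(self, op):
        return lambda *args: log.append(f"{self.name}.{op}")

class Camera(Stub):
    def capture(self):
        log.append("camera.capture")
        return "image"

result = sorting_cycle(
    conveyor=Stub("belt"),
    camera=Camera("camera"),
    recognizer=lambda img: ("plastic_bottle", (160.0, 160.0)),
    travel_unit=lambda c: (c[0] * 0.5, c[1] * 0.5, 120.0),
    arm=Stub("arm"),
)
print(result)  # plastic_bottle
```

The belt stays stopped for the whole grab-place-reset sequence, matching the embodiment's requirement that the conveyor only advances after sorting completes.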
In this embodiment, the control module 200 is implemented with a Raspberry Pi. A 3.7 V, 3800 mAh mini battery is placed inside the Raspberry Pi to power the system; with a maximum output current of 1.4 A, it can sustain about 8 hours of continuous use, and it keeps the Raspberry Pi operating system running normally if the power supply fails. The image acquisition unit 400 is a high-definition camera fixed above the conveyor belt 300 by a mounting bracket.
Further, in the AI intelligent garbage identification and classification system of this embodiment, the control module 200 further includes a reset unit, which controls the robotic arm subsystem to return to its initial position after the grabbed garbage has been placed at the designated position.
Further, in the AI intelligent garbage identification and classification system of this embodiment, the travel-amount unit 240 includes a preset subunit and a calculation subunit; the preset subunit obtains the preset Z-axis travel amount, and the calculation subunit calculates the X-axis and Y-axis travel amounts from the real-time coordinates.
Further, in the AI intelligent garbage identification and classification system of this embodiment, the identification unit 220 includes: a first building subunit for building the Conv layers, which extract the feature map of a picture, where the Conv layers include conv, pooling and relu layers; a second building subunit for building the region proposal network layer, which generates detection frames and preliminarily extracts target candidate regions in the picture; a third building subunit for building the region pooling layer, which obtains and analyzes the feature map and the target candidate regions and then extracts candidate feature maps; and a fourth building subunit for building the classification layer, which uses bounding box regression to obtain the final precise position of the detection frame and determines the target category from the candidate feature maps.
Further, in the AI intelligent garbage identification and classification system of this embodiment, the training unit 210 includes: a preprocessing subunit for preprocessing the pictures in the image data set; a testing subunit for testing the preprocessed pictures with the pre-trained Faster-RCNN model to obtain test results; and an evaluation subunit for presetting an average precision (AP) threshold and evaluating the test results, where, if the evaluation result is below the AP threshold, the parameters of the pre-trained Faster-RCNN model are modified until the test results reach the AP threshold.
The above embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.
Claims (10)
- An AI intelligent garbage identification and classification method, characterized by comprising the following steps: building a pre-trained Faster-RCNN model; obtaining an image data set including multiple garbage categories, specifying the recognition objects, and inputting the image data set into the pre-trained Faster-RCNN model for training to obtain a trained Faster-RCNN model; acquiring an on-site image, inputting the on-site image into the trained Faster-RCNN model, identifying the garbage categories in the on-site image and determining the real-time coordinates of the garbage in the on-site image; and analyzing the real-time coordinates to obtain the travel amounts, grabbing the garbage according to the travel amounts and placing the grabbed garbage at a designated position.
- The AI intelligent garbage identification and classification method according to claim 1, characterized in that building the pre-trained Faster-RCNN model comprises the following steps: building the Conv layers, which extract the feature map of a picture, wherein the Conv layers include conv, pooling and relu layers; building the region proposal network layer, which generates detection frames and preliminarily extracts target candidate regions in the picture; building the region pooling layer, which obtains and analyzes the feature map and the target candidate regions and then extracts candidate feature maps; and building the classification layer, which uses bounding box regression to obtain the final precise position of the detection frame and determines the target category from the candidate feature maps.
- The AI intelligent garbage identification and classification method according to claim 1, characterized in that training the pre-trained Faster-RCNN model comprises: preprocessing the pictures in the image data set; testing the preprocessed pictures with the pre-trained Faster-RCNN model to obtain test results; and presetting an average precision (AP) threshold and evaluating the test results, wherein, if the evaluation result is below the AP threshold, the parameters of the pre-trained Faster-RCNN model are modified until the test results reach the AP threshold.
- The AI intelligent garbage identification and classification method according to claim 1, characterized in that the travel amounts comprise an X-axis travel amount, a Y-axis travel amount and a Z-axis travel amount, wherein the Z-axis travel amount is a preset value, and the X-axis travel amount and the Y-axis travel amount are calculated from the real-time coordinates.
- The AI intelligent garbage identification and classification method according to claim 4, characterized in that calculating the X-axis travel amount and the Y-axis travel amount from the coordinates comprises: setting a reference coordinate point; calculating the X-axis offset and the Y-axis offset between the reference coordinate point and the real-time coordinates; and multiplying the X-axis offset and the Y-axis offset by a transformation factor a to obtain the X-axis travel amount and the Y-axis travel amount.
- An AI intelligent garbage identification and classification system, comprising a robotic arm subsystem, a conveyor belt, an image acquisition unit and a control module, wherein: the conveyor belt is used to transport the garbage to be sorted; the image acquisition unit is used to acquire on-site images of the garbage transported on the conveyor belt and to input the on-site images to the control module; the control module comprises an identification unit, a training unit, a coordinate unit and a travel-amount unit, wherein a pre-trained Faster-RCNN model is built on the identification unit and is used to identify garbage categories, the training unit is used to obtain an image data set including multiple garbage categories, specify the recognition objects, and input the image data set into the pre-trained Faster-RCNN model for training to obtain a trained Faster-RCNN model, the coordinate unit is used to obtain the real-time coordinates of the garbage in the on-site image, and the travel-amount unit is used to determine the travel amounts according to the coordinates and to transmit the travel amounts to the robotic arm subsystem; and the robotic arm subsystem is used to grab the garbage according to the travel amounts and to place the grabbed garbage at a designated position.
- The AI intelligent garbage identification and classification system according to claim 6, wherein the control module further comprises a reset unit, and the reset unit is used to control the robotic arm subsystem to return to its initial position after the grabbed garbage has been placed at the designated position.
- The AI intelligent garbage identification and classification system according to claim 6, wherein the travel-amount unit comprises a preset subunit and a calculation subunit; the preset subunit is used to obtain a preset Z-axis travel amount, and the calculation subunit is used to calculate the X-axis travel amount and the Y-axis travel amount from the real-time coordinates.
- The AI intelligent garbage identification and classification system according to claim 6, wherein the recognition unit comprises: a first building subunit, used to build the Conv layers for extracting the feature map of an image, the Conv layers comprising conv, pooling and relu layers; a second building subunit, used to build a region proposal network layer, which generates detection boxes and preliminarily extracts candidate target regions in the image; a third building subunit, used to build a region pooling layer, which acquires and analyzes the feature map and the candidate target regions and then extracts candidate feature maps; and a fourth building subunit, used to build a classification layer, which uses bounding-box regression to obtain the final precise positions of the detection boxes and determines the target categories from the candidate feature maps.
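Of the four components above, the region pooling layer is the one whose mechanics are least obvious from prose: it converts each variable-sized candidate region into a fixed-size candidate feature map. A minimal NumPy sketch of max-based region pooling on a single 2-D feature map (assuming integer pixel ROI bounds; real Faster-RCNN pools multi-channel features with finer bin alignment) could be:

```python
import numpy as np

def roi_max_pool(feature_map, roi, out_h=2, out_w=2):
    """Max-pool the region roi = (x0, y0, x1, y1) of a 2-D feature map
    into a fixed (out_h, out_w) grid, as in the claimed region pooling layer."""
    x0, y0, x1, y1 = roi
    region = feature_map[y0:y1, x0:x1]
    h, w = region.shape
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Each output cell covers roughly an (h/out_h) x (w/out_w) bin;
            # max() ensures every bin is at least one pixel wide.
            ys = slice(i * h // out_h, max((i + 1) * h // out_h, i * h // out_h + 1))
            xs = slice(j * w // out_w, max((j + 1) * w // out_w, j * w // out_w + 1))
            out[i, j] = region[ys, xs].max()
    return out
```

Because every candidate box is reduced to the same grid shape, the downstream classification layer can operate on uniformly sized inputs regardless of how large each detection box was.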
- The AI intelligent garbage identification and classification system according to claim 6, wherein the training unit comprises: a preprocessing subunit, used to preprocess the images in the image dataset; a test subunit, used to test the preprocessed images with the pre-trained Faster-RCNN model and obtain test results; and an evaluation subunit, used to preset an average precision (AP) threshold and evaluate the test results, wherein if the evaluation result is below the AP threshold, the parameters of the pre-trained Faster-RCNN model are modified until the test results reach the AP threshold.
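The evaluation subunit's behavior amounts to a train-test loop gated on an AP threshold. The sketch below illustrates only that control flow; `train_once` and `measure_ap` are assumed stand-ins for the actual parameter-update and test steps, which the patent does not detail.

```python
def train_until_ap(train_once, measure_ap, ap_threshold, max_rounds=10):
    """Repeat training rounds until average precision meets the threshold.

    train_once:  callable that modifies/updates the model parameters.
    measure_ap:  callable returning the AP measured on the test images.
    Returns (final_ap, rounds_used).
    """
    ap = 0.0
    for round_idx in range(max_rounds):
        train_once()        # modify the pre-trained model's parameters
        ap = measure_ap()   # test on the preprocessed image set
        if ap >= ap_threshold:
            return ap, round_idx + 1
    return ap, max_rounds   # give up after max_rounds (a safety cap)
```

The `max_rounds` cap is an added safeguard not present in the claim, which as written loops unconditionally until the threshold is reached.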
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/094012 WO2022241597A1 (en) | 2021-05-17 | 2021-05-17 | Ai intelligent garbage identification and classification system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022241597A1 true WO2022241597A1 (en) | 2022-11-24 |
Family
ID=84140978
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115753735A (en) * | 2022-11-28 | 2023-03-07 | 华中科技大学 | Method and system for identifying and sorting garbage based on Raman spectrum online detection |
CN116052027A (en) * | 2023-03-31 | 2023-05-02 | 深圳联和智慧科技有限公司 | Unmanned aerial vehicle-based floating garbage type identification method, system and cloud platform |
CN117315541A (en) * | 2023-10-12 | 2023-12-29 | 浙江净禾智慧科技有限公司 | Ground garbage identification method and system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108355979A (en) * | 2018-01-31 | 2018-08-03 | 塞伯睿机器人技术(长沙)有限公司 | Target tracking sorting system on conveyer belt |
CN110705931A (en) * | 2019-09-09 | 2020-01-17 | 上海凯京信达科技集团有限公司 | Cargo grabbing method, device, system, equipment and storage medium |
CN110717943A (en) * | 2019-09-05 | 2020-01-21 | 中北大学 | Method and system for calibrating eyes of on-hand manipulator for two-dimensional plane |
CN110909660A (en) * | 2019-11-19 | 2020-03-24 | 佛山市南海区广工大数控装备协同创新研究院 | Plastic bottle detection and positioning method based on target detection |
CN111242057A (en) * | 2020-01-16 | 2020-06-05 | 南京理工大学 | Product sorting system, method, computer device and storage medium |
CN112115974A (en) * | 2020-08-18 | 2020-12-22 | 郑州睿如信息技术有限公司 | Intelligent visual detection method for classification treatment of municipal waste |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022241597A1 (en) | Ai intelligent garbage identification and classification system and method | |
CN111974704A (en) | Garbage classification detection system and method based on computer vision | |
CN113688825A (en) | AI intelligent garbage recognition and classification system and method | |
CN109389161A (en) | Rubbish identification evolutionary learning method, apparatus, system and medium based on deep learning | |
CN112102368B (en) | Deep learning-based robot garbage classification and sorting method | |
CN113850799B (en) | YOLOv 5-based trace DNA extraction workstation workpiece detection method | |
CN111921904B (en) | Multi-mechanical-arm collaborative coal and gangue sorting system based on visual and force information fusion perception | |
CN110909660A (en) | Plastic bottle detection and positioning method based on target detection | |
CN110969660A (en) | Robot feeding system based on three-dimensional stereoscopic vision and point cloud depth learning | |
CN111906782B (en) | Intelligent robot grabbing method based on three-dimensional vision | |
CN113469264A (en) | Construction method of automatic garbage classification model, garbage sorting method and system | |
CN111582123A (en) | AGV positioning method based on beacon identification and visual SLAM | |
CN112916416A (en) | Building rubbish letter sorting system | |
Ni et al. | A new approach based on two-stream cnns for novel objects grasping in clutter | |
CN111652214A (en) | Garbage bottle sorting method based on deep learning | |
CN110516625A (en) | A kind of method, system, terminal and the storage medium of rubbish identification classification | |
CN117531717A (en) | Patrol type intelligent garbage sorting robot and working method thereof | |
CN113319013A (en) | Apple intelligent sorting method based on machine vision | |
CN115797811A (en) | Agricultural product detection method and system based on vision | |
CN111272764B (en) | Non-contact image identification mobile management and control system and method for large intelligent temporary platform | |
CN112183374A (en) | Automatic express sorting device and method based on raspberry group and deep learning | |
Shi et al. | A fast workpiece detection method based on multi-feature fused SSD | |
CN205318622U (en) | Traffic jams controlling means based on image | |
CN111709991B (en) | Railway tool detection method, system, device and storage medium | |
Zhang et al. | Object detection based on deep learning and b-spline level set in color images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21940056; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21940056; Country of ref document: EP; Kind code of ref document: A1 |