WO2021217922A1 - Human-machine collaborative sorting system and method for obtaining the grasping position of its robot - Google Patents

Human-machine collaborative sorting system and method for obtaining the grasping position of its robot

Info

Publication number
WO2021217922A1
WO2021217922A1 (PCT/CN2020/104787)
Authority
WO
WIPO (PCT)
Prior art keywords
robot
display
sorting system
human
selected position
Prior art date
Application number
PCT/CN2020/104787
Other languages
English (en)
French (fr)
Inventor
莫卓亚
罗海城
陈文辉
Original Assignee
广东弓叶科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东弓叶科技有限公司
Publication of WO2021217922A1

Classifications

    • B: Performing operations; transporting
    • B07: Separating solids from solids; sorting
    • B07C: Postal sorting; sorting individual articles, or bulk material fit to be sorted piece-meal, e.g. by picking
    • B07C5/00: Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; sorting by manually actuated devices, e.g. switches
    • B07C5/36: Sorting apparatus characterised by the means used for distribution
    • B07C5/361: Processing or control devices therefor, e.g. escort memory
    • B07C5/362: Separating or distributor mechanisms
    • B07C2501/00: Sorting according to a characteristic or feature of the articles or material to be sorted
    • B07C2501/0054: Sorting of waste or refuse
    • B07C2501/0063: Using robots

Definitions

  • This application relates to the technical field of material sorting, and in particular to a human-machine collaborative sorting system and a method for obtaining the grasping position of its robot.
  • The environment at a garbage sorting site is extremely harsh, full of dust and noise; sorting workers exposed to it for long periods are very likely to suffer irreversible health problems.
  • Chinese patent application CN201710649904.0 proposes a method in which an artificial-intelligence robot sorts large-size organic light objects from construction waste: a camera captures images of the waste, a monitor outside the site displays them, a worker identifies the garbage in the displayed image to judge the position of each item and then operates the monitor to select the position of a particular item in the image, and finally the monitor transmits the selected position information to the on-site sorting robot so that the robot can pick up the garbage.
  • The purpose of this application is to provide a method for obtaining the grasping position of a robot in a human-machine collaborative sorting system that automatically corrects the selected position so as to improve the robot's grasping efficiency.
  • Another object of the present application is to provide a human-machine collaborative sorting system that can automatically correct the selected position.
  • To this end, this application discloses a method for obtaining the grasping position of a robot in a human-machine collaborative sorting system.
  • The human-machine collaborative sorting system includes a camera, a display, and a robot.
  • The method for obtaining the robot grasping position includes: receiving an image transmitted by the camera and obtaining the contour of each material from the image; receiving selected-position information transmitted by the display and converting the selected position into a preset position in the region where the material contour is located; and sending the preset position to the robot as a grasping position so as to grasp the material.
  • Compared with the prior art, this application obtains the contour of each material shown in the image transmitted by the camera and then corrects the selected position using the material contour to determine the robot's grasping position.
  • This avoids material-grasping failures caused by a deviation of the selected position when operating the display, or by a mismatch between the actual selected position and the optimal grasping position when the picture happens to refresh; and because every selected position within the same material contour is corrected to a single preset position in the region of that contour, even if the region of the same material contour is selected multiple times the robot is controlled to perform only one grasping action, which avoids repeated (empty) grasps and improves the robot's grasping efficiency.
  • Specifically, the preset position is the center position of the region where the material contour is located.
  • The method for obtaining the robot grasping position further includes: receiving operation information transmitted by the display, the operation information corresponding to a target placement position of the material; and finding the target placement position corresponding to the current operation information and sending it to the robot to control the robot to place the material at the corresponding target placement position.
  • The display is a touch-screen display, the selected position is a touch position on the display, and the operation information is a slide in a preset direction starting from the selected position.
  • This application also discloses a human-machine collaborative sorting system including a conveying mechanism, a camera, a processing device communicatively connected with the camera, and a display and a robot communicatively connected with the processing device, wherein:
  • the conveying mechanism is used to carry and convey materials;
  • the camera is used to capture images of the materials and transmit them to the processing device;
  • the display is used to display the images and accept operations;
  • the processing device executes: receiving the image transmitted by the camera and obtaining the contour of each material from the image; receiving the selected-position information transmitted by the display, finding the material contour corresponding to the selected position, and converting the selected position into a preset position in the region where the material contour is located; and sending the preset position to the robot as a grasping position; and
  • the robot grasps the corresponding material according to the grasping position.
  • Compared with the prior art, this application obtains the contour of each material shown in the image transmitted by the camera and then corrects the selected position using the material contour to determine the robot's grasping position.
  • This avoids material-grasping failures caused by a deviation of the selected position when operating the display, or by a mismatch between the actual selected position and the optimal grasping position when the picture happens to refresh; and because every selected position within the same material contour is corrected to a single preset position in the region of that contour, even if the region of the same material contour is selected multiple times the robot is controlled to perform only one grasping action, which avoids repeated (empty) grasps and improves the robot's grasping efficiency.
  • Specifically, the preset position is the center position of the region where the material contour is located.
  • The display is also used to accept operation information.
  • The processing device is further preset with a correspondence between target placement positions of materials and operation information; it receives the operation information transmitted by the display and, according to the operation information, controls the robot to place the material at the corresponding target placement position.
  • The display is a touch-screen display, the selected position is a touch position on the display, and the operation information is a slide in a preset direction starting from the selected position.
  • The present application also provides a robot grasping-position acquisition device for a human-machine collaborative sorting system, which includes a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor; when the processor executes the computer program, it performs the method for obtaining the robot grasping position of the human-machine collaborative sorting system described above.
  • The present application also provides a computer-readable storage medium storing a computer program which can be executed by a processor to carry out the above-mentioned method for obtaining the robot grasping position of the human-machine collaborative sorting system.
  • Fig. 1 is a schematic diagram of the desired selected position and the actual selected position in the prior art.
  • Fig. 2 is a block diagram of part of the human-machine collaborative sorting system of the present application.
  • Fig. 3 is a schematic structural diagram of an embodiment of the human-machine collaborative sorting system of the present application.
  • Fig. 4 is a schematic structural diagram of another embodiment of the human-machine collaborative sorting system of the present application.
  • Fig. 5 is a schematic structural diagram of yet another embodiment of the human-machine collaborative sorting system of the present application.
  • Fig. 6 is a flowchart of the method for obtaining the robot grasping position of the human-machine collaborative sorting system of the present application.
  • Fig. 7 is a schematic diagram of the position correction of the present application.
  • Fig. 8 is a block diagram of the robot grasping-position acquisition device of the human-machine collaborative sorting system of the present application.
  • This embodiment provides a human-machine collaborative sorting system including a conveying mechanism 1, a camera 2, a processing device 3 communicatively connected to the camera 2, a display 4 communicatively connected to the processing device 3, and a robot 5 communicatively connected to the processing device 3. The conveying mechanism 1, the camera 2 and the robot 5 are arranged at the sorting workshop site, and the display 4 is arranged outside the sorting workshop site.
  • The conveying mechanism 1 is used to carry and convey materials.
  • The camera 2 is used to capture images of the materials and transmit the images to the processing device 3.
  • The processing device 3 receives the images transmitted by the camera 2, processes them, and obtains the contour of each material in the image (an image often contains multiple materials; after preprocessing, deep learning is used to obtain the contour of each material contained in the image; how to obtain material contours with deep learning is existing technology and is not repeated here), then transmits the processed image to the display 4. The display 4 receives and shows the image transmitted by the processing device 3; it also accepts the operations of the sorting worker 10 and transmits the selected-position information to the processing device 3. The processing device 3 receives the selected-position information transmitted by the display 4, converts the selected position into a preset position in the region where the material contour is located, and sends this preset position to the robot 5 as the grasping position. That is, a person operates the display 4 to select the position of the material to be grasped by the robot 5, the position is corrected according to the material contour to obtain the final grasping position, and the robot 5 grasps the material according to this final grasping position, thereby realizing material sorting.
  • In the human-machine collaborative sorting system shown in Fig. 3, the camera 2 and the robot 5 both straddle the conveyor belt of the conveying mechanism 1.
  • The camera 2 and the robot 5 are supported by support frames 61 and 62 respectively and are arranged in sequence along the advancing direction of the materials.
  • In some embodiments, the camera 2 straddles the conveyor belt of the conveying mechanism 1 and is supported by a support frame 6, while the robot 5 is arranged at one side of the conveying mechanism 1.
  • The bottom of the robot 5 is provided with multiple wheels 51; when the robot 5 receives a moving driving force, the wheels 51 move relative to the ground, making position adjustment of the robot 5 convenient and labor-saving (as shown in Fig. 4).
  • In some embodiments, both the camera 2 and the robot 5 straddle the conveyor belt of the conveying mechanism 1, and the camera 2 and the robot 5 are integrated together and supported by the same support frame 6 (as shown in Fig. 5).
  • The camera 2 is a 2D camera, the images are RGB images, and the processing device 3 can be any device with image-processing and data-computing capabilities, such as a PC.
  • The preset position is the center position of the region where the material contour is located, but it is not limited to this.
  • S1 is a material contour obtained through deep learning, and P1 is the actual selected position.
  • The selected position P1 lies at the edge of the region where the material contour S1 is located.
  • This application corrects the selected position P1 to obtain the grasping position P2 (that is, the center position of the region where the material contour is located).
  • The display 4 is also used to accept the operation information of the sorting worker 10.
  • The processing device 3 is preset with a correspondence between target placement positions of materials and operation information; the display 4 transmits the operation information to the processing device 3, which receives it and, according to the operation information, controls the robot 5 to place the material at the corresponding target placement position.
  • By controlling the robot 5 to place each material at a preset target placement position, the grasped materials are placed separately.
  • The display 4 is a touch-screen display with both display and touch functions, and the selected position is the touch position of the finger of the sorting worker 10 on the display 4.
  • The display 4 may also have only a display function; in that case the material to be grasped can be selected by clicking on the screen of the display 4 with a mouse.
  • The operation information is a slide in a preset direction starting from the selected position (that is, the touch position of the finger of the sorting worker 10 on the display 4); the operation is simple, quick, and not error-prone.
  • For example, when sliding left from the selected position, the robot 5 is controlled to place the grasped material into the material bin at the first position; when sliding up, into the material bin at the second position; when sliding right, into the material bin at the third position.
  • In a specific implementation, the bins at the first, second and third positions can each be assigned a certain type of material: for example, the bin at the first position holds plastic bottles, the bin at the second position holds cans, and the bin at the third position holds glass bottles.
  • The operation information may also be other gestures, for example drawing a circle clockwise or counterclockwise starting from the selected position, and should not be limited to the above.
  • This embodiment provides a method for obtaining the grasping position of a robot in a human-machine collaborative sorting system.
  • The human-machine collaborative sorting system includes a camera 2, a display 4, and a robot 5.
  • The camera 2 and the robot 5 are arranged at the sorting workshop site, and the display 4 is arranged outside the sorting workshop site.
  • The method for obtaining the robot grasping position includes:
  • S101: receiving the image transmitted by the camera 2 and obtaining the contour of each material from the image (an image often contains multiple materials; after preprocessing, deep learning is used to obtain the contour of each material contained in the image);
  • S102: receiving the selected-position information transmitted by the display 4, finding the material contour corresponding to the selected position, and converting the selected position into a preset position in the region where the material contour is located;
  • S103: sending the preset position to the robot 5 as the grasping position so as to grasp the material.
  • The preset position is the center position of the region where the material contour is located, but it is not limited to this.
  • S1 is a material contour obtained through deep learning, and P1 is the actual selected position.
  • The selected position P1 lies at the edge of the region where the material contour S1 is located.
  • This application corrects the selected position P1 to obtain the grasping position P2 (that is, the center position of the region where the material contour is located).
  • In S102, operation information transmitted by the display 4 is also received, the operation information corresponding to a target placement position of the material, and the target placement position corresponding to the current operation information is then looked up; in S103, the target placement position is also sent to the robot 5 to control the robot 5 to place the material at the corresponding target placement position.
  • The target placement position of the material is obtained from the correspondence between operation information and target placement positions, and is then sent to the robot 5 so that the grasped materials are placed separately.
  • Materials of different categories can be set to be placed at different target placement positions, so that the grasped materials are finely classified directly during sorting, reducing subsequent sorting steps.
  • A touch-screen display with both display and touch functions is selected as the display 4, and the selected position is the touch position of the finger of the sorting worker 10 on the display 4.
  • The operation information is a slide in a preset direction starting from the selected position (that is, the touch position of the finger of the sorting worker 10 on the display 4); the operation is simple, quick, and not error-prone. For example, when sliding left from the selected position, the corresponding target placement position is the first position, and the robot 5 places the grasped material at the first position; when sliding up, the corresponding target placement position is the second position, and the robot 5 places the grasped material at the second position; when sliding right, the corresponding target placement position is the third position, and the robot 5 places the grasped material at the third position. In a specific implementation, the first, second and third positions can each be assigned a certain type of material, such as plastic bottles at the first position, cans at the second position, and glass bottles at the third position.
  • The operation information may also be other gestures, for example drawing a circle clockwise or counterclockwise starting from the selected position, and should not be limited to the above.
  • This application also discloses a robot grasping-position acquisition device 200 for a human-machine collaborative sorting system, which includes a processor 210, a memory 220, and a computer program 230 stored in the memory 220 and configured to be executed by the processor 210; when the processor 210 executes the computer program 230, it performs the above-described method for obtaining the robot grasping position of the human-machine collaborative sorting system.
  • The contour of each material shown in the image transmitted by the camera 2 is obtained, and the selected position is then corrected using the material contour to determine the grasping position of the robot 5; this avoids material-grasping failures caused by a deviation of the selected position when manually operating the display 4, or by a mismatch between the actual selected position and the optimal grasping position when the picture refreshes.
  • Even if the region of the same material contour is selected multiple times, the robot 5 is controlled to perform only one grasping action, which avoids repeated (empty) grasps by the robot 5 and improves its grasping efficiency.
  • Since material sorting is performed in a human-machine collaborative manner, sorting workers do not need to work at the sorting site, which avoids the health problems caused by long-term exposure to the harsh sorting environment.


Abstract

A method for obtaining the grasping position of a robot in a human-machine collaborative sorting system, comprising: receiving an image transmitted by a camera (2) and obtaining the contour of each material from the image; receiving selected-position information transmitted by a display (4), finding the material contour corresponding to the selected position, and converting the selected position into a preset position in the region where the material contour is located; and sending the preset position to a robot (5) as a grasping position so as to grasp the material. The method corrects the selected position using the material contour to determine the robot's grasping position, which avoids grasp failures caused by a mismatch between the actual selected position and the optimal grasping position, and avoids repeated grasps caused by selecting the region of the same material contour multiple times, thereby improving the robot's grasping efficiency. Also disclosed is a human-machine collaborative sorting system that corrects the selected position using this method.

Description

Human-machine collaborative sorting system and method for obtaining the grasping position of its robot. Technical Field
This application relates to the technical field of material sorting, and in particular to a human-machine collaborative sorting system and a method for obtaining the grasping position of its robot.
Background Art
The environment at a garbage sorting site is extremely harsh, full of dust and noise; sorting workers who spend long periods in such an environment are very likely to suffer irreversible health problems.
To solve this problem, Chinese patent application CN201710649904.0 proposes a method in which an artificial-intelligence robot sorts large-size organic light objects from construction waste: the waste is placed on a conveyor line, a camera captures images of the waste, and a display outside the sorting site shows them; a person identifies the garbage in the displayed image to judge the position of each item, operates the display to select the position of a particular item in the image, and the display then transmits the selected position information to the on-site sorting robot, which grasps the garbage.
However, when a person operates the display, the selected position may deviate, so that the selected position P1' actually obtained by the display does not match the desired selected position P2' (as shown in Fig. 1); in addition, if the display happens to refresh the picture at the moment of operation, the actual selected position P1' will also deviate from the desired selected position P2'; it may even happen that the position of the same piece of garbage is operated on several times, causing the robot to perform several grasping (empty-grasping) actions on the same item. All of this greatly reduces the grasping efficiency of the sorting robot and ultimately leads to low garbage sorting efficiency.
Summary of the Application
The purpose of this application is to provide a method for obtaining the grasping position of a robot in a human-machine collaborative sorting system that automatically corrects the selected position so as to improve the robot's grasping efficiency.
Another purpose of this application is to provide a human-machine collaborative sorting system capable of automatically correcting the selected position.
To achieve the above purposes, this application discloses a method for obtaining the grasping position of a robot in a human-machine collaborative sorting system, the system comprising a camera, a display and a robot, the method comprising:
receiving an image transmitted by the camera, and obtaining the contour of each material from the image;
receiving selected-position information transmitted by the display, and converting the selected position into a preset position in the region where the material contour is located;
sending the preset position to the robot as a grasping position so as to grasp the material.
Compared with the prior art, this application obtains the contour of each material shown in the image transmitted by the camera and then corrects the selected position using the material contour to determine the robot's grasping position. This avoids grasp failures caused by a deviation of the selected position when operating the display, or by a mismatch between the actual selected position and the optimal grasping position when the picture happens to refresh. Furthermore, because every selected position within the same material contour is corrected to a single predetermined position in the region of that contour, even if the region of the same material contour is selected multiple times the robot is controlled to perform only one grasping action, which avoids repeated (empty) grasps and improves the robot's grasping efficiency.
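The single-grasp-per-contour behaviour described above can be sketched in a few lines. This is an illustrative sketch only; the class name `GraspDispatcher` and the id-keyed table of preset positions are assumptions, not taken from the patent:

```python
class GraspDispatcher:
    """Collapse repeated selections of the same material contour into one grasp.

    Hypothetical sketch: contours are identified by an integer id, and the
    preset (grasp) position of each contour is assumed to be precomputed
    elsewhere, e.g. as the contour's center.
    """

    def __init__(self, preset_positions):
        # preset_positions: {contour_id: (x, y)}
        self.preset_positions = preset_positions
        self.dispatched = set()  # contour ids already sent to the robot

    def select(self, contour_id):
        """Return the grasp position for a newly selected contour, or None
        if this contour was already dispatched (avoids empty re-grasps)."""
        if contour_id in self.dispatched:
            return None
        self.dispatched.add(contour_id)
        return self.preset_positions[contour_id]
```

A second tap on the same contour returns `None`, so only one grasp command per material reaches the robot.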
Specifically, the preset position is the center position of the region where the material contour is located.
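As one concrete reading of the "center position of the region where the material contour is located", the centroid of a polygonal contour can be computed with the shoelace formula. The patent does not prescribe a particular computation, so this choice is an assumption:

```python
def contour_centroid(points):
    """Centroid of a simple polygon given as [(x, y), ...] vertices,
    using the shoelace formula."""
    a = cx = cy = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        cross = x0 * y1 - x1 * y0  # signed area contribution of this edge
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return (cx / (6 * a), cy / (6 * a))
```

For a pixel-mask contour, the mean of the region's pixel coordinates would serve equally well.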
Preferably, the method further comprises: receiving operation information transmitted by the display, the operation information corresponding to a target placement position of the material; and finding the target placement position corresponding to the current operation information and sending it to the robot so as to control the robot to place the material at the corresponding target placement position.
More preferably, the display is a touch-screen display, the selected position is a touch position on the display, and the operation information is a slide in a preset direction starting from the selected position.
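The swipe-based operation information can be classified from the start and end touch points. The left/up/right mapping below follows the example given later in the description; the distance threshold and the bin names are illustrative assumptions:

```python
def classify_swipe(start, end, min_dist=30):
    """Map a touch swipe to a target bin: left -> bin_1, up -> bin_2,
    right -> bin_3 (illustrative names). Returns None for gestures that
    are too short or unmapped."""
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    if dx * dx + dy * dy < min_dist * min_dist:
        return None  # too short to count as a swipe
    if abs(dx) >= abs(dy):
        return "bin_1" if dx < 0 else "bin_3"
    # screen y grows downward, so an upward swipe has dy < 0
    return "bin_2" if dy < 0 else None
```

A circle gesture, also mentioned as an option, would need a different detector (e.g. checking the winding of the touch trajectory).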
To achieve the above purposes, this application also discloses a human-machine collaborative sorting system comprising a conveying mechanism, a camera, a processing device communicatively connected with the camera, and a display and a robot communicatively connected with the processing device, wherein the conveying mechanism is used to carry and convey materials; the camera is used to capture images of the materials and transmit them to the processing device; the display is used to display the images and accept operations; the processing device executes: receiving the image transmitted by the camera and obtaining the contour of each material from the image; receiving the selected-position information transmitted by the display, finding the material contour corresponding to the selected position, and converting the selected position into a preset position in the region where the material contour is located; and sending the preset position to the robot as a grasping position; and the robot grasps the corresponding material according to the grasping position.
Compared with the prior art, this application obtains the contour of each material shown in the image transmitted by the camera and then corrects the selected position using the material contour to determine the robot's grasping position. This avoids grasp failures caused by a deviation of the selected position when operating the display, or by a mismatch between the actual selected position and the optimal grasping position when the picture happens to refresh. Furthermore, because every selected position within the same material contour is corrected to a single predetermined position in the region of that contour, even if the region of the same material contour is selected multiple times the robot is controlled to perform only one grasping action, which avoids repeated (empty) grasps and improves the robot's grasping efficiency.
Specifically, the preset position is the center position of the region where the material contour is located.
Preferably, the display is also used to accept operation information, and the processing device is further preset with a correspondence between target placement positions of materials and operation information; it receives the operation information transmitted by the display and, according to the operation information, controls the robot to place the material at the corresponding target placement position.
More preferably, the display is a touch-screen display, the selected position is a touch position on the display, and the operation information is a slide in a preset direction starting from the selected position.
To achieve the above purposes, this application correspondingly further provides a robot grasping-position acquisition device for a human-machine collaborative sorting system, comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor; when the processor executes the computer program, it performs the method for obtaining the robot grasping position of the human-machine collaborative sorting system described above.
To achieve the above purposes, this application correspondingly further provides a computer-readable storage medium storing a computer program which can be executed by a processor to carry out the method for obtaining the robot grasping position of the human-machine collaborative sorting system described above.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the desired selected position and the actual selected position in the prior art.
Fig. 2 is a block diagram of part of the human-machine collaborative sorting system of this application.
Fig. 3 is a schematic structural diagram of one embodiment of the human-machine collaborative sorting system of this application.
Fig. 4 is a schematic structural diagram of another embodiment of the human-machine collaborative sorting system of this application.
Fig. 5 is a schematic structural diagram of yet another embodiment of the human-machine collaborative sorting system of this application.
Fig. 6 is a flowchart of the method for obtaining the robot grasping position of the human-machine collaborative sorting system of this application.
Fig. 7 is a schematic diagram of the position correction of this application.
Fig. 8 is a block diagram of the robot grasping-position acquisition device of the human-machine collaborative sorting system of this application.
Detailed Description
To explain in detail the content, structural features, objects and effects of this application, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the scope of protection of this application.
Referring to Figs. 2 and 3, this embodiment provides a human-machine collaborative sorting system comprising a conveying mechanism 1, a camera 2, a processing device 3 communicatively connected with the camera 2, a display 4 communicatively connected with the processing device 3, and a robot 5 communicatively connected with the processing device 3. The conveying mechanism 1, the camera 2 and the robot 5 are arranged at the sorting workshop site, while the display 4 is arranged outside the sorting workshop site. The conveying mechanism 1 is used to carry and convey materials; the camera 2 is used to capture images of the materials and transmit them to the processing device 3; the processing device 3 receives the images transmitted by the camera 2, processes them, and obtains the contour of each material in the image (an image often contains multiple materials; after preprocessing, deep learning is used to obtain the contour of each material contained in the image; how to obtain material contours with deep learning is existing technology and is not repeated here), and transmits the processed image to the display 4. The display 4 receives and displays the image transmitted by the processing device 3; it also accepts the operations of the sorting worker 10 and transmits the selected-position information to the processing device 3. The processing device 3 receives the selected-position information transmitted by the display 4, converts the selected position into a preset position in the region where the material contour is located, and then sends this preset position to the robot 5 as the grasping position. That is, a person operates the display 4 to select the position of the material to be grasped by the robot 5, the position is corrected according to the material contour to obtain the final grasping position, and the robot 5 grasps the material according to this final grasping position, thereby realizing material sorting.
In the human-machine collaborative sorting system shown in Fig. 3, the camera 2 and the robot 5 both straddle the conveyor belt of the conveying mechanism 1, are supported by support frames 61 and 62 respectively, and are arranged in sequence along the advancing direction of the materials. In some embodiments, the camera 2 straddles the conveyor belt of the conveying mechanism 1 and is supported by a support frame 6, while the robot 5 is arranged at one side of the conveying mechanism 1 and has several wheels 51 at its bottom; when the robot 5 receives a moving driving force, the wheels 51 move relative to the ground, making position adjustment of the robot 5 convenient and labor-saving (as shown in Fig. 4). In some embodiments, the camera 2 and the robot 5 both straddle the conveyor belt of the conveying mechanism 1 and are integrated together and supported by the same support frame 6 (as shown in Fig. 5). Here the camera 2 is a 2D camera, the images are RGB images, and the processing device 3 may be any device with image-processing and data-computing capabilities, such as a PC.
Specifically, the preset position is the center position of the region where the material contour is located, but it is not limited to this. As shown in Fig. 7, S1 is a material contour obtained through deep learning and P1 is the actual selected position, which lies at the edge of the region where the material contour S1 is located; this application corrects the selected position P1 to obtain the grasping position P2 (that is, the center position of the region where the material contour is located).
Specifically, the display 4 is also used to accept the operation information of the sorting worker 10. The processing device 3 is preset with a correspondence between target placement positions of materials and operation information; the display 4 transmits the operation information to the processing device 3, which receives it and, according to the operation information, controls the robot 5 to place the material at the corresponding target placement position. By using different operation information to control where the robot 5 places each material, the grasped materials are placed separately; in a specific implementation, materials of different categories can be placed at different target placement positions, so that the grasped materials are finely classified directly during sorting, reducing subsequent sorting steps.
Preferably, the display 4 is a touch-screen display with both display and touch functions, and the selected position is the touch position of the finger of the sorting worker 10 on the display 4. Of course, the display 4 may also have only a display function; in a specific implementation, the material to be grasped can then be selected by clicking on the screen of the display 4 with a mouse. In one embodiment, the operation information is a slide in a preset direction starting from the selected position (that is, the touch position of the finger of the sorting worker 10 on the display 4); the operation is simple, quick, and not error-prone. For example, when sliding left from the selected position, the robot 5 is controlled to place the grasped material into the material bin at a first position; when sliding up, into the material bin at a second position; when sliding right, into the material bin at a third position. In a specific implementation, the bins at the first, second and third positions can each be assigned a certain type of material, for example plastic bottles in the bin at the first position, cans in the bin at the second position, glass bottles in the bin at the third position, and so on. Of course, in some embodiments the operation information may also be other gestures, for example drawing a circle clockwise or counterclockwise starting from the selected position, and is not limited to the above.
Referring to Figs. 3 and 6, this embodiment provides a method for obtaining the grasping position of a robot in a human-machine collaborative sorting system. The system comprises a camera 2, a display 4 and a robot 5; the camera 2 and the robot 5 are arranged at the sorting workshop site, and the display 4 is arranged outside the sorting workshop site. The method comprises:
S101: receiving the image transmitted by the camera 2, and obtaining the contour of each material from the image (an image often contains multiple materials; after preprocessing, deep learning is used to obtain the contour of each material contained in the image);
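Step S101 relies on deep learning for contour extraction. As a minimal stand-in, assuming a binary segmentation mask is already available, the per-material regions can be separated with simple connected-component labeling:

```python
from collections import deque

def label_materials(mask):
    """Group foreground pixels of a binary mask into per-material regions.

    Illustrative stand-in for the patent's deep-learning contour extraction:
    splits the mask into 4-connected components and returns
    {label: [(row, col), ...]}.
    """
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    regions, next_label = {}, 1
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                q = deque([(r, c)])            # breadth-first flood fill
                labels[r][c] = next_label
                regions[next_label] = []
                while q:
                    y, x = q.popleft()
                    regions[next_label].append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
                next_label += 1
    return regions
```

In a real system the mask would come from the segmentation network, and a contour tracer (or a library routine) would turn each region into a polygon.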
S102: receiving the selected-position information transmitted by the display 4, finding the material contour corresponding to the selected position, and converting the selected position into a preset position in the region where the material contour is located;
S103: sending the preset position to the robot 5 as the grasping position so as to grasp the material.
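Steps S102 and S103 can be sketched as follows: a point-in-polygon test finds the contour containing the selected position, which is then replaced by a preset position. Here the bounding-box center is used as an illustrative choice of preset position; the contour list and function names are assumptions:

```python
def point_in_polygon(p, poly):
    """Ray-casting test: is point p inside polygon poly [(x, y), ...]?"""
    x, y = p
    inside = False
    n = len(poly)
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        # does a ray to the right of p cross this edge?
        if (y0 > y) != (y1 > y) and x < x0 + (y - y0) * (x1 - x0) / (y1 - y0):
            inside = not inside
    return inside

def grasp_position(selected, contours):
    """S102-S103 in miniature: find the contour containing the selected
    position and return its preset position (bounding-box center here).
    Returns None when the selection misses every contour."""
    for poly in contours:
        if point_in_polygon(selected, poly):
            xs = [pt[0] for pt in poly]
            ys = [pt[1] for pt in poly]
            return ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2)
    return None
```

A full system would also map the corrected image coordinates into the robot's coordinate frame via a camera-to-robot calibration, which the patent leaves to existing technology.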
Specifically, the preset position is the center position of the region where the material contour is located, but it is not limited to this. As shown in Fig. 7, S1 is a material contour obtained through deep learning and P1 is the actual selected position, which lies at the edge of the region where the material contour S1 is located; this application corrects the selected position P1 to obtain the grasping position P2 (that is, the center position of the region where the material contour is located).
Specifically, in S102, operation information transmitted by the display 4 is also received, the operation information corresponding to a target placement position of the material, and the target placement position corresponding to the current operation information is then looked up; in S103, the target placement position is also sent to the robot 5 so as to control the robot 5 to place the material at the corresponding target placement position. By obtaining the operation information, the target placement position of the material is determined from the correspondence between operation information and target placement positions and sent to the robot 5, so that the grasped materials are placed separately; in a specific implementation, materials of different categories can be placed at different target placement positions, so that the grasped materials are finely classified directly during sorting, reducing subsequent sorting steps.
Preferably, in one embodiment, a touch-screen display with both display and touch functions is selected as the display 4, and the selected position is the touch position of the finger of the sorting worker 10 on the display 4. Of course, a display 4 with only a display function may also be used; in a specific implementation, the material to be grasped can then be selected by clicking on the screen of the display 4 with a mouse. In one embodiment, the operation information is a slide in a preset direction starting from the selected position (that is, the touch position of the finger of the sorting worker 10 on the display 4); the operation is simple, quick, and not error-prone. For example, when sliding left from the selected position, the corresponding target placement position is the first position, and the robot 5 places the grasped material at the first position; when sliding up, the corresponding target placement position is the second position, and the robot 5 places the grasped material at the second position; when sliding right, the corresponding target placement position is the third position, and the robot 5 places the grasped material at the third position. In a specific implementation, the first, second and third positions can each be assigned a certain type of material, for example plastic bottles at the first position, cans at the second position, glass bottles at the third position, and so on. Of course, in some embodiments the operation information may also be other gestures, for example drawing a circle clockwise or counterclockwise starting from the selected position, and is not limited to the above.
Referring to Fig. 8, this application also discloses a robot grasping-position acquisition device 200 for a human-machine collaborative sorting system, comprising a processor 210, a memory 220, and a computer program 230 stored in the memory 220 and configured to be executed by the processor 210; when the processor 210 executes the computer program 230, it performs the method for obtaining the robot grasping position of the human-machine collaborative sorting system described above.
Compared with the prior art, this application obtains the contour of each material shown in the image transmitted by the camera 2 and then corrects the selected position using the material contour to determine the grasping position of the robot 5. This avoids grasp failures caused by a deviation of the selected position when operating the display 4, or by a mismatch between the actual selected position and the optimal grasping position when the picture happens to refresh. Furthermore, because every selected position within the same material contour is corrected to a single predetermined position in the region of that contour, even if the region of the same material contour is selected multiple times the robot 5 is controlled to perform only one grasping action, which avoids repeated (empty) grasps and improves the grasping efficiency of the robot 5. In addition, because material sorting is performed in a human-machine collaborative manner, sorting workers do not need to work at the sorting site, avoiding the health problems caused by long-term exposure to the harsh sorting environment.
What is disclosed above is only preferred embodiments of this application and certainly cannot limit the scope of its claims; equivalent changes made according to the claims of this application therefore remain within the scope covered by this application.

Claims (10)

  1. A method for obtaining the grasping position of a robot in a human-machine collaborative sorting system, the human-machine collaborative sorting system comprising a camera, a display and a robot, characterized in that the method comprises:
    receiving an image transmitted by the camera, and obtaining the contour of each material from the image;
    receiving selected-position information transmitted by the display, finding the material contour corresponding to the selected position, and converting the selected position into a preset position in the region where the material contour is located;
    sending the preset position to the robot as a grasping position so as to grasp the material.
  2. The method according to claim 1, characterized in that the preset position is the center position of the region where the material contour is located.
  3. The method according to claim 1, characterized in that it further comprises:
    receiving operation information transmitted by the display, the operation information corresponding to a target placement position of the material;
    finding the target placement position corresponding to the current operation information and sending it to the robot so as to control the robot to place the material at the corresponding target placement position.
  4. The method according to claim 3, characterized in that the display is a touch-screen display, the selected position is a touch position on the display, and the operation information is a slide in a preset direction starting from the selected position.
  5. A human-machine collaborative sorting system comprising a conveying mechanism, a camera, a processing device communicatively connected with the camera, and a display and a robot communicatively connected with the processing device, characterized in that:
    the conveying mechanism is used to carry and convey materials;
    the camera is used to capture images of the materials and transmit them to the processing device;
    the display is used to display the images and accept operations;
    the processing device executes: receiving the image transmitted by the camera, and obtaining the contour of each material from the image; receiving the selected-position information transmitted by the display, finding the material contour corresponding to the selected position, and converting the selected position into a preset position in the region where the material contour is located; and sending the preset position to the robot as a grasping position;
    the robot grasps the corresponding material according to the grasping position.
  6. The system according to claim 5, characterized in that the preset position is the center position of the region where the material contour is located.
  7. The system according to claim 5, characterized in that the display is also used to accept operation information, and the processing device is further preset with a correspondence between target placement positions of materials and operation information; it receives the operation information transmitted by the display and, according to the operation information, controls the robot to place the material at the corresponding target placement position.
  8. The system according to claim 7, characterized in that the display is a touch-screen display, the selected position is a touch position on the display, and the operation information is a slide in a preset direction starting from the selected position.
  9. A robot grasping-position acquisition device for a human-machine collaborative sorting system, characterized in that it comprises: a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor; when the processor executes the computer program, it performs the method for obtaining the robot grasping position of the human-machine collaborative sorting system according to any one of claims 1 to 4.
  10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which can be executed by a processor to carry out the method for obtaining the robot grasping position of the human-machine collaborative sorting system according to any one of claims 1 to 4.
PCT/CN2020/104787 2020-04-26 2020-07-27 Human-machine collaborative sorting system and method for obtaining the grasping position of its robot WO2021217922A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010342284.8 2020-04-26
CN202010342284.8A CN111515149B (zh) 2020-04-26 2020-04-26 Human-machine collaborative sorting system and method for obtaining the grasping position of its robot

Publications (1)

Publication Number Publication Date
WO2021217922A1 (zh)

Family

ID=71906302

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/104787 WO2021217922A1 (zh) 2020-04-26 2020-07-27 Human-machine collaborative sorting system and method for obtaining the grasping position of its robot

Country Status (2)

Country Link
CN (1) CN111515149B (zh)
WO (1) WO2021217922A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115519544B (zh) * 2022-10-10 2023-12-15 深圳进化动力数码科技有限公司 Fresh-produce sorting robot grasping method, device, equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001252886A * 2000-03-10 2001-09-18 Hitachi Zosen Corp Object handling system
CN206046503U * 2016-08-30 2017-03-29 苏州德品医疗科技股份有限公司 Gesture-controlled medical waste classification system
CN108381549A * 2018-01-26 2018-08-10 广东三三智能科技有限公司 Binocular-vision-guided rapid robot grasping method, device and storage medium
CN109365318A * 2018-11-30 2019-02-22 天津大学 Multi-robot collaborative sorting method and system
KR101986451B1 * 2018-03-27 2019-06-10 한국로봇융합연구원 Manipulator control method and control system for an underwater robot
CN110293554A * 2018-03-21 2019-10-01 北京猎户星空科技有限公司 Robot control method, device and system
CN110660104A * 2019-09-29 2020-01-07 珠海格力电器股份有限公司 Industrial robot visual recognition, positioning and grasping method, computer device and computer-readable storage medium

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DD233314A1 * 1984-12-28 1986-02-26 Fortschritt Veb K Adjustment and display device on automatic separating plants
JP3387258B2 * 1995-03-10 2003-03-17 株式会社ニコン Two-dimensional imaging scanner
JPH10180667A * 1996-12-26 1998-07-07 Nkk Corp Robot system for sorting specific objects from waste
JP3834766B2 * 2000-04-03 2006-10-18 独立行政法人科学技術振興機構 Man-machine interface system
US6940511B2 * 2002-06-07 2005-09-06 Telefonaktiebolaget L M Ericsson (Publ) Graphics texture processing methods, apparatus and computer program products using texture compression, block overlapping and/or texture filtering
CN102581851A * 2011-01-14 2012-07-18 鸿富锦精密工业(深圳)有限公司 Robotic arm motion control system and method
CN103157607B * 2011-12-19 2014-07-23 华南理工大学 Article recognition and sorting device and method
CN103706571B * 2013-12-27 2015-12-09 西安航天精密机电研究所 Visual positioning sorting method
CN107030687A * 2016-02-04 2017-08-11 上海晨兴希姆通电子科技有限公司 Position-offset detection method and module, grasping-position calibration method, and grasping system
CN106000904B * 2016-05-26 2018-04-10 北京新长征天高智机科技有限公司 Automatic household waste sorting system
CN106275993A * 2016-08-30 2017-01-04 苏州德品医疗科技股份有限公司 Linear automatic waste classification and recycling system
CN106485207B * 2016-09-21 2019-11-22 清华大学 Fingertip detection method and system based on binocular vision images
CN107790402A * 2017-07-24 2018-03-13 梁超 Method for sorting large-size organic light objects in construction waste with an artificial-intelligence robot
CN208713965U * 2018-07-23 2019-04-09 沈阳城市建设学院 Assembly-line sorting robot based on human-machine collaboration
CN109483517A * 2018-10-22 2019-03-19 天津扬天科技有限公司 Collaborative robot teaching method based on hand-pose tracking
CN109471841B * 2018-10-31 2022-02-01 维沃移动通信有限公司 File classification method and device
CN109459984B * 2018-11-02 2021-02-12 宁夏巨能机器人股份有限公司 Positioning and grasping system based on three-dimensional point cloud and method of use
CN110427845B * 2019-07-19 2022-12-16 广东弓叶科技有限公司 Method, apparatus and device for determining the pixel center of an article, and readable storage medium
CN110533681B * 2019-08-26 2023-01-06 广东弓叶科技有限公司 Article grasping method, apparatus and device, and computer-readable storage medium


Also Published As

Publication number Publication date
CN111515149B (zh) 2020-12-29
CN111515149A (zh) 2020-08-11

Similar Documents

Publication Publication Date Title
US11213860B2 Automatic magnetic core sorting system based on machine vision
CN111496770B Intelligent handling robotic-arm system based on 3D vision and deep learning and method of use
DE102019125126B4 Information processing device, information processing method and system
CN207057050U Optical inspection equipment for appearance quality defects of tiny parts
WO2017015898A1 Control system for robot depalletizing equipment and method for controlling robot depalletizing
CN105710978B Cutting method applying an intelligent stone-cutting production line
US20120229620A1 Image processing apparatus and image processing system, and guidance apparatus therefor
JP2013022705A Robot device, robot device control method, computer program, and robot system
CN110302981B Online grasping method and system for solid waste sorting
US20150062172A1 Image processing apparatus
CN109658432A Boundary generation method and system for a mobile robot
WO2021217922A1 Human-machine collaborative sorting system and method for obtaining the grasping position of its robot
JP2021030107A Article sorting device, article sorting system and article sorting method
WO2024021402A1 Visual-positioning-based material picking and unloading method and device
CN103063137A Machine-vision-based medicine bottle measurement system and measurement method
US20230017444A1 Method for the computer-assisted learning of an artificial neural network for detecting structural features of objects
CN112090782A Human-machine collaborative garbage sorting system and method
WO2021217923A1 Human-machine collaborative sorting system and method for sorting multiple types of materials
CN112288819B Multi-source-data-fusion vision-guided robot grasping and classification system and method
CN116175542B Method and apparatus for determining gripper grasping order, electronic device and storage medium
US20240051134A1 Controller, robot system and learning device
JP7481205B2 Robot system, robot control method, information processing device, computer program, learning device, and method for generating a trained model
CN116529760A Grasping control method and apparatus, electronic device and storage medium
CN114192447A Image-recognition-based garbage sorting method
WO2022241597A1 AI intelligent garbage recognition and classification system and method

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 20933998

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20933998

Country of ref document: EP

Kind code of ref document: A1