WO2020034872A1 - Target acquisition method and device, and computer readable storage medium

Target acquisition method and device, and computer readable storage medium

Info

Publication number: WO2020034872A1
Application number: PCT/CN2019/099398
Authority: WO (WIPO PCT)
Prior art keywords: target, image, obtaining, information, target acquisition
Other languages: French (fr), Chinese (zh)
Inventor: 吕仕杰
Original Assignee: 深圳蓝胖子机器人有限公司
Application filed by 深圳蓝胖子机器人有限公司
Publication of WO2020034872A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition

Abstract

Disclosed are a target acquisition method and device, and a computer-readable storage medium. The target acquisition method comprises: acquiring a first image captured by a first visual structure; inputting the first image into a pre-trained neural network, which segments each object in the two-dimensional information of the first image and obtains a pixel set corresponding to each object; obtaining border information of a target to be acquired according to the pixel set of the target, and obtaining depth information of the target according to the three-dimensional information of the first image; and controlling, according to the border information and the depth information, a manipulator to move and acquire the target. The present invention automatically sorts stacked objects and improves the success rate of robotic sorting.

Description

Target acquisition method, device, and computer-readable storage medium
Technical Field
The present invention relates to the field of robotic sorting, and in particular to a target acquisition method, a device, and a computer-readable storage medium.
Background Art
With the development of logistics automation, robotic sorting is becoming more and more popular. More and more goods need to be sorted quickly, and in practice many goods are stacked together and need to be sorted.
However, in existing sorting schemes, stacked goods are usually sorted manually, so the efficiency is low.
Summary of the Invention
The main object of the present invention is to provide a target acquisition method, a device, and a computer-readable storage medium, which aim to automatically sort stacked objects and improve the success rate of robotic sorting.
To achieve the above object, the present invention provides a target acquisition method for a robot to sort overlapping objects. The target acquisition method includes: obtaining a first image captured by a first visual structure; inputting the first image into a pre-trained neural network for computation, segmenting each object in the two-dimensional information of the first image; obtaining border information of a target to be acquired according to the pixel set of the target, and obtaining depth information of the target according to the three-dimensional information of the first image; and controlling a manipulator to move and acquire the target according to the border information and the depth information.
Optionally, the target acquisition method further includes:
obtaining a plurality of training images;
obtaining, according to an input instruction, annotations of objects in the training images whose completeness reaches 70%;
training the neural network according to the training images and the corresponding annotations.
Optionally, segmenting each object in the two-dimensional information of the first image includes:
segmenting each object in the image through Fully Convolutional Instance-aware Semantic Segmentation.
Optionally, obtaining the border information of the target according to the pixel set of the target includes:
extracting the border of the target according to the pixel set of the target and the RANSAC method.
Optionally, the target acquisition method further includes:
obtaining a second image of the target captured by a second visual structure;
obtaining a current pose of the target according to the three-dimensional information of the second image;
when the current pose does not match a preset pose, controlling the manipulator to adjust the pose so that the target is in the preset pose.
The present invention also provides a target acquisition device. The target acquisition device includes a processor, a memory, and a target acquisition program stored on the memory and executable on the processor. When executed by the processor, the target acquisition program implements the following steps:
obtaining a first image captured by a first visual structure located above;
inputting the first image into a pre-trained neural network for computation, segmenting each object in the two-dimensional information of the first image, and obtaining a pixel set corresponding to each object;
obtaining border information of a target to be acquired according to the pixel set of the target, and obtaining depth information of the target according to the three-dimensional information of the first image;
controlling a manipulator to move and acquire the target according to the border information and the depth information.
Optionally, when executed by the processor, the target acquisition program further implements the following steps:
obtaining a plurality of training images;
obtaining, according to an input instruction, annotations of objects in the training images whose completeness reaches 70%;
training the neural network according to the training images and the corresponding annotations.
Optionally, segmenting each object in the two-dimensional information of the first image includes:
segmenting each object in the image through Fully Convolutional Instance-aware Semantic Segmentation.
Optionally, when executed by the processor, the target acquisition program further performs the following steps:
obtaining a second image of the target captured by a second visual structure located below;
obtaining a current pose of the target according to the three-dimensional information of the second image;
when the current pose does not match a preset pose, controlling the manipulator to adjust the acquisition action so that the target is in the preset pose.
The present invention also provides a computer-readable storage medium, on which a target acquisition program is stored; when executed by a processor, the target acquisition program implements the steps of the target acquisition method described above.
In the target acquisition method provided by the present invention, a first image is obtained through the first visual structure, and the target to be acquired is identified from the two-dimensional information in the first image. Then, by combining the two-dimensional and three-dimensional information, the lateral and vertical movements required of the manipulator are obtained. Finally, the manipulator moves to acquire the target. Therefore, in this embodiment, the combination of two-dimensional and three-dimensional information makes it possible to photograph multiple objects, identify the target to be acquired among them, and obtain the position of the target for the manipulator to acquire. The whole process is handled automatically, without manual intervention, and is therefore automated. Moreover, this embodiment combines two and three dimensions: the target is recognized and its border information obtained from the two-dimensional information, and the depth information is obtained from the three-dimensional information. This process is efficient, accurate, and compact in design.
Brief Description of the Drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from the structures shown in these drawings without creative work.
FIG. 1 is a flowchart of a first embodiment of the target acquisition method of the present invention;
FIG. 2 is a schematic diagram of an application example of the target acquisition method shown in FIG. 1;
FIG. 3 is a partial flowchart of a second embodiment of the target acquisition method of the present invention;
FIG. 4 is a partial flowchart of a third embodiment of the target acquisition method of the present invention.
The realization of the objects, functional features, and advantages of the present invention will be further described with reference to the embodiments and the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it.
Embodiment 1
This embodiment provides a target acquisition method for a robot to sort overlapping objects.
Referring to FIG. 1 and FIG. 2 together, the target acquisition method includes:
Step S101: obtaining a first image captured by the first visual structure 100.
Step S102: inputting the first image into a pre-trained neural network for computation, segmenting each object in the two-dimensional information of the first image, and obtaining a pixel set corresponding to each object.
Step S103: obtaining border information of the target 300 according to the pixel set of the target 300 to be acquired, and obtaining depth information of the target 300 according to the three-dimensional information of the first image.
Step S104: controlling a manipulator to move and acquire the target 300 according to the border information and the depth information.
In this embodiment, a first image captured by the first visual structure 100 located above is obtained first. The first visual structure 100 can obtain both an RGB image and a 3D image; for example, two separate cameras may be installed to obtain the RGB image and the 3D image respectively, or a binocular camera may be used to obtain the RGB image and compute the 3D image from it. Therefore, the first image obtained by the system includes both RGB information and 3D information. Typically, the manipulator approaches the target from above, picks it up, and then moves upward with it. Therefore, in this embodiment, the first visual structure 100 is arranged above and shoots downward.
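For illustration only, a minimal sketch of acquiring an aligned RGB image and depth map from a single RGB-D sensor might look as follows in Python. The use of an Intel RealSense sensor via pyrealsense2, and the chosen resolution and frame rate, are assumptions; the patent only requires that the first visual structure provide both RGB and 3D information.

```python
import numpy as np
import pyrealsense2 as rs  # assumed sensor SDK; any aligned RGB-D source would do

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.rgb8, 30)
pipeline.start(config)

# Align the depth frame to the color frame so the 2D and 3D information
# of the "first image" share the same pixel grid.
align = rs.align(rs.stream.color)
frames = align.process(pipeline.wait_for_frames())
rgb = np.asanyarray(frames.get_color_frame().get_data())     # 2D information
depth = np.asanyarray(frames.get_depth_frame().get_data())   # 3D (depth) information
```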
In this embodiment, after the first image is obtained, it is input into the pre-trained neural network for computation: each object in the two-dimensional information of the first image is segmented, and a pixel set corresponding to each object is obtained. The system obtains the two-dimensional information from the first image, for example by taking a two-dimensional image directly, or by removing the depth information from the three-dimensional image to obtain a two-dimensional image. The two-dimensional image is then used as the input of the neural network. Given an input value, the pre-trained neural network computes the output value according to the mapping obtained through training.
The neural network can perform the convolution, classification, and upsampling operations through the Fully Convolutional Instance-aware Semantic Segmentation (FCIS) scheme. By performing convolution on the two-dimensional information, pixel classification can be carried out efficiently and accurately. After classification, the downsampled feature map is upsampled to obtain a classified image, achieving the effect of segmenting each object in the two-dimensional information of the first image. Since the classified image has the same size as the first image, the border of each object can be identified from its pixel set in the subsequent steps, which provides the coordinates for the translation of the manipulator.
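As a rough sketch of this step, the following snippet obtains a per-object pixel set from the 2D image with an off-the-shelf instance segmentation model. Mask R-CNN from torchvision is substituted here only because it is readily available; the patent itself specifies FCIS, and the thresholds and function name are assumptions.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Pre-trained instance segmentation model (stand-in for the FCIS network in the text).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def pixel_sets(rgb_image, score_thresh=0.7, mask_thresh=0.5):
    """Return one boolean pixel mask per detected object in the 2D image (H x W x 3, RGB)."""
    with torch.no_grad():
        out = model([to_tensor(rgb_image)])[0]
    keep = out["scores"] > score_thresh
    # masks: [N, 1, H, W] soft masks -> boolean pixel sets, same size as the input image
    return (out["masks"][keep, 0] > mask_thresh).cpu().numpy()
```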
In this embodiment, after the pixel set corresponding to each object is obtained, the border information of the target 300 is obtained according to the pixel set of the target 300 to be acquired, and the depth information of the target 300 is obtained according to the three-dimensional information of the first image. Since the pixel set shows most of the information of the target 300, its border can usually be computed; for example, the border of the target 300 can be extracted from its pixel set with the RANSAC method. The length of the border, the area it covers in two-dimensional coordinates, and so on are then obtained from the border information. Further, the border information provides the movement of the manipulator in the forward, backward, left, and right directions, usually recorded as the movement along the X and Y axes. The depth of the position of the target 300 is then obtained from the three-dimensional information in the first image. This depth information provides the movement of the manipulator in the vertical direction, usually recorded as the movement along the Z axis.
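A simplified sketch of turning one target's pixel set into border information (for the X/Y movement) and a depth value (for the Z movement) is shown below. The min-area rectangle stands in for the RANSAC border extraction described above, and the use of a median depth over the target pixels is an assumption.

```python
import cv2
import numpy as np

def border_and_depth(mask, depth_image):
    """Rectangle border from the target's pixel set, plus depth from the 3D information."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(np.float32)
    # Min-area rectangle as a simple stand-in for the RANSAC border extraction
    (cx, cy), (w, h), angle = cv2.minAreaRect(pts)
    corners = cv2.boxPoints(((cx, cy), (w, h), angle))
    # Median depth over the target pixels -> Z movement; (cx, cy) -> X/Y movement
    z = float(np.median(depth_image[ys, xs]))
    return corners, (cx, cy), z
```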
In this embodiment, after the border information and the depth information are obtained, the manipulator is controlled to move and acquire the target 300 according to them. When acquiring, the target may be picked up by negative-pressure suction with a suction cup, or grasped by the manipulator. The manipulator decelerates when it descends to a preset height and determines, through a negative-pressure sensor or a torque sensor, whether it has touched the target 300. Once contact with the target 300 is detected, the negative pressure is held at a preset value, or the open claws are closed, so as to acquire the target 300.
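The grasp sequence could be sketched as follows; the `robot` interface, its method names, and the numeric thresholds are all hypothetical and only illustrate the decelerate-then-check-contact logic described above.

```python
def acquire_target(robot, x, y, z, slow_height=0.05, vacuum_setpoint=-40.0):
    """Hypothetical grasp sequence: descend, decelerate, detect contact, pick up."""
    robot.move_to(x, y, z + slow_height)       # fast approach from above
    robot.set_speed("slow")                    # decelerate at the preset height
    robot.move_to(x, y, z)
    # Contact is judged through a negative-pressure sensor or a torque sensor
    if robot.suction_pressure() < vacuum_setpoint or robot.wrist_torque() > robot.contact_torque:
        robot.hold_vacuum(vacuum_setpoint)     # or: robot.close_gripper()
        robot.move_to(x, y, z + slow_height)   # lift the acquired target
```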
In the target acquisition method provided by this embodiment, the first image is obtained through the first visual structure 100, and the target 300 to be acquired is identified from the two-dimensional information in the first image. Then, by combining the two-dimensional and three-dimensional information, the lateral and vertical movements required of the manipulator are obtained. Finally, the manipulator moves to acquire the target 300. Therefore, in this embodiment, the combination of two-dimensional and three-dimensional information makes it possible to photograph multiple objects, identify the target 300 to be acquired among them, and obtain its position for the manipulator. The whole process is handled automatically, without manual intervention, and is therefore automated. Moreover, this embodiment combines two and three dimensions: the target 300 is recognized and its border information obtained from the two-dimensional information, and the depth information is obtained from the three-dimensional information. This process is efficient, accurate, and compact in design.
Embodiment 2
This embodiment provides a target acquisition method. Based on the above embodiment, additional steps are added. Referring to FIG. 3, the details are as follows:
Step S201: obtaining a plurality of training images;
Step S202: obtaining, according to an input instruction, annotations of objects in the training images whose completeness reaches 70%;
Step S203: training the neural network according to the training images and the corresponding annotations.
The other steps of this embodiment are the same as those of the first embodiment; for details, refer to the first embodiment, which are not repeated here.
In this embodiment, a plurality of training images are obtained first. There may be thousands of training images; the more training images there are, the more accurate the classification model that can be trained. The desired output value is obtained by feeding the input value into the classification model of the neural network.
In this embodiment, after the plurality of training images are obtained, annotations of the objects in the training images whose completeness reaches 70% are obtained according to an input instruction. The input instruction is a manual annotation, i.e., the pixels belonging to the object to be recognized are labeled in the training image. In this embodiment, only objects whose completeness reaches 70% are annotated. Completeness means that, for an object only partially exposed in the training image, the exposed area is compared against the object itself; if the exposed area reaches 70% of the object, the completeness reaches 70%. By annotating only objects with a completeness of at least 70%, the training becomes more targeted and the topmost objects that are available for acquisition can be identified, so that when recognizing objects in the first image, the system recognizes only the topmost, graspable objects.
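As an illustration of the 70% rule, a labeling tool could keep an annotation only when the visible pixels cover at least 70% of the object's full extent. How the full extent is recorded is not specified in the text, so the amodal (whole-object) mask used below is an assumption.

```python
import numpy as np

def keep_annotation(visible_mask, full_extent_mask, threshold=0.7):
    """Keep an annotation only if the visible pixels cover >= 70% of the full object extent.

    `full_extent_mask` (the annotator's estimate of the whole object, including
    occluded parts) is an assumed bookkeeping device, not something the patent specifies.
    """
    visible = np.count_nonzero(visible_mask)
    full = np.count_nonzero(full_extent_mask)
    return full > 0 and visible / full >= threshold
```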
In this embodiment, after the annotations of the objects whose completeness reaches 70% in the training images are obtained according to the input instruction, the neural network is trained according to the training images and the corresponding annotations. Through its own program, the neural network continuously tries formulas, varies them, and combines them, so that its computation on the training images approaches the annotations given by the input instruction. When the degree of approximation reaches a preset value, the neural network saves the currently obtained algorithm, i.e., the classification model.
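A minimal training loop over the annotated images might look like the following, continuing with the Mask R-CNN stand-in from the earlier sketch; the data loader, learning rate, and epoch count are assumptions.

```python
import torch

# Illustrative training loop only. `train_loader` is assumed to yield lists of image
# tensors and target dicts (boxes, labels, and the >=70%-complete masks).
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()
for epoch in range(10):
    for images, targets in train_loader:
        losses = model(images, targets)    # torchvision detection models return a loss dict
        loss = sum(losses.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```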
Embodiment 3
This embodiment provides a target acquisition method. Based on the above embodiment, additional steps are added after the target 300 is acquired. Referring to FIG. 4 and FIG. 2 together, the details are as follows:
Step S301: obtaining a second image of the target 300 captured by the second visual structure 200 located below.
Step S302: obtaining the current pose of the target 300 according to the three-dimensional information of the second image.
Step S303: when the current pose does not match a preset pose, controlling the manipulator to adjust the pose so that the target 300 is in the preset pose.
The other steps of this embodiment are the same as those of the second embodiment; for details, refer to the second embodiment, which are not repeated here.
In this embodiment, a second image of the target 300 captured by the second visual structure 200 located below is obtained first. Since only 3D information is needed in the subsequent steps, the second visual structure 200 only needs to obtain 3D information. Because the manipulator moves upward after acquiring the target 300, the second visual structure 200 shoots the target 300 from below. If the first visual structure 100, which shoots from above, were used at this point, the target would be blocked by the manipulator, making it difficult to segment the target 300 and obtain its pose. By shooting from below with the second visual structure 200 arranged underneath, the target 300 is not blocked by the manipulator, so the captured second image makes it much easier to segment the target 300 and obtain its pose.
In this embodiment, after the second image of the target 300 is obtained, the current pose of the target 300 is obtained according to the three-dimensional information of the second image. The three-dimensional information of the second image includes a three-dimensional point cloud, and the planes in the point cloud can be extracted with the RANSAC scheme. The extracted planes are then used to fit the shape and pose of the target 300. A specific fitting scheme is, for example: projecting each extracted plane onto a plane along its own front-view direction to obtain two-dimensional plane data; fitting a rectangular region from the two-dimensional plane data; taking the rectangular region with the highest point density and an area exceeding a preset threshold as the reference plane, and taking the rectangular regions whose normals are perpendicular to the normal of the reference plane as related planes; and finally fitting a box region from the reference plane and the related planes. The pose information of the target 300 is thereby obtained.
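A sketch of the RANSAC plane-extraction step on the second image's point cloud is given below using Open3D. The distance threshold, iteration count, and plane count are assumed values; the rectangle fitting and the reference/related-plane selection described above would then be applied to the returned inlier clouds.

```python
import numpy as np
import open3d as o3d

def extract_planes(points_xyz, max_planes=3, dist=0.005):
    """Iteratively extract up to `max_planes` planes from an (N, 3) point cloud with RANSAC."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)
    planes = []
    for _ in range(max_planes):
        if len(pcd.points) < 50:
            break
        model, inliers = pcd.segment_plane(distance_threshold=dist,
                                           ransac_n=3, num_iterations=1000)
        planes.append((model, pcd.select_by_index(inliers)))   # (a, b, c, d), inlier cloud
        pcd = pcd.select_by_index(inliers, invert=True)        # remove inliers and repeat
    return planes
```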
In this embodiment, after the current pose of the target 300 is obtained, whether the current pose matches a preset pose is determined; if not, the manipulator is controlled to adjust the pose so that the target 300 is in the preset pose. After the manipulator picks up a box, the box may be tilted in various ways, and in order to place it steadily at a given position, its pose needs to be adjusted. Therefore, when the current pose of the target 300 does not match the preset pose, the direction and angle of the required rotation are obtained through calculation, and the corresponding adjustment is made so that the target 300 is in the required preset pose.
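The pose-matching check and the rotation correction could be sketched as follows, assuming both poses are available as 3x3 rotation matrices; the angular tolerance is an assumed value.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def pose_correction(current_rot, preset_rot, tol_deg=2.0):
    """Rotation the manipulator would apply so the target matches the preset pose."""
    delta = R.from_matrix(preset_rot) * R.from_matrix(current_rot).inv()
    angle = np.degrees(delta.magnitude())
    if angle <= tol_deg:
        return None            # current pose already matches the preset pose
    return delta.as_rotvec()   # axis-angle correction to command to the wrist
```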
Therefore, the target acquisition method provided by this embodiment obtains a second image, calculates the current pose of the target 300 from the three-dimensional information of the second image, and adjusts the target 300 to the preset pose, so that the target 300 can be placed at the preset position safely and stably.
Embodiment 4
This embodiment provides a target acquisition device. The target acquisition device includes a processor, a memory, and a target acquisition program stored on the memory and executable on the processor. When executed by the processor, the target acquisition program implements the following steps:
obtaining a first image captured by the first visual structure 100 located above;
inputting the first image into a pre-trained neural network for computation, segmenting each object in the two-dimensional information of the first image, and obtaining a pixel set corresponding to each object;
obtaining border information of the target 300 according to the pixel set of the target 300 to be acquired, and obtaining depth information of the target 300 according to the three-dimensional information of the first image;
controlling a manipulator to move and acquire the target 300 according to the border information and the depth information.
In the target acquisition device provided by this embodiment, the first image is obtained through the first visual structure 100, and the target 300 to be acquired is identified from the two-dimensional information in the first image. Then, by combining the two-dimensional and three-dimensional information, the lateral and vertical movements required of the manipulator are obtained. Finally, the manipulator moves to acquire the target 300. Therefore, in this embodiment, the combination of two-dimensional and three-dimensional information makes it possible to photograph multiple objects, identify the target 300 to be acquired among them, and obtain its position for the manipulator. The whole process is handled automatically, without manual intervention, and is therefore automated. Moreover, this embodiment combines two and three dimensions: the target 300 is recognized and its border information obtained from the two-dimensional information, and the depth information is obtained from the three-dimensional information. This process is efficient, accurate, and compact in design.
The target acquisition device provided by this embodiment may also be adjusted with reference to the embodiments of the above target acquisition method. For the adjusted technical features and the beneficial effects they bring, refer to the above embodiments, which are not repeated here.
Embodiment 5
This embodiment provides a computer-readable storage medium.
A target acquisition program is stored on the computer-readable storage medium, and when executed by a processor, the target acquisition program implements the following steps:
obtaining a first image captured by the first visual structure 100 located above;
inputting the first image into a pre-trained neural network for computation, segmenting each object in the two-dimensional information of the first image, and obtaining a pixel set corresponding to each object;
obtaining border information of the target 300 according to the pixel set of the target 300 to be acquired, and obtaining depth information of the target 300 according to the three-dimensional information of the first image;
controlling a manipulator to move and acquire the target 300 according to the border information and the depth information.
With the computer-readable storage medium provided by this embodiment, the first image is obtained through the first visual structure 100, and the target 300 to be acquired is identified from the two-dimensional information in the first image. Then, by combining the two-dimensional and three-dimensional information, the lateral and vertical movements required of the manipulator are obtained. Finally, the manipulator moves to acquire the target 300. Therefore, in this embodiment, the combination of two-dimensional and three-dimensional information makes it possible to photograph multiple objects, identify the target 300 to be acquired among them, and obtain its position for the manipulator. The whole process is handled automatically, without manual intervention, and is therefore automated. Moreover, this embodiment combines two and three dimensions: the target 300 is recognized and its border information obtained from the two-dimensional information, and the depth information is obtained from the three-dimensional information. This process is efficient, accurate, and compact in design.
The computer-readable storage medium provided by this embodiment may also be adjusted with reference to the embodiments of the above target acquisition method. For the adjusted technical features and the beneficial effects they bring, refer to the above embodiments, which are not repeated here.
It should be noted that, in this document, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of another identical element in the process, method, article, or device that includes that element.
The above serial numbers of the embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above specific implementations, which are only illustrative rather than restrictive. Under the inspiration of the present invention, those of ordinary skill in the art can devise many other forms without departing from the spirit of the present invention and the scope protected by the claims, all of which fall within the protection of the present invention.

Claims (10)

  1. A target acquisition method for a robot to sort overlapping objects, characterized in that the target acquisition method comprises:
    obtaining a first image captured by a first visual structure;
    inputting the first image into a pre-trained neural network for computation, segmenting each object in the two-dimensional information of the first image, and obtaining a pixel set corresponding to each object;
    obtaining border information of a target to be acquired according to the pixel set of the target, and obtaining depth information of the target according to the three-dimensional information of the first image;
    controlling a manipulator to move and acquire the target according to the border information and the depth information.
  2. The target acquisition method according to claim 1, characterized in that the target acquisition method further comprises:
    obtaining a plurality of training images;
    obtaining, according to an input instruction, annotations of objects in the training images whose completeness reaches 70%;
    training the neural network according to the training images and the corresponding annotations.
  3. The target acquisition method according to claim 1, characterized in that segmenting each object in the two-dimensional information of the first image comprises:
    segmenting each object in the image through Fully Convolutional Instance-aware Semantic Segmentation.
  4. The target acquisition method according to claim 1, characterized in that obtaining the border information of the target according to the pixel set of the target comprises:
    extracting the border of the target according to the pixel set of the target and the RANSAC method.
  5. The target acquisition method according to any one of claims 1 to 4, characterized in that the target acquisition method further comprises:
    obtaining a second image of the target captured by a second visual structure;
    obtaining a current pose of the target according to the three-dimensional information of the second image;
    when the current pose does not match a preset pose, controlling the manipulator to adjust the pose so that the target is in the preset pose.
  6. A target acquisition device, characterized in that the target acquisition device comprises a processor, a memory, and a target acquisition program stored on the memory and executable on the processor, wherein the target acquisition program, when executed by the processor, implements the following steps:
    obtaining a first image captured by a first visual structure located above;
    inputting the first image into a pre-trained neural network for computation, segmenting each object in the two-dimensional information of the first image, and obtaining a pixel set corresponding to each object;
    obtaining border information of a target to be acquired according to the pixel set of the target, and obtaining depth information of the target according to the three-dimensional information of the first image;
    controlling a manipulator to move and acquire the target according to the border information and the depth information.
  7. The target acquisition device according to claim 6, characterized in that the target acquisition program, when executed by the processor, further implements the following steps:
    obtaining a plurality of training images;
    obtaining, according to an input instruction, annotations of objects in the training images whose completeness reaches 70%;
    training the neural network according to the training images and the corresponding annotations.
  8. The target acquisition device according to claim 6, characterized in that segmenting each object in the two-dimensional information of the first image comprises:
    segmenting each object in the image through Fully Convolutional Instance-aware Semantic Segmentation.
  9. The target acquisition device according to any one of claims 6 to 8, characterized in that the target acquisition program, when executed by the processor, further performs the following steps:
    obtaining a second image of the target captured by a second visual structure located below;
    obtaining a current pose of the target according to the three-dimensional information of the second image;
    when the current pose does not match a preset pose, controlling the manipulator to adjust the acquisition action so that the target is in the preset pose.
  10. A computer-readable storage medium, characterized in that a target acquisition program is stored on the computer-readable storage medium, and when executed by a processor, the target acquisition program implements the steps of the target acquisition method according to any one of claims 1 to 5.
PCT/CN2019/099398 (priority date 2018-08-17, filed 2019-08-06): Target acquisition method and device, and computer readable storage medium, published as WO2020034872A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810942852.0 2018-08-17
CN201810942852.0A CN109086736A (en) 2018-08-17 2018-08-17 Target Acquisition method, equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2020034872A1 true WO2020034872A1 (en) 2020-02-20

Family ID: 64793807

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/099398 WO2020034872A1 (en) 2018-08-17 2019-08-06 Target acquisition method and device, and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN109086736A (en)
WO (1) WO2020034872A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112883881A (en) * 2021-02-25 2021-06-01 中国农业大学 Disordered sorting method and device for strip-shaped agricultural products
CN113325950A (en) * 2021-05-27 2021-08-31 百度在线网络技术(北京)有限公司 Function control method, device, equipment and storage medium
CN113920142A (en) * 2021-11-11 2022-01-11 江苏昱博自动化设备有限公司 Sorting manipulator multi-object sorting method based on deep learning
CN115359112A (en) * 2022-10-24 2022-11-18 爱夫迪(沈阳)自动化科技有限公司 Stacking control method of high-level material warehouse robot

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109086736A (en) * 2018-08-17 2018-12-25 深圳蓝胖子机器人有限公司 Target Acquisition method, equipment and computer readable storage medium
CN109800874A (en) * 2018-12-29 2019-05-24 复旦大学 A kind of training method, equipment and the storage medium of machine vision neural network
CN109895095B (en) * 2019-02-11 2022-07-15 赋之科技(深圳)有限公司 Training sample obtaining method and device and robot
CN111639510B (en) * 2019-03-01 2024-03-29 纳恩博(北京)科技有限公司 Information processing method, device and storage medium
CN109911645B (en) * 2019-03-22 2020-10-23 深圳蓝胖子机器人有限公司 Ladle-to-ladle control method and device and robot
CN110395515B (en) * 2019-07-29 2021-06-11 深圳蓝胖子机器智能有限公司 Cargo identification and grabbing method and equipment and storage medium
CN110717404B (en) * 2019-09-17 2021-07-23 禾多科技(北京)有限公司 Obstacle sensing method for monocular camera
CN111015662B (en) * 2019-12-25 2021-09-07 深圳蓝胖子机器智能有限公司 Method, system and equipment for dynamically grabbing object and method, system and equipment for dynamically grabbing garbage
CN111003380A (en) * 2019-12-25 2020-04-14 深圳蓝胖子机器人有限公司 Method, system and equipment for intelligently recycling garbage
CN111168686B (en) * 2020-02-25 2021-10-29 深圳市商汤科技有限公司 Object grabbing method, device, equipment and storage medium
CN111521142B (en) * 2020-04-10 2022-02-01 金瓜子科技发展(北京)有限公司 Paint surface thickness measuring method and device and paint film instrument
CN112170781B (en) * 2020-09-25 2022-02-22 泰州鑫宇精工股份有限公司 Method and device for improving environmental protection performance of sand spraying machine
CN112605986B (en) * 2020-11-09 2022-04-19 深圳先进技术研究院 Method, device and equipment for automatically picking up goods and computer readable storage medium
CN114029250B (en) * 2021-10-27 2022-11-18 因格(苏州)智能技术有限公司 Article sorting method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140147240A1 (en) * 2011-06-29 2014-05-29 Mitsubishi Electric Corporation Component supply apparatus
CN105499155A (en) * 2016-02-01 2016-04-20 先驱智能机械(深圳)有限公司 Grasping and sorting method and sorting disc for objects
CN105772407A (en) * 2016-01-26 2016-07-20 耿春茂 Waste classification robot based on image recognition technology
CN107009358A (en) * 2017-04-13 2017-08-04 武汉库柏特科技有限公司 A kind of unordered grabbing device of robot based on one camera and method
CN108154098A (en) * 2017-12-20 2018-06-12 歌尔股份有限公司 A kind of target identification method of robot, device and robot
CN109086736A (en) * 2018-08-17 2018-12-25 深圳蓝胖子机器人有限公司 Target Acquisition method, equipment and computer readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103963058B (en) * 2014-04-30 2016-01-06 重庆环视高科技有限公司 Mechanical arm based on multi-directional vision location captures control system and method
CN107694962A (en) * 2017-11-07 2018-02-16 陕西科技大学 A kind of fruit automatic sorting method based on machine vision and BP neural network
CN108171748B (en) * 2018-01-23 2021-12-07 哈工大机器人(合肥)国际创新研究院 Visual identification and positioning method for intelligent robot grabbing application
CN108399639B (en) * 2018-02-12 2021-01-26 杭州蓝芯科技有限公司 Rapid automatic grabbing and placing method based on deep learning

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140147240A1 (en) * 2011-06-29 2014-05-29 Mitsubishi Electric Corporation Component supply apparatus
CN105772407A (en) * 2016-01-26 2016-07-20 耿春茂 Waste classification robot based on image recognition technology
CN105499155A (en) * 2016-02-01 2016-04-20 先驱智能机械(深圳)有限公司 Grasping and sorting method and sorting disc for objects
CN107009358A (en) * 2017-04-13 2017-08-04 武汉库柏特科技有限公司 A kind of unordered grabbing device of robot based on one camera and method
CN108154098A (en) * 2017-12-20 2018-06-12 歌尔股份有限公司 A kind of target identification method of robot, device and robot
CN109086736A (en) * 2018-08-17 2018-12-25 深圳蓝胖子机器人有限公司 Target Acquisition method, equipment and computer readable storage medium

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112883881A (en) * 2021-02-25 2021-06-01 中国农业大学 Disordered sorting method and device for strip-shaped agricultural products
CN112883881B (en) * 2021-02-25 2023-10-31 中国农业大学 Unordered sorting method and unordered sorting device for strip-shaped agricultural products
CN113325950A (en) * 2021-05-27 2021-08-31 百度在线网络技术(北京)有限公司 Function control method, device, equipment and storage medium
CN113325950B (en) * 2021-05-27 2023-08-25 百度在线网络技术(北京)有限公司 Function control method, device, equipment and storage medium
CN113920142A (en) * 2021-11-11 2022-01-11 江苏昱博自动化设备有限公司 Sorting manipulator multi-object sorting method based on deep learning
CN113920142B (en) * 2021-11-11 2023-09-26 江苏昱博自动化设备有限公司 Sorting manipulator multi-object sorting method based on deep learning
CN115359112A (en) * 2022-10-24 2022-11-18 爱夫迪(沈阳)自动化科技有限公司 Stacking control method of high-level material warehouse robot
CN115359112B (en) * 2022-10-24 2023-01-03 爱夫迪(沈阳)自动化科技有限公司 Stacking control method of high-level material warehouse robot

Also Published As

Publication number Publication date
CN109086736A (en) 2018-12-25

Similar Documents

Publication Publication Date Title
WO2020034872A1 (en) Target acquisition method and device, and computer readable storage medium
CN107767423B (en) mechanical arm target positioning and grabbing method based on binocular vision
CN109483554B (en) Robot dynamic grabbing method and system based on global and local visual semantics
US10124489B2 (en) Locating, separating, and picking boxes with a sensor-guided robot
US9802317B1 (en) Methods and systems for remote perception assistance to facilitate robotic object manipulation
CN109986560B (en) Mechanical arm self-adaptive grabbing method for multiple target types
CN109213202B (en) Goods placement method, device, equipment and storage medium based on optical servo
TW201923706A (en) Method and system for calibrating vision system in environment
WO2022042304A1 (en) Method and apparatus for identifying scene contour, and computer-readable medium and electronic device
JP7377627B2 (en) Object detection device, object grasping system, object detection method, and object detection program
US10957067B2 (en) Control apparatus, object detection system, object detection method and program
WO2022156593A1 (en) Target object detection method and apparatus, and electronic device, storage medium and program
WO2022188410A1 (en) Automatic device assembly control method and apparatus based on manipulator
JP7171294B2 (en) Information processing device, information processing method and program
CN113052907B (en) Positioning method of mobile robot in dynamic environment
CN111275758B (en) Hybrid 3D visual positioning method, device, computer equipment and storage medium
CN110175523B (en) Self-moving robot animal identification and avoidance method and storage medium thereof
WO2023036212A1 (en) Shelf locating method, shelf docking method and apparatus, device, and medium
CN108733076B (en) Method and device for grabbing target object by unmanned aerial vehicle and electronic equipment
JP2018146347A (en) Image processing device, image processing method, and computer program
JP6041710B2 (en) Image recognition method
CN117124302B (en) Part sorting method and device, electronic equipment and storage medium
TW202035255A (en) Object transporting method and system capable of transporting an object according to image recognition
TWI783779B (en) Training data generation device, training data generation method using the same and robot arm system using the same
TWI763453B (en) Control method and system for picking equipment and automatic picking system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 19850043; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 19850043; Country of ref document: EP; Kind code of ref document: A1