WO2020034872A1 - Target acquisition method and device, and computer-readable storage medium - Google Patents

Target acquisition method and device, and computer-readable storage medium

Info

Publication number
WO2020034872A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
image
obtaining
information
target acquisition
Prior art date
Application number
PCT/CN2019/099398
Other languages
English (en)
Chinese (zh)
Inventor
吕仕杰
Original Assignee
深圳蓝胖子机器人有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳蓝胖子机器人有限公司 filed Critical 深圳蓝胖子机器人有限公司
Publication of WO2020034872A1 publication Critical patent/WO2020034872A1/fr

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition

Definitions

  • the invention relates to the field of robotic sorting, and in particular, to a method, a device, and a computer-readable storage medium for acquiring targets.
  • the main object of the present invention is to provide a target acquisition method, a device, and a computer-readable storage medium, which aim to automatically sort stacked objects and improve the success rate of robot sorting.
  • the present invention provides a target acquisition method for a robot to sort overlapping objects.
  • the target acquisition method includes: obtaining a first image captured by a first visual structure; inputting the first image to a pre-trained neural network, which computes and segments each object in the two-dimensional information of the first image; obtaining the frame information of the target according to the pixel set of the target to be acquired, and obtaining the depth information of the target according to the three-dimensional information of the first image; and controlling the robot to move and acquire the target according to the frame information and the depth information.
  • the target acquisition method further includes:
  • a neural network is trained according to the training image and corresponding labels.
  • the calculating and segmenting of each object in the two-dimensional information of the first image includes:
  • the obtaining of the frame information of the target according to the pixel set of the target includes:
  • the frame of the target is extracted according to the pixel set of the target and the RANSAC method.
  • the target acquisition method further includes:
  • the robot is controlled to adjust the posture so that the target is in the preset posture.
  • the present invention also provides a target acquisition device.
  • the target acquisition device includes a processor, a memory, and a target acquisition program stored on the memory and operable on the processor.
  • the processor executes the following steps:
  • the following steps are further implemented:
  • a neural network is trained according to the training image and corresponding labels.
  • the calculating and segmenting of each object in the two-dimensional information of the first image includes:
  • the following steps are further performed:
  • the present invention also provides a computer-readable storage medium, wherein the computer-readable storage medium stores a target acquisition program, and when the target acquisition program is executed by a processor, the steps of the target acquisition method described above are implemented.
  • the target acquisition method provided by the present invention obtains a first image through a first visual structure and identifies the target to be acquired from the two-dimensional information in the first image. By combining the two-dimensional and three-dimensional information, the required horizontal and vertical movement amounts of the manipulator are obtained, and the manipulator then moves to acquire the target. The method thus captures multiple objects, identifies the target to be acquired among them, and obtains its position for the robotic arm, entirely automatically and without manual intervention. Using the two-dimensional information for target recognition and frame extraction, and the three-dimensional information for depth, makes the process both efficient and accurate.
  • FIG. 1 is a flowchart of a first embodiment of a method for obtaining a target according to the present invention
  • FIG. 2 is a schematic diagram of an application example of the target acquisition method shown in FIG. 1;
  • FIG. 3 is a partial flowchart of a second embodiment of a method for acquiring a target according to the present invention
  • FIG. 4 is a partial flowchart of a third embodiment of a method for obtaining a target according to the present invention.
  • This embodiment proposes a target acquisition method for a robot to sort overlapping objects.
  • the target acquisition method includes:
  • Step S101: a first image captured by the first visual structure 100 is obtained.
  • Step S102: the first image is input to a pre-trained neural network, which computes and segments each object in the two-dimensional information of the first image to obtain a pixel set corresponding to each object.
  • Step S103: the frame information of the target 300 is obtained according to the pixel set of the target 300 to be acquired, and the depth information of the target 300 is obtained according to the three-dimensional information of the first image.
  • Step S104: the manipulator is controlled to move and acquire the target 300 according to the frame information and the depth information.
  • first, a first image captured by the first visual structure 100, which is located above the objects, is obtained.
  • the first visual structure 100 can obtain RGB images and 3D images.
  • two separate cameras are installed to obtain RGB images and 3D images respectively; or a binocular camera can be used to obtain both RGB images and 3D images through calculation. Therefore, the first image obtained by the system includes both RGB information and 3D information.
  • the robot picks up the target from above and then moves upward with it. Therefore, in this embodiment, the first visual structure 100 is disposed above the objects and shoots downward.
  • the first image is input to a pre-trained neural network, and each object in the two-dimensional information of the first image is segmented to obtain the set of pixels corresponding to each object.
  • the system obtains two-dimensional information in the first image, for example: directly obtains a two-dimensional image; or removes depth information from the three-dimensional image to obtain a two-dimensional image.
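  • As a minimal sketch of the second option, the depth channel can be stripped from an RGB-D frame to leave only the two-dimensional information. The H x W x 4 layout and the function name `extract_2d_info` are assumptions for illustration, not the patent's implementation:

```python
import numpy as np

def extract_2d_info(rgbd):
    """Split an H x W x 4 RGB-D image into its 2-D (RGB) part and its depth part."""
    assert rgbd.ndim == 3 and rgbd.shape[2] == 4, "expected H x W x 4 RGB-D input"
    rgb = rgbd[:, :, :3]    # two-dimensional (colour) information for the network
    depth = rgbd[:, :, 3]   # per-pixel depth, kept for the later depth step
    return rgb, depth

# Hypothetical 4 x 4 RGB-D frame: three colour channels plus one depth channel.
rgbd = np.dstack([np.ones((4, 4, 3)), np.full((4, 4), 0.5)])
rgb, depth = extract_2d_info(rgbd)
```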
  • the two-dimensional image is then used as the input to the neural network.
  • the pre-trained neural network computes an output value from a given input value according to parameters learned in advance.
  • the neural network can perform convolution, classification, and up-sampling operations through the Fully Convolutional Instance-aware Semantic Segmentation (FCIS) scheme.
  • convolution processing is first applied to the two-dimensional information.
  • pixel classification can then be implemented efficiently and accurately.
  • the down-sampled feature map is then up-sampled to obtain a classified image, thereby segmenting each object in the two-dimensional information of the first image.
  • the classified image is the same size as the first image, which facilitates identifying the frame from the pixel set in the subsequent steps and provides coordinates for the translation of the manipulator.
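  • The step from a classified image to per-object pixel sets can be sketched as follows; the label-map representation (0 for background, positive integers for object ids) is an assumption for illustration:

```python
import numpy as np

def pixel_sets(label_map):
    """Group the pixel coordinates of a classified image by object id (0 = background)."""
    sets = {}
    for obj_id in np.unique(label_map):
        if obj_id == 0:
            continue  # skip background pixels
        ys, xs = np.nonzero(label_map == obj_id)
        sets[int(obj_id)] = list(zip(ys.tolist(), xs.tolist()))
    return sets

# Toy classified image, same size as the input: two objects (ids 1 and 2).
label_map = np.array([[0, 1, 1],
                      [0, 1, 0],
                      [2, 2, 0]])
sets = pixel_sets(label_map)
```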
  • the frame information of the target 300 is obtained according to the pixel set of the target 300 to be acquired, and the depth information of the target 300 is obtained according to the three-dimensional information of the first image.
  • the frame of the target 300 can then be calculated from its pixel set.
  • the frame of the target 300 can be extracted from the pixel set of the target 300 using the RANSAC method; the frame's side lengths, the area it covers in two-dimensional coordinates, and so on can then be obtained from the frame information.
  • the frame information can provide movement information of the manipulator in the forward, backward, leftward, and rightward directions, and is generally recorded as the amount of movement in the X-axis and Y-axis directions. Then, the depth at which the target 300 is located is obtained through the three-dimensional information in the first image. This depth information can provide the movement information of the manipulator in the up-down direction, which is usually recorded as the movement amount in the Z-axis direction.
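  • The combination described above, frame information for the X- and Y-axis movement amounts and depth information for the Z-axis amount, can be sketched as below. The axis-aligned box and median depth are simplifications (the patent extracts the frame with RANSAC), and `movement_amounts` is a hypothetical helper name:

```python
import numpy as np

def movement_amounts(pixel_set, depth_map):
    """Derive X/Y movement from the target's frame and Z movement from its depth."""
    ys = [p[0] for p in pixel_set]
    xs = [p[1] for p in pixel_set]
    # Frame information: a simple axis-aligned box around the pixel set.
    x_move = (min(xs) + max(xs)) / 2.0  # X-axis movement amount (box centre)
    y_move = (min(ys) + max(ys)) / 2.0  # Y-axis movement amount (box centre)
    # Depth information: median depth over the target's own pixels.
    z_move = float(np.median([depth_map[y, x] for y, x in pixel_set]))
    return x_move, y_move, z_move

depth_map = np.full((4, 4), 0.8)                    # hypothetical flat depth map
x, y, z = movement_amounts([(1, 1), (1, 2), (2, 1), (2, 2)], depth_map)
```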
  • after the frame information and the depth information are obtained, the robot is controlled to move and acquire the target 300 according to them.
  • the target can be picked up by suction, or grasped by the robot.
  • the manipulator decelerates when it descends to a preset height, and determines whether it has touched the target 300 through a negative-pressure sensor or a torque sensor. After contact with the target 300 is determined, the negative pressure can be maintained at a preset value, or the open claws can be closed, to acquire the target 300.
  • the target acquisition method provided in this embodiment obtains a first image through the first visual structure 100 and identifies the target 300 to be acquired from the two-dimensional information in the first image. By combining the two-dimensional and three-dimensional information, the required horizontal and vertical movement amounts of the manipulator are obtained, and the manipulator then moves to acquire the target 300. The method thus captures multiple objects, identifies the target 300 among them, and obtains its position for the robotic arm, entirely automatically and without manual intervention. Using the two-dimensional information for target recognition and frame extraction, and the three-dimensional information for depth, makes the process both efficient and accurate.
  • This embodiment provides a target acquisition method. It is based on the foregoing embodiment, with additional steps added. Please refer to FIG. 3, as follows:
  • Step S201: a plurality of training images are obtained.
  • Step S202: labels of objects that are at least 70% complete in the training images are obtained according to input instructions.
  • Step S203: a neural network is trained according to the training images and the corresponding labels.
  • a plurality of training images are obtained first.
  • the desired output value can be obtained by inputting the input value into the classification model of the neural network.
  • labels of objects whose completeness reaches 70% in the training image are obtained.
  • the input instruction is a manual annotation, that is, the pixels belonging to the object to be recognized are labeled in the training image.
  • objects with a completeness of at least 70% are selected for labeling.
  • completeness refers to how much of a partially exposed object is visible in the training image: if the exposed area reaches 70% of the object itself, the completeness is 70%.
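  • The 70% completeness criterion can be expressed directly as a ratio of visible pixels to the object's full pixel count; the helper names below are hypothetical:

```python
def completeness(visible_pixels, full_object_pixels):
    """Fraction of the object's own area that is exposed in the image."""
    return len(visible_pixels) / len(full_object_pixels)

def should_label(visible_pixels, full_object_pixels, threshold=0.7):
    """Label the object only when at least 70% of it is exposed."""
    return completeness(visible_pixels, full_object_pixels) >= threshold

# Hypothetical 100-pixel object of which 75 pixels are exposed (75% complete).
full = [(y, x) for y in range(10) for x in range(10)]
visible = full[:75]
```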
  • a neural network is trained according to the training images and the corresponding labels.
  • the neural network iteratively adjusts its internal parameters so that its output on the training images continuously approaches the labels given by the input instructions.
  • the neural network then saves the resulting acquisition algorithm, that is, the classification model.
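  • The training process described above, repeatedly adjusting parameters until the outputs approach the labels, can be illustrated with a tiny logistic-regression stand-in (synthetic 2-D points instead of images; the real system trains an FCIS-style network):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for (training image, label) pairs: 2-D points with 0/1 labels.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)
b = 0.0

def loss(w, b):
    """Cross-entropy between model predictions and the manual labels."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

initial = loss(w, b)
for _ in range(200):                  # repeatedly adjust the parameters
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad_w = X.T @ (p - y) / len(y)   # gradient of the loss w.r.t. the weights
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w                 # move the parameters toward the labels
    b -= 0.5 * grad_b
final = loss(w, b)                    # (w, b) is the saved "classification model"
```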
  • This embodiment provides a method for obtaining a target. This embodiment is based on the above embodiment, and additional steps are added after obtaining the target 300. Please refer to FIG. 4 and FIG. 2 in detail, as follows:
  • Step S301: a second image of the target 300 captured by the second visual structure 200 located below is obtained.
  • Step S302: the current posture of the target 300 is obtained according to the three-dimensional information of the second image.
  • Step S303: when the current posture does not match the preset posture, the manipulator is controlled to adjust the posture so that the target 300 is in the preset posture.
  • a second image of the target 300 captured by the second visual structure 200 located below is first obtained.
  • since only 3D information is required in the subsequent steps, the second visual structure 200 only needs to capture 3D information. Because the manipulator moves upward after acquiring the target 300, the second visual structure 200 shoots the target 300 from below. The first visual structure 100 shoots from above, where the target 300 would be blocked by the manipulator, making it difficult to segment the target 300 and obtain its attitude. By shooting from below with the second visual structure 200, the target 300 is not blocked by the manipulator, so the target 300 can easily be segmented from the second image and its attitude obtained.
  • the current posture of the target 300 is obtained according to the three-dimensional information of the second image.
  • the three-dimensional information of the second image includes a three-dimensional point cloud, and a plane in the three-dimensional point cloud can be extracted through the RANSAC scheme.
  • the extracted planes are then used to fit the shape and attitude of the target 300.
  • a specific fitting scheme is, for example: project each extracted plane onto its own front-view direction to obtain two-dimensional plane data, and then fit a rectangular region to the two-dimensional plane data.
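  • Plane extraction with RANSAC, as used here, can be sketched in a few lines: repeatedly sample three points, form the plane they define, and keep the plane with the most inliers. This is a generic sketch under simplifying assumptions, not the patent's implementation:

```python
import numpy as np

def ransac_plane(points, iters=100, tol=0.01, rng=None):
    """Fit a plane to a 3-D point cloud by randomly sampling point triples."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(b - a, c - a)
        if np.linalg.norm(normal) < 1e-9:
            continue                          # degenerate (collinear) sample
        normal = normal / np.linalg.norm(normal)
        dist = np.abs((points - a) @ normal)  # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Synthetic cloud: 80 points on the z = 0 plane plus 20 outliers floating above it.
rng = np.random.default_rng(1)
plane_pts = np.column_stack([rng.uniform(-1, 1, (80, 2)), np.zeros(80)])
outliers = rng.uniform(-1, 1, (20, 3)) + np.array([0.0, 0.0, 2.0])
points = np.vstack([plane_pts, outliers])
inliers = ransac_plane(points)
```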
  • attitude information of the target 300 is obtained.
  • the manipulator is controlled to adjust the posture so that the target 300 is in the preset posture.
  • the box can be in various postures, and to place it smoothly in a given position, its attitude needs to be adjusted. Therefore, when the current posture of the target 300 does not match the preset posture, the direction and angle of the required rotation are computed, and the corresponding adjustment is performed so that the target 300 is in the required preset posture.
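  • The rotation computed when the postures differ can be illustrated for the simplest case, a single yaw angle; `yaw_adjustment` is a hypothetical helper that returns the smallest signed rotation taking the current posture to the preset one:

```python
def yaw_adjustment(current_deg, preset_deg):
    """Smallest signed rotation (in degrees) from the current yaw to the preset yaw."""
    delta = (preset_deg - current_deg) % 360.0
    if delta > 180.0:
        delta -= 360.0  # rotate the shorter way round
    return delta

# Example: a box at 350 degrees with a preset of 10 degrees should rotate +20,
# not -340; the helper picks the direction with the smaller angle.
correction = yaw_adjustment(350.0, 10.0)
```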
  • the target acquisition method provided in this embodiment obtains a second image, calculates the current posture of the target 300 from the three-dimensional information of the second image, and adjusts the target 300 to a preset posture.
  • the effect of safely and stably placing the target 300 in a preset position can be achieved.
  • the target acquisition device includes a processor, a memory, and a target acquisition program stored on the memory and executable on the processor.
  • when the target acquisition program is executed by the processor, the following steps are implemented:
  • the target acquisition device provided in this embodiment obtains a first image through the first visual structure 100 and identifies the target 300 to be acquired from the two-dimensional information in the first image. By combining the two-dimensional and three-dimensional information, the required horizontal and vertical movement amounts of the manipulator are obtained, and the manipulator then moves to acquire the target 300. The device thus captures multiple objects, identifies the target 300 among them, and obtains its position for the robotic arm, entirely automatically and without manual intervention. Using the two-dimensional information for target recognition and frame extraction, and the three-dimensional information for depth, makes the process both efficient and accurate.
  • the target acquisition device provided in this embodiment may also be adjusted with reference to the foregoing embodiments of the target acquisition method.
  • for the technical characteristics of such adjustments and the beneficial effects they bring, reference may be made to the foregoing embodiments; details are not repeated here.
  • This embodiment provides a computer-readable storage medium.
  • a target acquisition program is stored on the computer-readable storage medium, and when the target acquisition program is executed by a processor, the following steps are implemented:
  • the computer-readable storage medium provided in this embodiment obtains a first image through the first visual structure 100 and identifies the target 300 to be acquired from the two-dimensional information in the first image. By combining the two-dimensional and three-dimensional information, the required horizontal and vertical movement amounts of the manipulator are obtained, and the manipulator then moves to acquire the target 300. The medium thus enables capturing multiple objects, identifying the target 300 among them, and obtaining its position for the robotic arm, entirely automatically and without manual intervention. Using the two-dimensional information for target recognition and frame extraction, and the three-dimensional information for depth, makes the process both efficient and accurate.
  • the computer-readable storage medium provided in this embodiment may also be adjusted with reference to the foregoing embodiments of the target acquisition method.
  • for the technical characteristics of such adjustments and the beneficial effects they bring, reference may be made to the foregoing embodiments; details are not repeated here.
  • the methods in the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • Based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present invention.

Abstract

The invention relates to a target acquisition method and device, and a computer-readable storage medium. The target acquisition method comprises the steps of: acquiring a first image captured by a first visual structure; inputting the first image into a pre-trained neural network for calculation, computing and segmenting each object in the two-dimensional information of the first image, and obtaining a pixel set corresponding to each object; acquiring frame information of a target to be acquired according to the pixel set of the target, and acquiring depth information of the target according to three-dimensional information of the first image; and controlling a manipulator to move according to the frame information and the depth information, and acquiring the target. The invention automatically sorts stacked objects and improves the success rate of robot sorting.
PCT/CN2019/099398 2018-08-17 2019-08-06 Target acquisition method and device, and computer-readable storage medium WO2020034872A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810942852.0A CN109086736A (zh) 2018-08-17 2018-08-17 目标获取方法、设备和计算机可读存储介质
CN201810942852.0 2018-08-17

Publications (1)

Publication Number Publication Date
WO2020034872A1 (fr) 2020-02-20

Family

ID=64793807

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/099398 WO2020034872A1 (fr) 2018-08-17 2019-08-06 Procédé et dispositif d'acquisition de cibles, et support de stockage lisible par ordinateur

Country Status (2)

Country Link
CN (1) CN109086736A (fr)
WO (1) WO2020034872A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112883881A (zh) * 2021-02-25 2021-06-01 中国农业大学 一种条状农产品无序分拣方法及装置
CN113325950A (zh) * 2021-05-27 2021-08-31 百度在线网络技术(北京)有限公司 功能控制方法、装置、设备以及存储介质
CN113920142A (zh) * 2021-11-11 2022-01-11 江苏昱博自动化设备有限公司 一种基于深度学习的分拣机械手多物体分拣方法
CN115359112A (zh) * 2022-10-24 2022-11-18 爱夫迪(沈阳)自动化科技有限公司 一种高位料库机器人的码垛控制方法

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109086736A (zh) * 2018-08-17 2018-12-25 深圳蓝胖子机器人有限公司 目标获取方法、设备和计算机可读存储介质
CN109800874A (zh) * 2018-12-29 2019-05-24 复旦大学 一种机器视觉神经网络的训练方法、设备及存储介质
CN109895095B (zh) * 2019-02-11 2022-07-15 赋之科技(深圳)有限公司 一种训练样本的获取方法、装置和机器人
CN111639510B (zh) * 2019-03-01 2024-03-29 纳恩博(北京)科技有限公司 一种信息处理方法、装置及存储介质
CN109911645B (zh) * 2019-03-22 2020-10-23 深圳蓝胖子机器人有限公司 倒包控制方法、装置及机器人
CN110395515B (zh) * 2019-07-29 2021-06-11 深圳蓝胖子机器智能有限公司 一种货物识别抓取方法、设备以及存储介质
CN110717404B (zh) * 2019-09-17 2021-07-23 禾多科技(北京)有限公司 单目相机障碍物感知方法
CN111003380A (zh) * 2019-12-25 2020-04-14 深圳蓝胖子机器人有限公司 一种智能回收垃圾的方法、系统、设备
CN111015662B (zh) * 2019-12-25 2021-09-07 深圳蓝胖子机器智能有限公司 一种动态抓取物体方法、系统、设备和动态抓取垃圾方法、系统、设备
CN111168686B (zh) * 2020-02-25 2021-10-29 深圳市商汤科技有限公司 物体的抓取方法、装置、设备及存储介质
CN111521142B (zh) * 2020-04-10 2022-02-01 金瓜子科技发展(北京)有限公司 漆面厚度的测量方法、装置及漆膜仪
CN112170781B (zh) * 2020-09-25 2022-02-22 泰州鑫宇精工股份有限公司 一种提升淋砂机环保性能的方法和装置
CN112605986B (zh) * 2020-11-09 2022-04-19 深圳先进技术研究院 自动取货的方法、装置、设备及计算机可读存储介质
CN114029250B (zh) * 2021-10-27 2022-11-18 因格(苏州)智能技术有限公司 物品分拣方法与系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140147240A1 (en) * 2011-06-29 2014-05-29 Mitsubishi Electric Corporation Component supply apparatus
CN105499155A (zh) * 2016-02-01 2016-04-20 先驱智能机械(深圳)有限公司 物体的抓取与分拣方法及分拣盘
CN105772407A (zh) * 2016-01-26 2016-07-20 耿春茂 一种基于图像识别技术的垃圾分类机器人
CN107009358A (zh) * 2017-04-13 2017-08-04 武汉库柏特科技有限公司 一种基于单相机的机器人无序抓取装置及方法
CN108154098A (zh) * 2017-12-20 2018-06-12 歌尔股份有限公司 一种机器人的目标识别方法、装置和机器人
CN109086736A (zh) * 2018-08-17 2018-12-25 深圳蓝胖子机器人有限公司 目标获取方法、设备和计算机可读存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103963058B (zh) * 2014-04-30 2016-01-06 重庆环视高科技有限公司 基于多方位视觉定位的机械手臂抓取控制系统及方法
CN107694962A (zh) * 2017-11-07 2018-02-16 陕西科技大学 一种基于机器视觉与bp神经网络的水果自动分拣方法
CN108171748B (zh) * 2018-01-23 2021-12-07 哈工大机器人(合肥)国际创新研究院 一种面向机器人智能抓取应用的视觉识别与定位方法
CN108399639B (zh) * 2018-02-12 2021-01-26 杭州蓝芯科技有限公司 基于深度学习的快速自动抓取与摆放方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140147240A1 (en) * 2011-06-29 2014-05-29 Mitsubishi Electric Corporation Component supply apparatus
CN105772407A (zh) * 2016-01-26 2016-07-20 耿春茂 一种基于图像识别技术的垃圾分类机器人
CN105499155A (zh) * 2016-02-01 2016-04-20 先驱智能机械(深圳)有限公司 物体的抓取与分拣方法及分拣盘
CN107009358A (zh) * 2017-04-13 2017-08-04 武汉库柏特科技有限公司 一种基于单相机的机器人无序抓取装置及方法
CN108154098A (zh) * 2017-12-20 2018-06-12 歌尔股份有限公司 一种机器人的目标识别方法、装置和机器人
CN109086736A (zh) * 2018-08-17 2018-12-25 深圳蓝胖子机器人有限公司 目标获取方法、设备和计算机可读存储介质

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112883881A (zh) * 2021-02-25 2021-06-01 中国农业大学 一种条状农产品无序分拣方法及装置
CN112883881B (zh) * 2021-02-25 2023-10-31 中国农业大学 一种条状农产品无序分拣方法及装置
CN113325950A (zh) * 2021-05-27 2021-08-31 百度在线网络技术(北京)有限公司 功能控制方法、装置、设备以及存储介质
CN113325950B (zh) * 2021-05-27 2023-08-25 百度在线网络技术(北京)有限公司 功能控制方法、装置、设备以及存储介质
CN113920142A (zh) * 2021-11-11 2022-01-11 江苏昱博自动化设备有限公司 一种基于深度学习的分拣机械手多物体分拣方法
CN113920142B (zh) * 2021-11-11 2023-09-26 江苏昱博自动化设备有限公司 一种基于深度学习的分拣机械手多物体分拣方法
CN115359112A (zh) * 2022-10-24 2022-11-18 爱夫迪(沈阳)自动化科技有限公司 一种高位料库机器人的码垛控制方法
CN115359112B (zh) * 2022-10-24 2023-01-03 爱夫迪(沈阳)自动化科技有限公司 一种高位料库机器人的码垛控制方法

Also Published As

Publication number Publication date
CN109086736A (zh) 2018-12-25

Similar Documents

Publication Publication Date Title
WO2020034872A1 (fr) Target acquisition method and device, and computer-readable storage medium
CN107767423B (zh) 一种基于双目视觉的机械臂目标定位抓取方法
CN109483554B (zh) 基于全局和局部视觉语义的机器人动态抓取方法及系统
US10124489B2 (en) Locating, separating, and picking boxes with a sensor-guided robot
US9802317B1 (en) Methods and systems for remote perception assistance to facilitate robotic object manipulation
CN108827154B (zh) 一种机器人无示教抓取方法、装置及计算机可读存储介质
CN109986560B (zh) 一种面向多目标种类的机械臂自适应抓取方法
WO2019114339A1 (fr) Procédé et dispositif de correction de mouvement de bras robotisé
CN109213202B (zh) 基于光学伺服的货物摆放方法、装置、设备和存储介质
WO2022042304A1 (fr) Procédé et appareil pour identifier un contour de lieu, support lisible par ordinateur et dispositif électronique
CN108415434B (zh) 一种机器人调度方法
JP7377627B2 (ja) 物体検出装置、物体把持システム、物体検出方法及び物体検出プログラム
US10957067B2 (en) Control apparatus, object detection system, object detection method and program
WO2022156593A1 (fr) Procédé et appareil de détection d'objet cible, et dispositif électronique, support d'enregistrement et programme
WO2022188410A1 (fr) Procédé et appareil de commande d'assemblage de dispositif automatique basés sur un manipulateur
CN117124302B (zh) 一种零件分拣方法、装置、电子设备及存储介质
JP7171294B2 (ja) 情報処理装置、情報処理方法及びプログラム
CN113052907B (zh) 一种动态环境移动机器人的定位方法
CN111275758B (zh) 混合型3d视觉定位方法、装置、计算机设备及存储介质
CN110175523B (zh) 一种自移动机器人动物识别与躲避方法及其存储介质
WO2023036212A1 (fr) Procédé de localisation d'étagère, procédé et appareil d'amarrage d'étagère, dispositif et support
CN108733076B (zh) 一种无人机抓取目标物体的方法、装置及电子设备
JP2018146347A (ja) 画像処理装置、画像処理方法、及びコンピュータプログラム
JP6041710B2 (ja) 画像認識方法
US20230419605A1 (en) Map generation apparatus, map generation method, and non-transitory computer-readable medium storing program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19850043

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19850043

Country of ref document: EP

Kind code of ref document: A1