WO2022107943A1 - Intelligent smart logistics automation information processing device - Google Patents

Intelligent smart logistics automation information processing device

Info

Publication number
WO2022107943A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
center
robot arm
image
gravity
Prior art date
Application number
PCT/KR2020/016547
Other languages
French (fr)
Korean (ko)
Inventor
이상설
장성준
박종희
Original Assignee
Korea Electronics Technology Institute (한국전자기술연구원)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Korea Electronics Technology Institute (한국전자기술연구원)
Publication of WO2022107943A1 publication Critical patent/WO2022107943A1/en

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 - Programme-controlled manipulators
    • B25J 9/16 - Programme controls
    • B25J 9/1602 - Programme controls characterised by the control system, structure, architecture
    • B25J 9/161 - Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J 9/1612 - Programme controls characterised by the hand, wrist, grip control
    • B25J 9/1656 - Programme controls characterised by programming, planning systems for manipulators
    • B25J 9/1664 - Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J 9/1694 - Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697 - Vision controlled systems
    • B25J 19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J 19/02 - Sensing devices
    • B25J 19/021 - Optical sensing devices
    • B25J 19/023 - Optical sensing devices including video camera means
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 - Administration; Management
    • G06Q 10/08 - Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q 10/087 - Inventory or stock management, e.g. order filling, procurement or balancing against orders
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/50 - Depth or shape recovery
    • G06T 7/60 - Analysis of geometric attributes
    • G06T 7/66 - Analysis of geometric attributes of image moments or centre of gravity

Definitions

  • the present invention relates to smart logistics automation technology, and more particularly, to a method and apparatus for controlling an operation of a robot arm gripping an object using a camera image.
  • in conventional robot automation at industrial production and logistics sites, the smart camera system separates the camera from the information processing (e.g., object recognition), which makes installation difficult and the cost high. Such a structure is hard to use in fields that require a lightweight system.
  • in addition, to be applied to a logistics robot, the camera device must be designed to be lightweight, and device design technology that satisfies this requirement is lacking.
  • the present invention has been devised to solve the above problems, and an object of the present invention is to provide a method for controlling a robot arm that grips an object using a monocular camera or a depth camera, as a way of applying high-complexity deep-learning-based object recognition and position estimation to a lightweight embedded system.
  • a method for controlling a robot arm includes: measuring optical flow while moving a camera in the distance direction of an object; checking, based on the measured optical flow, whether the center of the camera image coincides with the center of gravity of the object; and, if they are determined not to coincide, moving the camera on a plane perpendicular to the distance direction of the object.
  • if the centers are confirmed to coincide, the robot arm control method may further include controlling the robot arm to move to the center of gravity of the object and grip the object.
  • the method for controlling a robot arm may further include: acquiring an image; detecting an object in the image; and estimating the distance to the detected object.
  • in the moving step, the distance per pixel of the object is calculated based on the distance to the object, the number of pixels from the center of the camera image to the center of gravity of the object is counted, and the distance per pixel is multiplied by the number of pixels to determine the amount of camera movement.
  • the image may be an RGB image or a depth image. And, in the moving step, the direction of the camera may be rotated.
  • a robot arm control device includes: a camera; and a processor that measures optical flow while moving the camera in the distance direction of an object, checks, based on the measured optical flow, whether the center of the camera image coincides with the center of gravity of the object, and, if they do not coincide, moves the camera on a plane perpendicular to the distance direction of the object.
  • FIG. 3 is a view showing a smart logistics system to which the present invention is applicable
  • FIG. 4 is an external perspective view of a robot arm control system according to an embodiment of the present invention.
  • FIG. 5 is a view illustrating cases in which the center of gravity of the object is located at the center of the camera image
  • FIG. 6 is a flowchart provided for the description of a method for controlling a robot arm according to an embodiment of the present invention
  • FIG. 8 is a diagram showing the relationship between the FOV of the camera, the distance to the object and the size of the object;
  • FIG. 9 is a view showing the process of matching the camera center to the center of gravity of the object while grasping the movements at the edges of the object through the optical flow;
  • 10 and 11 are views showing a case in which the plane on which the center of gravity of the object is located does not coincide with the camera image plane;
  • FIG. 12 is an internal block diagram of the robot arm control system shown in FIG. 4.
  • an intelligent smart logistics automation information processing device is provided. Specifically, we present a method to directly control a robot arm without going through a server in a low-spec embedded system using a monocular camera image or a depth camera and a lightweight deep learning network.
  • FIG. 3 is a diagram illustrating a smart logistics system to which the present invention is applicable.
  • the smart logistics system to which the present invention is applicable is configured to include a robot arm 10 and a robot arm control system 100, as shown.
  • the robot arm 10 moves the target object to a predetermined position after picking it using a suction device. Picking the object's center of gravity is stable and consumes less power. Therefore, it is necessary to find the center of gravity of the object and move the robot arm 10 to the corresponding position.
  • the robot arm control system 100 detects an object in an image generated by a camera (monocular camera or depth camera), and controls the robot arm 10 to move to the center of gravity of the detected object.
  • since the robot arm 10 is set to move toward the center of the camera image, the robot arm 10 can move to the center of gravity of the object once that center of gravity is located at the center of the camera image.
  • FIG. 5 exemplifies cases in which the center of gravity of the object is located at the center of the camera image.
  • circles and squares are the top surfaces of objects displayed in the camera image, and the + sign indicates the center of the camera image.
  • FIG. 6 is a flowchart provided for explaining a method for controlling a robot arm according to an embodiment of the present invention.
  • the robot arm control system 100 acquires an image using a camera (S210), and detects/classifies an object from the acquired image (S220).
  • the image acquired in step S210 may be an RGB image or a depth image.
  • object detection in step S220 may be performed with a lightweight deep learning network trained to detect and classify objects from an input image.
  • the robot arm control system 100 estimates the distance to the object detected in step S220 (S230).
  • if the image acquired in step S210 is a depth image, the distance estimation in step S230 is straightforward.
  • if the image acquired in step S210 is an RGB image, the distance in step S230 can be estimated by comparing the actual size of the detected object (pre-stored) with its size in the image.
  • the optical flow is then measured while the camera is moved downward along the distance direction of the object detected in step S220 (S240), and whether the center of the camera image coincides with the center of the object is checked based on the measured optical flow (S250).
  • if they are confirmed not to coincide in step S250 (S260-N), the camera is moved on a plane perpendicular to the distance direction of the object (S270).
  • in step S250, whether the centers coincide can be checked by identifying the movements at the edges of the object from the optical flow; if all the detected movements are the same, the center of the camera image is determined to coincide with the center of gravity of the object.
  • the center of gravity of each object is pre-stored in the robot arm control system 100.
  • when the camera is moved in step S270, the distance to the object estimated in step S230 is used.
  • specifically, the FOV of the camera is as shown in FIG. 8, and the size of the object is known to the robot arm control system 100, so the distance per pixel of the object can be calculated from the distance to the object.
  • therefore, the number of pixels from the camera center to the center of the object is counted, and the distance per pixel is multiplied by that number of pixels to determine the amount of camera movement.
  • when step S250 confirms that the center of the camera image coincides with the center of gravity of the object (S260-Y), the robot arm 10 is moved to the center of the object and controlled to grip the object (S260).
  • step S260 can be accomplished by lowering the robot arm 10 until the distance to the object estimated in step S230 becomes 0.
  • if the plane on which the center of gravity of the object lies does not coincide with the camera image plane, the robot arm control system 100 may hold the center-of-gravity information of the object as 3D information, as shown in FIG. 11.
  • in that case, step S270 performs a direction change in addition to moving the camera; at this time, the direction of the robot arm 10 must be switched in conjunction with the direction of the camera.
  • the robot arm control system 100 is configured to include a camera 110, a processor 120, a control unit 130, and a storage unit 140, as shown in FIG. 12.
  • the camera 110 is a means for acquiring an image of an object, and is implemented as a monocular camera or a depth camera. Furthermore, the camera 110 may be implemented with a specification that includes both.
  • the processor 120 runs a deep learning network for detecting/classifying objects in the image, estimates the distance to the detected object, and measures the optical flow, controlling the camera 110 so that the camera center coincides with the center of gravity of the object.
  • the controller 130 controls the robot arm 10 to move to the center of the object to grip the object.
  • the storage unit 140 stores information on the center of gravity of the object, and provides a storage space necessary for the processor 120 to function.
  • the technical idea of the present invention can also be applied to a computer-readable recording medium containing a computer program for performing the functions of the apparatus and method according to the present embodiment.
  • the technical ideas according to various embodiments of the present invention may be implemented in the form of computer-readable codes recorded on a computer-readable recording medium.
  • the computer-readable recording medium may be any data storage device readable by the computer and capable of storing data.
  • the computer-readable recording medium may be a ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical disk, hard disk drive, or the like.
  • the computer-readable code or program stored in the computer-readable recording medium may be transmitted through a network connected between computers.

Abstract

An intelligent smart logistics automation information processing device is provided. In a robot arm control method according to an embodiment of the present invention, optical flow is measured while a camera is moved in the distance direction of an object, whether the center of the camera image coincides with the center of gravity of the object is checked on the basis of the measured optical flow, and, when the centers are determined not to coincide, the camera is moved on a plane perpendicular to the distance direction of the object. Accordingly, image generation and object recognition with a lightweight camera and a lightweight deep learning network, together with object-center tracking through optical flow measurement, allow a logistics object to be picked stably and with low power in a small-quantity, multi-product logistics system.

Description

Intelligent smart logistics automation information processing device
The present invention relates to smart logistics automation technology and, more particularly, to a method and apparatus for controlling the operation of a robot arm that grips an object using a camera image.
In conventional robot automation at industrial production and logistics sites, smart camera systems separate the camera from the information processing (e.g., object recognition), which makes them difficult to install and expensive. Such a structure is hard to use in fields that require a lightweight system.
In addition, building and operating a server system incurs continuing extra cost, and every system must be replaced periodically to improve performance.
Furthermore, to be applied to a logistics robot the camera device must be designed to be lightweight, and device design technology that satisfies this requirement is lacking.
The present invention has been devised to solve the above problems. Its object is to provide a method for controlling a robot arm that grips an object using a monocular camera or a depth camera, as a way of applying high-complexity deep-learning-based object recognition and position estimation to a lightweight embedded system.
According to an embodiment of the present invention for achieving this object, a robot arm control method includes: measuring optical flow while moving a camera in the distance direction of an object; checking, based on the measured optical flow, whether the center of the camera image coincides with the center of gravity of the object; and, if they are determined not to coincide, moving the camera on a plane perpendicular to the distance direction of the object.
In the checking step, movements at the edges of the object are identified from the optical flow; if the movements are all the same, the center of the camera image is determined to coincide with the center of gravity of the object.
If the centers are confirmed to coincide, the robot arm control method may further include controlling the robot arm to move to the center of gravity of the object and grip the object.
The robot arm control method may further include: acquiring an image; detecting an object in the image; and estimating the distance to the detected object.
In the moving step, the distance per pixel of the object is calculated based on the distance to the object, the number of pixels from the center of the camera image to the center of gravity of the object is counted, and the distance per pixel is multiplied by the number of pixels to determine the amount of camera movement.
The image may be an RGB image or a depth image. In the moving step, the direction of the camera may also be rotated.
According to another embodiment of the present invention, a robot arm control device includes: a camera; and a processor that measures optical flow while moving the camera in the distance direction of an object, checks, based on the measured optical flow, whether the center of the camera image coincides with the center of gravity of the object, and, if they do not coincide, moves the camera on a plane perpendicular to the distance direction of the object.
As described above, according to the embodiments of the present invention, image generation with a lightweight camera, object recognition with a lightweight deep learning network, and object-center tracking through optical flow measurement enable stable, low-power picking of logistics objects in a small-quantity, multi-product logistics system.
FIG. 1 illustrates examples of industrial sites and object recognition conditions;
FIG. 2 illustrates the inverse relationship between network complexity and performance;
FIG. 3 shows a smart logistics system to which the present invention is applicable;
FIG. 4 is an external perspective view of a robot arm control system according to an embodiment of the present invention;
FIG. 5 illustrates cases in which the center of gravity of an object is located at the center of the camera image;
FIG. 6 is a flowchart provided to describe a robot arm control method according to an embodiment of the present invention;
FIG. 7 shows the movements of the edges of an object identified through optical flow;
FIG. 8 shows the relationship between the camera FOV, the distance to an object, and the size of the object;
FIG. 9 shows the process of bringing the camera center to the center of gravity of the object while identifying edge movements through optical flow;
FIGS. 10 and 11 show a case in which the plane on which the center of gravity of the object lies does not coincide with the camera image plane;
FIG. 12 is an internal block diagram of the robot arm control system shown in FIG. 4.
Hereinafter, the present invention is described in more detail with reference to the drawings.
An embodiment of the present invention presents an intelligent smart logistics automation information processing device. Specifically, it presents a method for directly controlling a robot arm, without going through a server, on a low-spec embedded system using a monocular camera or a depth camera together with a lightweight deep learning network.
Logistics automation in smart factories, marts, convenience stores, and the like requires machine-learning and deep-learning-based technology for accurately recognizing objects in the various environments and object states to be handled.
In particular, as shown in FIG. 1, industrial sites differ in complexity (background, object diversity, etc.), lighting (e.g., illuminance), and product characteristics (size, shape, material, arrangement, etc.) depending on the work environment. Even for the same object, the surrounding environment, object appearance, and texture vary so widely that precise object recognition with a generalized network model is very difficult.
Industrial sites demand high-precision recognition for reasons such as work efficiency, but network performance and execution speed generally have a trade-off relationship, as shown in FIG. 2.
This is because, given the nature of deep learning, a high-complexity network with many activation functions and features is better able to separate nonlinear data that is difficult to classify. The amount of computation therefore has to be weighed against the precision required at the target industrial site.
FIG. 3 shows a smart logistics system to which the present invention is applicable. As illustrated, it comprises a robot arm 10 and a robot arm control system 100.
The robot arm 10 picks a target object with a suction device and moves it to a predetermined position. Picking at the center of gravity of the object is stable and consumes less power, so the center of gravity must be found and the robot arm 10 moved to that position.
FIG. 4 is an external perspective view of the robot arm control system 100 according to an embodiment of the present invention. The robot arm control system 100 detects an object in an image produced by a camera (a monocular camera or a depth camera) and controls the robot arm 10 so that it moves to the center of gravity of the detected object.
Since the robot arm 10 is set to move toward the center of the camera image, the robot arm 10 can move to the center of gravity of the object once that center of gravity is located at the center of the camera image.
FIG. 5 illustrates cases in which the center of gravity of the object is located at the center of the camera image. In FIG. 5, the circle and square are the top surfaces of objects shown in the camera image, and the + mark indicates the center of the camera image.
Hereinafter, the process by which the robot arm control system 100 controls the robot arm 10 is described in detail with reference to FIG. 6. FIG. 6 is a flowchart provided to describe a robot arm control method according to an embodiment of the present invention.
To control the robot arm 10 so that it moves to the center of gravity of the object, the robot arm control system 100 first acquires an image with the camera (S210) and detects/classifies an object in the acquired image (S220).
The image acquired in step S210 may be an RGB image or a depth image. Object detection in step S220 may be performed with a lightweight deep learning network trained to detect and classify objects from an input image.
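Below is a minimal sketch of how step S220 might run on an embedded system, assuming the lightweight detector has been exported to ONNX. The model file name, input size, input tensor name, and output layout are illustrative assumptions, not details given in the patent.

```python
import cv2
import numpy as np
import onnxruntime as ort

# Hypothetical lightweight detector exported to ONNX for the embedded system.
session = ort.InferenceSession("lightweight_detector.onnx")

def detect_objects(image_bgr):
    # Resize and normalize to the network's assumed input (320x320 RGB, NCHW).
    blob = cv2.resize(image_bgr, (320, 320))[:, :, ::-1].astype(np.float32) / 255.0
    blob = np.transpose(blob, (2, 0, 1))[np.newaxis]
    # Assumed output: rows of [x1, y1, x2, y2, score, class_id].
    detections = session.run(None, {"input": blob})[0]
    return [d for d in detections if d[4] > 0.5]  # keep confident boxes only
```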
Next, the robot arm control system 100 estimates the distance to the object detected in step S220 (S230).
If the image acquired in step S210 is a depth image, the distance estimation in step S230 is straightforward. If it is an RGB image, the distance in step S230 can be estimated by comparing the actual size of the detected object (stored in advance) with its size in the image.
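The RGB-only estimate follows the usual pinhole relation between physical size, apparent size, and distance. The sketch below assumes a calibrated focal length in pixels; the function and variable names are illustrative.

```python
def estimate_distance(real_width_m, width_in_pixels, focal_length_px):
    # Pinhole model: width_px = f_px * real_width / distance, solved for distance.
    return focal_length_px * real_width_m / width_in_pixels

# Example: a 0.30 m wide box spanning 150 px with a 600 px focal length
# is estimated to be 600 * 0.30 / 150 = 1.2 m away.
print(estimate_distance(0.30, 150, 600))  # 1.2
```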
Then, optical flow is measured while the camera is moved downward along the distance direction of the object detected in step S220 (S240), and whether the center of the camera image coincides with the center of the object is checked based on the measured optical flow (S250).
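One possible way to obtain the dense flow field used in step S240 is OpenCV's Farneback algorithm; the patent does not prescribe a particular optical-flow method, so this choice is only an assumption.

```python
import cv2

def measure_flow(prev_frame_bgr, frame_bgr):
    prev_gray = cv2.cvtColor(prev_frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # flow[y, x] = (dx, dy): displacement of each pixel between the two frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return flow
```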
If they are confirmed not to coincide in step S250 (S260-N), the camera is moved on a plane perpendicular to the distance direction of the object (S270).
In step S250, whether the centers coincide can be checked by identifying the movements at the edges of the object from the optical flow; if all the detected movements are the same, the center of the camera image is determined to coincide with the center of gravity of the object. The center of gravity of each object is stored in advance in the robot arm control system 100.
FIG. 7 depicts a situation in which the edges of the object identified through optical flow move strongly on the left side. This means that the camera center and the center of gravity of the object do not coincide.
In the situation shown in FIG. 7, the camera must be moved to the left, the direction in which the edge movement is large.
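A rough sketch of the step S250 check and of choosing the correction direction, under the assumption that the object's edge pixels come from an edge map of its detection mask and that "the movements are all the same" means the flow magnitudes at those pixels agree within a tolerance; none of these details are fixed by the patent.

```python
import cv2
import numpy as np

def centered_on_object(flow, object_mask, tol=0.5):
    edges = cv2.Canny(object_mask, 50, 150) > 0        # edge pixels of the object
    magnitudes = np.linalg.norm(flow[edges], axis=1)    # per-edge-pixel motion
    # Uniform motion at every edge -> camera center sits over the center of gravity.
    return magnitudes.max() - magnitudes.min() < tol

def correction_direction(flow, object_mask):
    edges = cv2.Canny(object_mask, 50, 150) > 0
    ys, xs = np.nonzero(edges)
    mags = np.linalg.norm(flow[ys, xs], axis=1)
    cx = object_mask.shape[1] // 2
    left, right = mags[xs < cx].mean(), mags[xs >= cx].mean()
    # Move toward the side whose edges move more (the left side in FIG. 7).
    return "left" if left > right else "right"
```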
When the camera is moved in step S270, the distance to the object estimated in step S230 is used. Specifically, the FOV of the camera is as shown in FIG. 8, and the size of the object is known to the robot arm control system 100, so the distance per pixel of the object can be calculated from the distance to the object.
Therefore, the number of pixels from the camera center to the center of the object is counted, and the distance per pixel is multiplied by that number of pixels to determine the amount of camera movement.
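A worked sketch of that computation along one image axis: the scene width covered by the image follows from the horizontal FOV and the estimated distance (FIG. 8), and the camera shift is the resulting metres-per-pixel value times the pixel offset between the image center and the object's center of gravity. Variable names are illustrative.

```python
import math

def camera_shift(distance_m, fov_deg, image_width_px, center_px, cog_px):
    # Width of the scene covered by the image at this distance.
    scene_width_m = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    metres_per_pixel = scene_width_m / image_width_px
    offset_px = cog_px - center_px        # signed pixel offset along this axis
    return metres_per_pixel * offset_px   # signed camera movement in metres

# Example: object 1.2 m away, 60 degree FOV, 640 px wide image,
# center of gravity 40 px to the right of the image center.
print(camera_shift(1.2, 60.0, 640, 320, 360))  # about 0.087 m to the right
```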
The process then repeats from step S210.
When step S250 confirms that the center of the camera image coincides with the center of gravity of the object (S260-Y), the robot arm 10 is moved to the center of the object and controlled to grip the object (S260).
Step S260 can be accomplished by lowering the robot arm 10 until the distance to the object estimated in step S230 becomes 0.
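The descent in step S260 can be sketched as a simple loop; the arm and camera interfaces (capture, descend, grip) and the per-frame distance estimator are placeholders, not APIs defined in the patent.

```python
def descend_and_grip(arm, camera, step_m=0.01, contact_threshold_m=0.005):
    # Lower the arm until the estimated distance to the object is (near) zero,
    # then actuate the suction gripper at the center of gravity.
    while True:
        distance = estimate_distance_from_frame(camera.capture())  # hypothetical helper
        if distance <= contact_threshold_m:
            break
        arm.descend(min(step_m, distance))
    arm.grip()
```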
FIG. 9 shows the actual process of bringing the camera center to the center of gravity of the object while identifying edge movements through optical flow.
Meanwhile, as shown in FIG. 10, the plane on which the center of gravity of the object lies may not coincide with the camera image plane. For this case, the robot arm control system 100 may hold the center-of-gravity information of the object as 3D information, as shown in FIG. 11.
In this case, step S270 changes the direction of the camera in addition to moving it. At this time, the direction of the robot arm 10 must also be switched in conjunction with the direction of the camera.
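One way to realize that direction change, assuming the stored 3D information includes the normal of the plane holding the center of gravity, is to rotate the camera (and the arm in lockstep) so the optical axis aligns with that normal. The axis-angle formulation below is an assumption about the implementation, not something the patent specifies.

```python
import numpy as np

def alignment_rotation(camera_axis, surface_normal):
    a = camera_axis / np.linalg.norm(camera_axis)
    b = surface_normal / np.linalg.norm(surface_normal)
    axis = np.cross(a, b)
    angle = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    if np.linalg.norm(axis) < 1e-9:
        return np.zeros(3)                       # axes already aligned
    return axis / np.linalg.norm(axis) * angle   # axis-angle rotation vector

# The same rotation would be commanded to both the camera mount and the robot
# arm so that they stay in sync, as the description requires.
```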
FIG. 12 is an internal block diagram of the robot arm control system 100 shown in FIG. 4. As shown in FIG. 12, the robot arm control system 100 comprises a camera 110, a processor 120, a control unit 130, and a storage unit 140.
The camera 110 is a means for acquiring images of objects and is implemented as a monocular camera or a depth camera. Furthermore, the camera 110 may be implemented as a device that includes both.
The processor 120 runs the deep learning network that detects/classifies objects in the image, estimates the distance to the detected object, and measures the optical flow, controlling the camera 110 so that the camera center coincides with the center of gravity of the object.
The control unit 130 controls the robot arm 10 to move to the center of the object and grip it.
The storage unit 140 stores the center-of-gravity information of the objects and provides the storage space the processor 120 needs to function.
So far, the intelligent smart logistics automation information processing device has been described in detail, taking a robot arm control system as an embodiment.
In the above embodiment, accurate object recognition using depth information, optical flow information, and deep learning allows the gripping/picking operation of the robot arm to be controlled with high precision and at high speed using only a small embedded system.
In particular, high-complexity deep-learning-based object recognition and position estimation are made feasible on a lightweight embedded system, and a structure is presented that can be applied to a single sensor as well as to combinations of multiple sensor types.
In addition, lightweight object recognition and position estimation make the approach applicable to small-quantity, multi-product logistics systems. The result is a model that can be maintained across flexible deep learning devices and new applications: a high-precision smart logistics automation device using an intelligent camera equipped with a low-spec embedded system.
Meanwhile, the technical idea of the present invention can of course also be applied to a computer-readable recording medium containing a computer program that performs the functions of the apparatus and method according to the present embodiment. The technical ideas according to various embodiments of the present invention may also be implemented in the form of computer-readable code recorded on a computer-readable recording medium. The computer-readable recording medium may be any data storage device that can be read by a computer and store data; for example, it may be a ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical disk, hard disk drive, or the like. Computer-readable code or a program stored on the computer-readable recording medium may also be transmitted over a network connecting computers.
In addition, while preferred embodiments of the present invention have been shown and described above, the present invention is not limited to the specific embodiments described. Various modifications can be made by those of ordinary skill in the art without departing from the gist of the invention as claimed in the claims, and such modifications should not be understood separately from the technical spirit or outlook of the present invention.

Claims (8)

  1. A robot arm control method comprising:
    measuring optical flow while moving a camera in the distance direction of an object;
    checking, based on the measured optical flow, whether the center of the camera image coincides with the center of gravity of the object; and
    if they are confirmed not to coincide, moving the camera on a plane perpendicular to the distance direction of the object.
  2. The method according to claim 1,
    wherein the checking step identifies movements at the edges of the object through the optical flow and, if the movements are all the same, determines that the center of the camera image coincides with the center of gravity of the object.
  3. The method according to claim 1, further comprising:
    if the centers are confirmed to coincide, controlling the robot arm to move to the center of gravity of the object and grip the object.
  4. The method according to claim 1, further comprising:
    acquiring an image;
    detecting an object in the image; and
    estimating the distance to the detected object.
  5. The method according to claim 4,
    wherein the moving step calculates the distance per pixel of the object based on the distance to the object, counts the number of pixels from the center of the camera image to the center of gravity of the object, and multiplies the distance per pixel by the number of pixels to determine the amount of camera movement.
  6. The method according to claim 4,
    wherein the image is an RGB image or a depth image.
  7. The method according to claim 1,
    wherein the moving step rotates the direction of the camera.
  8. A robot arm control device comprising:
    a camera; and
    a processor that measures optical flow while moving the camera in the distance direction of an object, checks, based on the measured optical flow, whether the center of the camera image coincides with the center of gravity of the object, and, if they do not coincide, moves the camera on a plane perpendicular to the distance direction of the object.
PCT/KR2020/016547 2020-11-23 2020-11-23 Intelligent smart logistics automation information processing device WO2022107943A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020200157290A KR20220070592A (en) 2020-11-23 2020-11-23 Intelligent smart logistics automation information processing device
KR10-2020-0157290 2020-11-23

Publications (1)

Publication Number Publication Date
WO2022107943A1 true WO2022107943A1 (en) 2022-05-27

Family

ID=81709173

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/016547 WO2022107943A1 (en) 2020-11-23 2020-11-23 Intelligent smart logistics automation information processing device

Country Status (2)

Country Link
KR (1) KR20220070592A (en)
WO (1) WO2022107943A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120045962A (en) * 2010-11-01 2012-05-09 연세대학교 산학협력단 Robot and method of controlling posture of the same
KR101359968B1 (en) * 2012-08-16 2014-02-12 주식회사 포스코 Apparatus and method for building refractory in converter
US20190105113A1 (en) * 2016-03-31 2019-04-11 Koninklijke Philips N.V. Image guided robotic system for tumor aspiration
KR20180080630A (en) * 2017-01-04 2018-07-12 삼성전자주식회사 Robot and electronic device for performing hand-eye calibration
JP2019081208A (en) * 2017-10-30 2019-05-30 株式会社東芝 Information processing device and robot arm control system

Also Published As

Publication number Publication date
KR20220070592A (en) 2022-05-31

Similar Documents

Publication Publication Date Title
US20210394367A1 (en) Systems, Devices, Components, and Methods for a Compact Robotic Gripper with Palm-Mounted Sensing, Grasping, and Computing Devices and Components
Holland et al. CONSIGHT-I: a vision-controlled robot system for transferring parts from belt conveyors
JP2018205929A (en) Learning device, learning method, learning model, detection device and gripping system
CN110211180A (en) A kind of autonomous grasping means of mechanical arm based on deep learning
CN108748149B (en) Non-calibration mechanical arm grabbing method based on deep learning in complex environment
WO2012124933A2 (en) Device and method for recognizing the location of a robot
CN112203809B (en) Information processing apparatus and method, robot control apparatus and method, and storage medium
CN112518748A (en) Automatic grabbing method and system of vision mechanical arm for moving object
CN112060085B (en) Robot operation pose control method based on visual-touch multi-scale positioning
CN115213896A (en) Object grabbing method, system and equipment based on mechanical arm and storage medium
WO2016209029A1 (en) Optical homing system using stereoscopic camera and logo and method thereof
WO2022107943A1 (en) Intelligent smart logistics automation information processing device
WO2020111327A1 (en) Contactless device and method for recognizing object attribute
WO2023085463A1 (en) Method and device for processing data for smart logistics control
WO2020171527A1 (en) Mobile robot and robot arm alignment method thereof
Chang Binocular vision-based 3-D trajectory following for autonomous robotic manipulation
CN114347028B (en) Robot tail end intelligent grabbing method based on RGB-D image
Kheng et al. Stereo vision with 3D coordinates for robot arm application guide
Leroux et al. Robot grasping of unknown objects, description and validation of the function with quadriplegic people
Rajpar et al. Location and tracking of robot end-effector based on stereo vision
Dev Anand et al. Robotics in online inspection and quality control using moment algorithm
CN113524258B (en) Testing device and testing platform for selecting grabbing points
TWI788253B (en) Adaptive mobile manipulation apparatus and method
WO2020180171A1 (en) Thickness measuring apparatus
WO2022097929A1 (en) Robot and control method therefor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20962549

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20962549

Country of ref document: EP

Kind code of ref document: A1