TW201933177A - Mobile robots to generate reference maps for localization - Google Patents

Mobile robots to generate reference maps for localization

Info

Publication number
TW201933177A
Authority
TW
Taiwan
Prior art keywords
robot
objects
control system
reference map
given area
Prior art date
Application number
TW107138043A
Other languages
Chinese (zh)
Other versions
TWI684136B (en)
Inventor
Jonathan Salfity
David Murphy
Original Assignee
Hewlett-Packard Development Company, L.P.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett-Packard Development Company, L.P.
Publication of TW201933177A
Application granted
Publication of TWI684136B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/32 Structuring or formatting of map data
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/14 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by recording the course traversed by the object
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Image Analysis (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

An example robot performs a scan to obtain image data of a given area. The robot performs image analysis on the image data to detect a set of unwanted objects, and generates a reference map that excludes the set of unwanted objects, where the reference map is associated with the position of the robot at the time of the scan.

Description

Mobile robots to generate reference maps for localization

The present invention relates generally to mobile robots that generate reference maps for localization.

In the field of robotics, robots that are mobile in free space typically localize themselves, meaning that they determine their own position relative to a reference point or reference frame. Mobile robots commonly use reference maps to localize themselves.

According to one possible implementation of the present invention, a robot is provided that includes a set of image sensors, a spatial determination resource, a control system, and a propulsion mechanism. The control system: performs a scan of a given area to obtain image data of the given area using the set of image sensors; uses the spatial determination resource to determine a position of the robot relative to a reference point at the time of the scan; performs image analysis on the image data to detect a set of unwanted objects depicted by the image data, the set of unwanted objects being at least one of a dynamic object or an unwanted object of a predetermined category; and generates a reference map that excludes the set of unwanted objects, the reference map being associated with the position of the robot at the time of the scan.

Examples provide a robot that generates a reference map for enabling the robot (or another robot) to perform localization. The reference map can be generated by the robot performing a scan of a given area to obtain image data, and then performing image analysis on the image data to detect unwanted objects and exclude them from the reference map.

In some examples, an example robot includes a set of image sensors, a spatial determination resource, a control system, and a propulsion mechanism. The control system is operable to perform a scan of a given area, and to use the spatial determination resource to determine a position of the robot relative to a reference point at the time of the scan. The robot can perform image analysis on the image data to determine a set of unwanted objects depicted by the image data, and generate a reference map that excludes the set of unwanted objects. In such cases, the reference map is associated with the position of the robot at the time of the scan. As described with examples, the reference map can serve as a resource that enables the robot (or another robot) to subsequently localize itself as it travels through the given area.

Once a reference map is generated for a given area, a robot (the robot performing localization) can use the reference map to determine its position relative to a reference frame or point. Specifically, the robot can capture image data of the given area and then compare the current image data against the reference map to identify visual landmarks, which may form a basis for comparison between the current image and the reference map. The robot performing localization can then determine its position within the given area, relative to a reference frame or reference point, by comparing the shapes and features of selected objects depicted in the current image against the shapes of the same objects depicted in the reference map. The shapes and features that can form a basis for comparison include the pixel dimensions of individual objects, including those needed to determine the distance between two objects, as well as the approximate shape or size of the depicted objects. By comparing the relative size of an object as depicted in two images (e.g., the current image and the reference map), the robot may be able to triangulate its own position in the given area relative to the position from which the image data of the reference map was previously captured.
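To make this localization principle concrete, the following is a minimal Python sketch of the size-ratio distance estimate and a two-landmark triangulation. The function names, the pinhole-camera simplification, and the two-landmark circle intersection are illustrative assumptions of this sketch, not part of the claimed method; a practical system would fuse bearing angles and many landmarks.

```python
import math

def estimate_distance(ref_pixel_width: float, ref_distance_m: float,
                      cur_pixel_width: float) -> float:
    """Under a pinhole-camera model, apparent width is inversely
    proportional to distance, so the current distance to a landmark is
    ref_distance * (ref_pixel_width / cur_pixel_width)."""
    return ref_distance_m * (ref_pixel_width / cur_pixel_width)

def triangulate_2d(landmark_a, landmark_b):
    """Intersect distance circles around two landmarks with known map
    coordinates. Each argument is ((x, y), distance_m). Returns the two
    candidate robot positions, or None if the measurements are
    inconsistent (the circles do not intersect)."""
    (x0, y0), r0 = landmark_a
    (x1, y1), r1 = landmark_b
    d = math.hypot(x1 - x0, y1 - y0)
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        return None
    a = (r0 ** 2 - r1 ** 2 + d ** 2) / (2 * d)
    h = math.sqrt(max(r0 ** 2 - a ** 2, 0.0))
    mx, my = x0 + a * (x1 - x0) / d, y0 + a * (y1 - y0) / d
    return ((mx + h * (y1 - y0) / d, my - h * (x1 - x0) / d),
            (mx - h * (y1 - y0) / d, my + h * (x1 - x0) / d))

# Example: a landmark that filled 200 px at 2.0 m in the reference map
# now fills 100 px, so it is roughly 4.0 m away.
d_landmark = estimate_distance(200, 2.0, 100)
candidates = triangulate_2d(((0.0, 0.0), d_landmark), ((3.0, 0.0), 2.5))
```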

Examples recognize that localization using reference maps can be computationally expensive for robots. Moreover, there is enough variability in the way robots operate that an individual robot may misidentify which stationary, persistent objects can be reliably depended upon.

One or more examples described herein may be implemented through the use of instructions that are executable by one or more processors. These instructions may be carried on a computer-readable medium. The machines shown or described with the figures below provide examples of processing resources and computer-readable media on which instructions for implementing the examples described herein can be carried and/or executed. In particular, the various machines shown with the examples described herein include processor(s) and various forms of memory for holding data and instructions. Examples of computer-readable media include permanent memory storage devices, such as hard drives on personal computers or servers. Other examples of computer storage media include portable storage units, such as CD or DVD units, flash memory (such as carried on smartphones, multifunctional devices, or tablets), and magnetic memory. Computers, terminals, and network-enabled devices (e.g., mobile devices such as cellular phones) are all examples of machines and devices that utilize processors, memory, and instructions stored on computer-readable media. Additionally, some examples may be implemented in the form of a computer program, or a computer-usable carrier medium capable of carrying such a program.

FIG. 1 illustrates an example robot for generating a reference map used to perform localization. In particular, FIG. 1 depicts a robot 100 having a control system 130, a propulsion mechanism 140, a set of image sensors 150, and a spatial determination resource 152. As described, the control system 130 can use the image sensors 150 to obtain image data from a given area 101. The control system 130 may implement map generation logic 112 to generate a reference map 115 from the image data of the given area. In generating the reference map 115, the map generation logic 112 may use semantic segmentation to detect and exclude unwanted objects from the reference map 115. Such unwanted objects include dynamic objects, as well as stationary objects that by nature do not remain persistently in their respective positions (e.g., objects that are easily moved). Thus, the map generation logic 112 can generate the reference map 115 to include stationary, persistent objects rather than dynamic or non-persistently placed objects. In some examples, the map generation logic 112 can increase the ratio of the number of static, persistent objects depicted in a given reference map 115 to the number of non-persistent or dynamic objects. In some implementations, the map generation logic 112 can be implemented to include only those objects that meet a threshold confidence level of being a stationary, persistent object with respect to the given area.
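As one possible illustration of the threshold confidence filtering just described, the Python sketch below keeps only detections classified as stationary, persistent objects with at least a threshold confidence. The class set, the detection tuple layout, and the 0.8 threshold are hypothetical.

```python
# Hypothetical set of classes treated as stationary and persistent.
PERSISTENT_CLASSES = {"wall", "door", "copier", "table_fixture"}

def filter_for_reference_map(detections, confidence_threshold=0.8):
    """detections: iterable of (label, confidence, boundary) tuples.
    Returns only detections meeting the persistence and confidence
    criteria for inclusion in the reference map."""
    return [(label, conf, boundary)
            for label, conf, boundary in detections
            if label in PERSISTENT_CLASSES and conf >= confidence_threshold]
```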

In addition to selecting which objects should appear in the reference map 115, the control system 130 also associates a coordinate or position identifier with the position of the robot 100 at the time the reference map 115 was captured. More specifically, the control system 130 determines a coordinate or other position identifier that is specific to the set of sensors that produced the reference map 115. Various techniques can be used to determine the position of the robot 100 relative to a reference frame or point. For example, the robot 100 may be situated at the center of the given area, or aligned with a marker visible to the sensors. Furthermore, the robot 100 may be equipped with a spatial determination resource 152 that enables the robot to determine its position through self-monitoring. For example, the robot 100 may include motion sensors (e.g., an accelerometer, a gyroscope, and/or an inertial measurement unit (IMU)) that the robot 100 can use to track its own position relative to a reference point. Additionally or alternatively, the spatial determination resource 152 may correspond to an odometer that can track, for example, the number of wheel rotations of the propulsion mechanism 140 as the robot 100 moves from a reference point with a known position or coordinate (e.g., the entrance of a given room). The control system 130 can track the odometer and evaluate information from the motion sensors to determine its own relative position within the given area 101 at the time the scan for obtaining the image data is performed. The control system 130 can then associate the determined position of the robot 100 at the time the scan was performed with the resulting reference map 115 generated from the scanned image data. In this way, the reference map 115 can subsequently be used for localization, by enabling image data to be triangulated into distance information relative to the position of the robot 100 at the time the scan was performed. That distance information can then be converted into a coordinate relative to the position of the robot 100 at the time the scan for the reference map was performed.
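A minimal dead-reckoning sketch of how the spatial determination resource 152 could integrate wheel odometry into a pose relative to the reference point follows. The differential-drive geometry, the first-order update, and all parameter values are assumptions of this sketch.

```python
import math

class OdometryTracker:
    """Tracks (x, y, heading) relative to a reference point by
    integrating wheel-encoder ticks; a coarse first-order update."""

    def __init__(self, wheel_radius_m=0.05, track_width_m=0.30,
                 ticks_per_rev=1024):
        self.x = self.y = self.heading = 0.0
        self.m_per_tick = 2 * math.pi * wheel_radius_m / ticks_per_rev
        self.track = track_width_m

    def update(self, left_ticks: int, right_ticks: int) -> None:
        dl = left_ticks * self.m_per_tick
        dr = right_ticks * self.m_per_tick
        self.heading += (dr - dl) / self.track  # turn from wheel difference
        forward = (dl + dr) / 2.0
        self.x += forward * math.cos(self.heading)
        self.y += forward * math.sin(self.heading)

    def pose(self):
        """Pose to associate with a reference map at scan time."""
        return (self.x, self.y, self.heading)
```

In practice, IMU readings would typically be fused with such an estimate to correct heading drift; that fusion is omitted from the sketch.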

In some examples, the robot 100 may generate the reference map 115 at a first time, and then use the reference map 115 at a subsequent time to determine its own position within a given area and/or reference frame. Additionally or alternatively, one robot may transmit the reference map 115 to another robot, or to a reference map database. Still further, robots may update a reference map 115 at any time while such maps are in use.

Among other advantages, examples recognize that conventional localization can be computationally expensive and unreliable. For example, the robot performing localization may implement real-time image processing to carry out the complex operations of detecting and comparing objects of a current view against objects of a reference map. When the image data of the current view depicts a crowded scene with multiple moving objects, these operations become even more computationally intensive and unreliable. To mitigate the complexity of such scenes, the robot 100 can use semantic segmentation in generating the reference map 115, reliably detecting the dynamic and/or non-persistent objects of a scene. When such objects are detected, they can be excluded from the reference map 115, so that the reference map is less cluttered with unsuitable objects that might otherwise produce erroneous localization results during subsequent use.

The propulsion mechanism 140 includes an interface 142, at least one motor 144, and a steering system 146. The interface 142 connects the propulsion mechanism 140 to the control system 130 so that the propulsion mechanism 140 can receive instructions from the control system 130. The propulsion mechanism 140 may receive instructions regarding direction and speed from the control system 130. These instructions can be used to drive the at least one motor 144 and to direct the steering system 146. The at least one motor 144 may include one or more motors for propelling the robot. For example, one motor may drive all of the wheels of the robot 100, or each wheel may be driven by its own motor, or any other combination of wheels and motors may be used. The steering system 146 may include mechanical components (e.g., shafts, linkages, hydraulics, belts, etc.) to manipulate the angle of the wheels (e.g., synchronous drive, articulated drive, etc.), or exploit a speed difference between multiple motors (e.g., differential drive, etc.), or any combination thereof, to steer the robot 100 according to the instructions received from the control system 130.
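For the differential-drive case mentioned above, the sketch below shows one conventional way a body velocity command could be converted into the per-wheel speed difference that steers the robot; the unicycle-model conversion and the track width value are assumptions of this sketch.

```python
def drive_command(linear_mps: float, angular_rps: float,
                  track_width_m: float = 0.30):
    """Differential drive: v_left = v - w*L/2 and v_right = v + w*L/2,
    where v is forward speed, w is turn rate, and L is the track width."""
    v_left = linear_mps - angular_rps * track_width_m / 2.0
    v_right = linear_mps + angular_rps * track_width_m / 2.0
    return v_left, v_right
```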

The set of image sensors 150 may include one or more types of cameras, including three-dimensional image-sensing cameras (e.g., a camera paired with a distance sensor, LiDAR, stereo cameras, etc.). For example, the set of image sensors 150 may include a laser sensor that illuminates a target with pulsed laser light and measures the reflected pulses with a sensor. As another example, the set of image sensors 150 may include a camera sensor that passively obtains two-dimensional image data. Still further, the set of image sensors 150 may include a pair of stereo cameras that operate in coordination to produce a three-dimensional rendering of a given object in the scene. Three-dimensional information can also be obtained from motion (e.g., structure from motion).

The control system 130 may include a memory 110 and a processor 120. The memory 110 may be of any form (e.g., RAM, DRAM, etc.) and may include map generation logic 112. The map generation logic 112 may include instructions for controlling the robot 100 as the robot 100 travels through a section or area. The map generation logic 112 may also include instructions that enable the robot 100 to generate a map of the section or area that the robot 100 travels through. The map generation logic 112 may further include data (e.g., models, live data, templates, etc.) to be referenced by the control system 130 during the map construction process, to help determine the identity of objects detected by the robot 100 and to determine a predetermined classification for each of the detected objects. The processor 120 can access the map generation logic 112 from the memory 110 to control the robot 100 and generate the reference map 115. Moreover, as shown in FIG. 1, the control system 130 may be integrated with the robot 100 (e.g., using hardware, firmware, and/or software). More generally, the map generation logic 112 may be implemented as a control system 300, such as shown with an example of FIG. 3. The control system 300 may be implemented in conjunction with the robot 100 or remotely from the robot 100.

FIGS. 2A through 2C illustrate an example of the robot 100 generating a reference map 115 for a given area (e.g., a room of a dwelling). In FIG. 2A, the robot 100 positions itself at a known position 211 to capture image data of a scene. The image data may depict a person 202, a wastebasket 204, a copier 206, and a wall 208. The robot can capture the image data while simultaneously tracking its own position relative to a reference point 210 or reference frame.

In FIG. 2B, the robot 100 uses the map logic 112 and the image data captured from the area 201 to generate the reference map 115. The robot 100 may exclude dynamic objects (e.g., the person 202) and non-persistent objects (e.g., the wastebasket 204). The robot 100 may use, for example, a map generation process as described with an example of FIGS. 4A and 4B to identify the person 202 and the wastebasket 204 as objects that are unwanted for the reference map 115. The reference map 115 may then depict a boundary of the copier 206.

In FIG. 2C, another robot (or alternatively, the same robot 100) may use the reference map 115 at a different time to localize itself. The robot can capture a current view 215 of the scene within the area 201 and use the reference map 115 to identify which object in the scene (e.g., the copier 206) is to be used for localization purposes. The robot 200 can then use the reference map 115 to triangulate from the depicted features of the copier in the current view 215, in order to determine the position of the robot 200 relative to the robot 100 that generated the reference map 115.

FIG. 3 illustrates an example control system for generating a reference map that is associated with the position of the robot at the time of the scan. As described with some examples, a control system 300 can be implemented to use sensor data generated from a robot's sensor set, such as described with an example of FIG. 1.

In FIG. 3, the control system 300 includes a memory 310 and a processor 320. The memory 310 may be of any form, including RAM, DRAM, or ROM. The memory 310 can store instructions, for example, through the installation of software (e.g., an application). The processor 320 can access instructions from the memory 310 to control the robot 100. According to some examples, the processor 320 accesses multiple sets of instructions, including: (i) a first set of instructions 312 to obtain image data from a scan of a given area; (ii) a second set of instructions 314 to perform image analysis on the image data; and (iii) a third set of instructions 316 to generate a reference map that excludes the set of unwanted objects and is associated with the position of the robot at the time of the scan.

In some examples, the control system 300 can be implemented as an integrated component of a working robot, for example, for use with reference map construction operations that such robots routinely perform. For example, the control system 300 can execute the instructions 312-316 in real time as the robot travels through a given area to construct the reference map. Alternatively, the control system 300 can be implemented as a remote or separate entity. For example, the control system 300 may receive sensor data transmitted from the robot 100 using, for example, a wireless communication and/or network channel. In such examples, the control system 300 can use the transmitted sensor data to generate a reference map of the given area, and then transmit the reference map back to the robot once the reference map is generated or updated.

In further variations, the control system 300 may transmit the generated reference map to a robot other than the one used to obtain the sensor data of the given area. For example, the control system may use a first robot to generate a reference map for a given area, and transmit the generated reference map to a second robot, or alternatively to a fleet of robots. As another example, the control system 300 may be implemented on the robot 100 or on a remote entity, to receive the sensor data for the given area from another robot, or alternatively from a sensing device or assembly.

The control system 300 can operate synchronously (e.g., in real time) to construct a reference map for localization purposes using the sensor data being obtained from the robot's sensor set. Alternatively, the instructions 312-316 can be implemented partially or entirely in an asynchronous manner. For example, in implementations in which the control system is integrated with the robot 100, the robot 100 may execute the instructions 312, 314, and/or 316 at a later time, such as when the robot has more computational resources available, or when the robot is offline. Similarly, in an example in which the control system 300 is remote or separate, the control system 300 may execute the instructions 312-316 independently of the operation of the robot.

FIG. 4A illustrates an example method for generating a reference map that is associated with the position of the robot at the time of the scan. Example methods as illustrated in FIGS. 4A and 4B may be implemented using the components illustrated with the examples of FIGS. 1 through 3. Accordingly, references made to elements of FIGS. 1 through 3 are for the purpose of illustrating a suitable element or component for performing a step or sub-step being described.

With reference to an example of FIG. 4A, image data is obtained from a scan of the given area (block 410). The image data can be obtained, for example, from a camera with a depth (or distance) sensor, a LiDAR camera, a pair of stereo cameras, and/or combinations thereof (collectively, "image sensors"). In some variations, the image data is obtained in real time, for example, by a robot with on-board image sensors (a "sensing robot") traveling through the given area. In other variations, the image data is obtained from memory some period of time after the robot has traveled through the given area.

In some examples, the image data is processed by a control system 300 that resides on the robot that obtained the image data. In some variations, the image data is obtained by a control system 300 residing on another robot that is in local communication (e.g., a local wireless link) with the sensing robot that obtained the image data. Still further, the control system 300 that processes the image data may be a remote networked computer, such as a server, in direct or indirect communication with the sensing robot.

Once the image data is obtained, image analysis can be performed to determine a set of unwanted objects. The image analysis can be performed on two- or three-dimensional images captured by the set of image sensors 150. The unwanted objects include dynamic objects and/or objects identified as belonging to a predetermined classification (block 420). The image analysis performed by the control system 300 may include object detection and classification, in which objects are classified by type, category, or instance. Moreover, the classification used to determine when an object is unwanted for a reference map can be based on a determination of permanence. For example, if objects are deemed to be dynamic (e.g., in motion), dynamic by nature, or not fixed in place (e.g., not persistently placed), then such objects may be unwanted for the reference map. In some examples, objects can be identified by type (e.g., chair, table, etc.), and the permanence classification can be based on predetermined characteristics associated with that object type.
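A minimal sketch of this type-based permanence determination follows; the type-to-permanence table is hypothetical and would in practice be learned or configured.

```python
# Hypothetical mapping from object type to permanence classification.
PERMANENCE_BY_TYPE = {
    "person": "dynamic",
    "cat": "dynamic",
    "desk_chair": "non_persistent",  # static by nature but easily moved
    "wastebasket": "non_persistent",
    "wall": "persistent",
    "copier": "persistent",
}

def is_unwanted_for_map(object_type: str) -> bool:
    """Dynamic or non-persistently placed objects are unwanted for the
    reference map; unknown types are excluded conservatively."""
    return PERMANENCE_BY_TYPE.get(object_type, "unknown") != "persistent"
```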

In some variations, the object classification can be provided through image analysis that sorts objects into discrete classes or groups based on similarity. Grouping of similar objects can be used to define any of multiple classification schemes, which over time can define a variety of classifications, including a permanence characteristic reflecting whether an object is fixed or not, as well as other, more granular classifications.

Some variations provide for the assignment of objects to various classifications, classes, or groups to be done probabilistically. The classification of such objects can be based on a confidence score or value (e.g., a value between 0.0 and 1.0) that represents a level of confidence that the classification is likely correct. Thus, for example, the classification of an object as being fixed can reflect a confidence value as to whether the object is likely to be moved in the future.

As described with some examples, a dynamic object may be one that is detected as moving while the given area is being sensed. To identify such objects, the control system 300 can compare image data of a captured scene over multiple closely spaced time intervals. If an object appears in one region of a captured scene during one time interval, and in another region of the captured scene during another time interval, the object can be identified as dynamic because it was moving while the given area was being sensed.
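A minimal sketch of that time-interval comparison: the same tracked object is flagged as dynamic if its centroid shifts more than a tolerance between two closely spaced captures. The centroid representation and the 0.1 m tolerance are hypothetical.

```python
import math

def is_moving(centroid_t0, centroid_t1, tolerance_m=0.1) -> bool:
    """centroid_t0, centroid_t1: (x, y) positions, in map coordinates,
    of one tracked object at two closely spaced capture times."""
    return math.dist(centroid_t0, centroid_t1) > tolerance_m
```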

While time-based object detection can be used to detect such dynamic objects, some examples may also utilize object classification, in which a detected object is determined to belong to a particular class based on the detected characteristics of the object as depicted in a captured scene. In such cases, the detected object can be identified as dynamic even if it does not move while the given area is being sensed. For example, a cat may lie still for a period of time while a given area is being sensed, but the control system 300 will still recognize what the cat is and identify it as a dynamic object.

In a similar manner, object classification can also be used to identify other objects that are deemed unwanted for map generation purposes. Such objects may be static in nature, but not persistently placed (e.g., static and in the same position over an extended duration). Such unwanted objects may include objects that can easily be moved through contact with a dynamic object. For example, a desk chair in a room may be static but is likely to move over time. A large table, on the other hand, may be assumed to be static and persistently placed in the same room. The control system 300 can perform analysis on image data depicting such objects to identify physical characteristics (e.g., shape, signature features, etc.) that are characteristic of an object type or class. Based on the determined object type or class, the control system 300 can make a determination that identifies the object as unwanted for map generation purposes.

The control system 300 can generate a reference map that excludes the set of unwanted objects and is associated with the position of the robot at the time of the scan (block 430). Generating the reference map for a given area may encompass an initial mapping process or activity, as well as subsequent activities that result in the map of the given area being updated. In particular, some examples recognize that map updating can occur whenever the robot senses a given area in the course of any task or activity, and encounters an object that is unknown or otherwise unexpected for its position.

In performing the image analysis, the control system 300 can utilize a database of models, live data, and/or templates to identify the types, classes, and subclasses of objects. The database of models, live data, and/or templates can also be updated with repeated use of the robot within a given area. According to some examples, the database maintained for image analysis purposes can be updated with objects that a robot encounters in a given area over time. Moreover, the robot can utilize the image database when operating in a given area for purposes other than generating a reference map (e.g., sweeping, vacuuming, delivering packages, etc.). In particular, the robot can maintain a database of unwanted objects, and when the robot travels through a given area and encounters an unexpected object, the control system 300 can perform image analysis to compare the object against a collection of unwanted objects that the robot has previously encountered and classified. To perform such a comparison, the control system 300 can perform object classification and/or recognition (e.g., detecting signature features of the object and comparing them against the signature features of other objects encountered in the given area over time). If the robot does not recognize an unexpected object as one that has previously been identified, the robot can classify the object anew by class (e.g., table or chair) and, based on the determination of the object type, make a determination as to whether the object is unwanted. Thus, examples recognize that the robot can update a reference map of a given area at any time the robot is deployed, identifying newly encountered objects and identifying such objects as unwanted (e.g., dynamic, or static but not persistent) or wanted (e.g., persistently placed) for map generation purposes.
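A minimal sketch of matching a newly encountered object's signature features against a database of previously classified unwanted objects follows. Cosine similarity over fixed-length feature vectors and the 0.9 threshold are hypothetical choices; a deployed system might use learned embeddings instead.

```python
def cosine_similarity(a, b) -> float:
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def matches_known_unwanted(signature, unwanted_db, threshold=0.9) -> bool:
    """True if the signature closely matches any previously stored
    unwanted object's signature."""
    return any(cosine_similarity(signature, stored) >= threshold
               for stored in unwanted_db)
```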

Moreover, in some examples, the control system 300 can update the models, live data, and template images used to designate objects as wanted or unwanted for reference map generation purposes. For example, if the robot repeatedly encounters an object that has been designated as unwanted for reference map generation, but then detects that the object has remained static in its position over an extended period of time, the control system 300 can redesignate the object and include it in an updated version of the map of the given area. The redesignation of the object may coincide with the control system 300 reclassifying the encountered object as an object type different from a previous object-type classification, where the reclassified object type is one known to be static and persistently placed. For example, a robot may initially map a room and identify a table as unwanted based on the table's dimensions and/or its legs (e.g., a small card table). However, if the robot repeatedly encounters those table legs in the same position over an extended period of time, the control system 300 can reclassify the object as a type that is static and persistently placed (e.g., a table fixture). In such cases, the control system 300 can update the reference map of the given area to include the table.

Conversely, if the robot identifies a particular object as being of a static and persistently placed type (e.g., a table) such that the object is included in the map, but subsequently detects that the object has moved, the control system 300 can then reclassify the object as an unwanted type so that it is excluded from the reference map. Alternatively, the control system 300 can designate the object type of the displaced object as unwanted, so that if the robot encounters other objects of similar appearance, those objects will also be designated as unwanted and excluded from the reference map of the given area.

With reference to FIG. 4B, the control system 300 can perform object detection and classification on image data depicting a given area in order to identify unwanted objects for the purpose of generating a reference map of that area (block 460). In performing the object classification, the control system 300 can use a semantic segmentation process or technique (block 450). In such a process, a pixel-by-pixel analysis is performed to segment a detected object depicted in an image into the foreground. Once segmented, the control system 300 can identify a shape or perimeter feature of the segmented object. The control system 300 can then compare the detected shape or perimeter features against a database of models and templates to identify a matching object type.
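As one small step that could follow such pixel-by-pixel segmentation, the sketch below recovers a segmented object's bounding region from a per-pixel label mask. Plain nested lists are used for brevity; a practical system would use array libraries such as NumPy or OpenCV.

```python
def bounding_box(mask, object_id):
    """mask: 2-D list of integer labels, one per pixel. Returns
    (row_min, col_min, row_max, col_max) enclosing every pixel labeled
    object_id, or None if the object is absent."""
    rows = [r for r, row in enumerate(mask) for v in row if v == object_id]
    cols = [c for row in mask for c, v in enumerate(row) if v == object_id]
    if not rows:
        return None
    return (min(rows), min(cols), max(rows), max(cols))

# Example: a 3x4 mask where label 2 marks the segmented object.
mask = [[0, 0, 2, 2],
        [0, 2, 2, 0],
        [0, 0, 0, 0]]
assert bounding_box(mask, 2) == (0, 1, 1, 3)
```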

As described with some examples, the database of models and templates can be based in part on historical data, drawn from images previously captured by the robot that correspond to objects previously processed and designated as wanted or unwanted.

In some examples, the classification scheme can designate an object as one of: static and persistently placed, unwanted, or dynamic. The control system 300 can determine a classification for a newly detected object by comparing the segmented object against template or model images of previously encountered objects. If a newly encountered object is deemed to sufficiently match the appearance of a previously classified object, the classification of that earlier object can be assigned to the newly encountered object. Over time, the control system 300 can reclassify objects based on what the robot senses in a given area.

In some variations, a confidence score is associated with the matched object type, and when multiple object types are possible, the confidence score can be used to select the most appropriate object type. For reference map generation purposes, an object can be designated as unwanted when it meets a threshold confidence level for a detected object type.

The control system 300 can also associate objects of different classifications with different layers (block 470). For example, when objects are classified by permanence, different permanence classifications can be associated with different layers. Still further, the control system 300 can activate different layers so that the reference map depicts only the selected layers corresponding to a given classification (e.g., a corresponding permanence classification, or objects of a user-defined classification). Based on the determined layer, the control system 300 can perform object classification and/or recognition to compare an object against other objects of the same layer and/or type.
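A minimal sketch of such a layered reference map, with one layer per permanence classification and only activated layers depicted, follows; the layer names and record layout are hypothetical.

```python
class LayeredReferenceMap:
    """Reference map whose objects are grouped into layers by
    classification; only activated layers are depicted."""

    def __init__(self):
        self.layers = {"persistent": [], "non_persistent": [], "dynamic": []}
        self.active = {"persistent"}  # default: persistent objects only

    def add(self, layer: str, obj) -> None:
        self.layers[layer].append(obj)

    def activate(self, *layer_names: str) -> None:
        self.active = set(layer_names)

    def visible_objects(self):
        return [obj for name in self.active for obj in self.layers[name]]
```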

FIG. 5 illustrates a method for operating a robot to generate a reference map. A method such as described with the example of FIG. 5 can be implemented, for example, using an example robot such as described with the example of FIG. 1. Accordingly, reference is made to elements of FIG. 1 for the purpose of illustrating suitable components for performing a step or sub-step being described.

With reference to an example of FIG. 5, a robot operates to scan a given area to obtain image data (block 510). For example, the robot 100 can obtain image data using a set of image sensors provided by, for example, one or more two-dimensional cameras (e.g., cameras with wide-angle or fisheye lenses) and/or one or more three-dimensional image sensors (e.g., a LiDAR sensor, a pair of stereo cameras, and/or a camera with a distance sensor).

While performing the scan, the robot 100 can determine its position using a spatial determination resource 152 (block 520). The spatial determination resource 152 can correspond, for example, to a sensing mechanism for detecting a reference point (e.g., a visual marker). The spatial determination resource 152 may include, for example, a motion sensor, such as an accelerometer and/or a gyroscope. Additionally or alternatively, the spatial determination resource 152 may include an odometer. For example, using a corresponding sensing mechanism, the robot 100 can detect its initial position relative to the reference point. The robot 100 can track its own movement through the given area using the odometer for linear distance, and the accelerometer and/or gyroscope for detecting lateral movement and changes in direction. By tracking its own movement with respect to a reference point, the robot 100 can determine its position at the moment a scan is performed.

In some examples, the robot 100 performs image analysis on the image data obtained from the given area to detect a set of unwanted objects depicted by the image data (block 530). Such unwanted objects may correspond to objects with a permanence classification indicating that the individual objects are dynamic or not persistently placed (e.g., an object that could be moved by another object). Accordingly, the image analysis can identify objects that are dynamic in their position, that are not persistent by nature, or that are otherwise unwanted and unsuitable for localization or mapping purposes.

The robot 100 generates a reference map 115 that excludes the set of unwanted objects detected from the image data of the given area (block 540). The reference map can be associated with the position of the robot 100 at the time of the scan. In this way, the reference map 115 can subsequently be used to localize the robot (or another robot) based on the position of the robot 100 at the time of the scan, and on the objects depicted or otherwise represented in the reference map.
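Pulling the steps of blocks 510-540 together, the following self-contained sketch drops detections that are not persistent and stores the survivors together with the scan-time pose; the tuple and record layouts are hypothetical.

```python
def build_reference_map(detections, scan_pose):
    """detections: (label, permanence, boundary) tuples from one scan,
    with permanence in {'persistent', 'non_persistent', 'dynamic'};
    scan_pose: (x, y, heading) relative to the reference point.
    Returns a map record that excludes the unwanted objects."""
    kept = [(label, boundary)
            for label, permanence, boundary in detections
            if permanence == "persistent"]
    return {"scan_pose": scan_pose, "objects": kept}

# Example: the person and wastebasket are excluded; the copier remains.
reference_map = build_reference_map(
    [("person", "dynamic", (5, 5, 7, 9)),
     ("wastebasket", "non_persistent", (1, 1, 2, 2)),
     ("copier", "persistent", (3, 0, 6, 2))],
    scan_pose=(0.0, 0.0, 0.0))
```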

In some examples, the reference map is generated using a simultaneous localization and mapping (SLAM) algorithm or a similar algorithm, such algorithms being used to map an unknown environment while simultaneously and continuously tracking a robot's position within that environment.

While some examples provide for a robot to perform the image analysis to detect the set of unwanted objects, other examples (e.g., as described with FIG. 3) may employ a control system that is separate or remote from the robot 100. Likewise, a separate or remote computer can generate the reference map based on the exclusion of the detected set of unwanted objects.

It is contemplated that the examples described herein extend to the individual elements and concepts described herein, independently of other concepts, ideas, or systems, and that some examples are intended to include various combinations of the many elements recited anywhere in this application. Although examples have been described in detail herein with reference to the accompanying drawings, it is to be understood that these concepts are not limited to those precise examples. Accordingly, it is intended that the scope of these concepts be defined by the appended claims and their equivalents. Furthermore, it is contemplated that a particular feature described either individually or as part of an example can be combined with other individually described features or with parts of other examples, even if such other features and examples make no mention of the particular feature. Thus, the absence of a described combination should not preclude claiming rights to such a combination.

100, 200‧‧‧robot
101‧‧‧(given) area
110, 310‧‧‧memory
112‧‧‧map (generation) logic
115‧‧‧reference map
120, 320‧‧‧processor
130, 300‧‧‧control system
140‧‧‧propulsion mechanism
142‧‧‧interface
144‧‧‧motor
146‧‧‧steering system
150‧‧‧image sensor
152‧‧‧spatial determination resource
201‧‧‧area
202‧‧‧person
204‧‧‧wastebasket
206‧‧‧copier
208‧‧‧wall
210‧‧‧reference point, area
211‧‧‧known position
215‧‧‧current view
312~316‧‧‧instructions
410~430, 450~470, 510~540‧‧‧blocks

FIG. 1 illustrates an example robot for generating a reference map for localization.

FIGS. 2A through 2C illustrate examples of a robot generating a reference map.

FIG. 3 illustrates an example control system for generating a reference map for localization.

FIG. 4A illustrates an example method for generating a reference map for localization.

FIG. 4B illustrates an example method for performing image analysis using semantic segmentation to generate a reference map that excludes objects of the current image that are not wanted as part of the map.

FIG. 5 illustrates an example method for operating a robot to generate a reference map.

Claims (15)

1. A robot, comprising:
a set of image sensors;
a spatial determination resource;
a control system; and
a propulsion mechanism;
wherein the control system:
performs a scan of a given area to obtain image data of the given area using the set of image sensors;
determines, using the spatial determination resource, a position of the robot relative to a reference point at the time of the scan;
performs image analysis on the image data to detect a set of unwanted objects depicted by the image data, the set of unwanted objects being at least one of a dynamic object or an unwanted object of a predetermined class; and
generates a reference map that excludes the set of unwanted objects, the reference map being associated with the position of the robot at the time of the scan.

2. The robot of claim 1, wherein the spatial determination resource includes a movement sensor.

3. The robot of claim 1, wherein the spatial determination resource includes an odometer.

4. The robot of claim 1, wherein the control system repeatedly performs a scan of the given area while operating the propulsion mechanism to move the robot within the given area, the control system determining, for each of the scans, the position of the robot relative to at least one of the reference point or a second reference point.

5. The robot of claim 1, wherein the control system detects multiple objects in the given area, including the set of unwanted objects, and determines a bounding region for each of the detected objects.

6. The robot of claim 1, wherein the control system performs the image analysis using semantic segmentation.

7. The robot of claim 1, wherein the control system performs the image analysis to determine multiple objects, including the set of unwanted objects.

8. The robot of claim 1, wherein the reference map is layered to identify objects of different classes in different layers.

9. A control system, comprising:
a memory to store instructions; and
at least one processor to execute the instructions to:
obtain image data of a given area, the image data obtained from a scan of the given area by multiple sensors provided on a robot;
perform image analysis on the image data to determine a set of unwanted objects, the set of unwanted objects being at least one of a dynamic object or an unwanted stationary object of a predetermined class; and
generate a reference map that excludes the set of unwanted objects, the reference map being associated with a position of the robot at the time of the scan.

10. The control system of claim 9, wherein the at least one processor executes the instructions to repeatedly perform a scan of the given area as the robot moves within the given area.

11. The control system of claim 10, wherein the at least one processor executes the instructions to determine, for each of the scans, the position of the robot relative to at least one of a first reference point or a second reference point.

12. The control system of claim 9, wherein the at least one processor executes the instructions to detect multiple objects in the given area, including the set of unwanted objects, and to determine a bounding region for each of the detected objects.

13. The control system of claim 9, wherein the at least one processor executes the instructions to perform the image analysis using semantic segmentation.

14. The control system of claim 9, wherein the at least one processor executes the instructions to perform the image analysis to determine multiple objects, including the set of unwanted objects.

15. A method for operating a robot, the method comprising:
obtaining image data from a scan of a given area;
performing image analysis on the image data to determine a set of unwanted objects, the set of unwanted objects including at least one of a dynamic object or an unwanted object of a predetermined class; and
generating a reference map that excludes the set of unwanted objects, the reference map being associated with a position of the robot at the time of the scan.
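To make the claimed flow concrete, here is a minimal Python sketch of the loop recited in claims 1 and 15: scan, record the scan-time pose, detect objects, drop the unwanted ones, and integrate the rest into a reference map layered by object class (claim 8). The robot interface (scan(), odometry_pose(), detect_objects(), move()), the detection fields, and the grid resolution are hypothetical stand-ins for the claimed image sensors, spatial determination resource, image analysis, and propulsion mechanism, not an implementation taken from the disclosure.

import math
from dataclasses import dataclass, field

UNWANTED_CLASSES = {"person", "animal"}  # illustrative predetermined classes

@dataclass
class Pose:
    # Assumed return type of robot.odometry_pose(): position and heading
    # relative to the reference point at the time of the scan.
    x: float
    y: float
    theta: float  # radians

@dataclass
class ReferenceMap:
    # One set of occupied grid cells per object class, so that different
    # classes of objects are identified in different layers.
    layers: dict = field(default_factory=dict)

    def add(self, object_class, cell):
        self.layers.setdefault(object_class, set()).add(cell)

def build_reference_map(robot, num_scans=100, cell_size=0.05):
    ref_map = ReferenceMap()
    for _ in range(num_scans):
        image = robot.scan()          # image data for the given area
        pose = robot.odometry_pose()  # robot position at the time of the scan
        for det in robot.detect_objects(image):
            # det is assumed to carry .object_class, .is_dynamic, and
            # sensor-frame polar coordinates .range and .bearing.
            if det.is_dynamic or det.object_class in UNWANTED_CLASSES:
                continue  # unwanted objects are excluded from the map
            # Project the detection into map coordinates from the scan-time pose.
            wx = pose.x + det.range * math.cos(pose.theta + det.bearing)
            wy = pose.y + det.range * math.sin(pose.theta + det.bearing)
            cell = (round(wx / cell_size), round(wy / cell_size))
            ref_map.add(det.object_class, cell)
        robot.move()  # the propulsion mechanism advances the robot between scans
    return ref_map

Because each cell is stored under its object class, a localization routine could match against, say, the wall and furniture layers while ignoring classes that are likely to move.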
TW107138043A 2017-10-31 2018-10-26 Robot, control system and method for operating the robot TWI684136B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
WOPCT/US17/59396 2017-10-31
PCT/US17/59396 2017-10-31
PCT/US2017/059396 WO2019089018A1 (en) 2017-10-31 2017-10-31 Mobile robots to generate reference maps for localization

Publications (2)

Publication Number Publication Date
TW201933177A (en) 2019-08-16
TWI684136B (en) 2020-02-01

Family

ID=66332671

Family Applications (1)

Application Number Title Priority Date Filing Date
TW107138043A TWI684136B (en) 2017-10-31 2018-10-26 Robot, control system and method for operating the robot

Country Status (3)

Country Link
US (1) US11703334B2 (en)
TW (1) TWI684136B (en)
WO (1) WO2019089018A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI736960B (en) * 2019-08-28 2021-08-21 財團法人車輛研究測試中心 Synchronous positioning and mapping optimization method
TWI788253B (en) * 2021-06-30 2022-12-21 台達電子國際(新加坡)私人有限公司 Adaptive mobile manipulation apparatus and method

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3867757A4 (en) * 2018-10-16 2022-09-14 Brain Corporation Systems and methods for persistent mapping of environmental parameters using a centralized cloud server and a robotic network
US20200182623A1 (en) * 2018-12-10 2020-06-11 Zebra Technologies Corporation Method, system and apparatus for dynamic target feature mapping
US11113526B2 (en) * 2019-07-23 2021-09-07 Toyota Research Institute, Inc. Training methods for deep networks
JP2022013117A * 2020-07-03 2022-01-18 OMRON Corporation Route formulation system, mobile robot, route formulation program, and control program for mobile robot
CN112363158B * 2020-10-23 2024-03-12 Zhejiang Huaray Technology Co., Ltd. Pose estimation method for robot, robot and computer storage medium
US11145076B1 (en) * 2020-10-27 2021-10-12 R-Go Robotics Ltd Incorporation of semantic information in simultaneous localization and mapping
CN113096182A * 2021-03-03 2021-07-09 Beijing University of Posts and Telecommunications Method and device for positioning mobile object, electronic equipment and storage medium
CN115602040A * 2021-07-09 2023-01-13 Huawei Technologies Co., Ltd. (CN)

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1804149B1 (en) 2005-12-28 2011-05-18 ABB Research Ltd. Mobile robot
KR100988568B1 2008-04-30 2010-10-18 Samsung Electronics Co., Ltd. Robot and method for building map of the same
US8918209B2 (en) 2010-05-20 2014-12-23 Irobot Corporation Mobile human interface robot
DE102012109004A1 (en) 2012-09-24 2014-03-27 RobArt GmbH Robots and methods for autonomous inspection or processing of floor surfaces
KR101772084B1 2015-07-29 2017-08-28 LG Electronics Inc. Moving robot and controlling method thereof
US9682481B2 (en) 2015-10-26 2017-06-20 X Development Llc Communication of information regarding a robot using an optical identifier
WO2017076928A1 (en) 2015-11-02 2017-05-11 Starship Technologies Oü Method, device and assembly for map generation
US9720415B2 (en) 2015-11-04 2017-08-01 Zoox, Inc. Sensor-based object-detection optimization for autonomous vehicles
US10496766B2 (en) * 2015-11-05 2019-12-03 Zoox, Inc. Simulation system and methods for autonomous vehicles
KR102403504B1 2015-11-26 2022-05-31 Samsung Electronics Co., Ltd. Mobile Robot And Method Thereof
US10265859B2 (en) * 2016-02-09 2019-04-23 Cobalt Robotics Inc. Mobile robot with removable fabric panels
KR102012549B1 * 2017-01-25 2019-08-20 LG Electronics Inc. Method of drawing map by identifying moving object and robot implementing thereof
US10832078B2 (en) * 2017-08-11 2020-11-10 Mitsubishi Electric Research Laboratories, Inc. Method and system for concurrent reconstruction of dynamic and static objects
US10794710B1 (en) * 2017-09-08 2020-10-06 Perceptin Shenzhen Limited High-precision multi-layer visual and semantic map by autonomous units

Also Published As

Publication number Publication date
TWI684136B (en) 2020-02-01
WO2019089018A1 (en) 2019-05-09
US11703334B2 (en) 2023-07-18
US20200300639A1 (en) 2020-09-24

Similar Documents

Publication Publication Date Title
TWI684136B (en) Robot, control system and method for operating the robot
US10198823B1 (en) Segmentation of object image data from background image data
CN108885459B (en) Navigation method, navigation system, mobile control system and mobile robot
US10726264B2 (en) Object-based localization
CN111989537B (en) System and method for detecting human gaze and gestures in an unconstrained environment
US11227434B2 (en) Map constructing apparatus and map constructing method
US11562524B2 (en) Mobile robots to generate occupancy maps
JP6976350B2 (en) Imaging system for locating and mapping scenes, including static and dynamic objects
US11334086B2 (en) Autonomous robots and methods of operating the same
CN113116224B (en) Robot and control method thereof
KR20210029586A (en) Method of slam based on salient object in image and robot and cloud server implementing thereof
US20210405650A1 (en) Robot generating map and configuring correlation of nodes based on multi sensors and artificial intelligence, and moving based on map, and method of generating map
US20210256245A1 (en) Real-time multi-view detection of objects in multi-camera environments
Chen et al. Design and Implementation of AMR Robot Based on RGBD, VSLAM and SLAM
Sun et al. Real-time and fast RGB-D based people detection and tracking for service robots
US20180350216A1 (en) Generating Representations of Interior Space
WO2019207875A1 (en) Information processing device, information processing method, and program
Qian et al. An improved ORB-SLAM2 in dynamic scene with instance segmentation
US20240135686A1 (en) Method and electronic device for training neural network model by augmenting image representing object captured by multiple cameras
KR20240057297A Method and electronic device for training neural network model
Kim et al. Object recognition using smart tag and stereo vision system on pan-tilt mechanism
Jung et al. Visual Positioning System based on Voxel Labeling using Object Simultaneous Localization And Mapping
Song et al. Path-Tracking Control Based on Deep ORB-SLAM2
Singh et al. Efficient deep learning-based semantic mapping approach using monocular vision for resource-limited mobile robots
Gautam et al. Experience based localization in wide open indoor environments

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees