WO2018228256A1 - System and method for determining an indoor task target location by image recognition - Google Patents

System and method for determining an indoor task target location by image recognition

Info

Publication number
WO2018228256A1
WO2018228256A1 (PCT/CN2018/090174, CN2018090174W)
Authority
WO
WIPO (PCT)
Prior art keywords
module
photo
location
camera
indoor
Prior art date
Application number
PCT/CN2018/090174
Other languages
English (en)
Chinese (zh)
Inventor
潘景良
陈灼
李腾
陈嘉宏
高鲁
Original Assignee
炬大科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 炬大科技有限公司 filed Critical 炬大科技有限公司
Publication of WO2018228256A1 publication Critical patent/WO2018228256A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q10/047Optimisation of routes or paths, e.g. travelling salesman problem
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07FCOIN-FREED OR LIKE APPARATUS
    • G07F15/00Coin-freed apparatus with meter-controlled dispensing of liquid, gas or electricity
    • G07F15/003Coin-freed apparatus with meter-controlled dispensing of liquid, gas or electricity for electricity

Definitions

  • The invention relates to the field of systems for determining indoor task target locations.
  • More specifically, the present invention relates to a system and method for determining an indoor task target position by image recognition.
  • Existing mobile robots and other electronic devices on the market use a line-tracking sensor, infrared, or ultrasound to scan a two-dimensional or three-dimensional map of the space they occupy, and move autonomously by random positioning and movement or by collision-and-bounce while performing their other preset operations.
  • They are operated by the user issuing commands through a remote control or through remote control of the base station, and some models can detect obstacles in order to avoid them.
  • The indoor task target position determining system of the present invention can be applied to any kind of mobile robot or electronic device. The robot or device cooperates with the user: from picture information provided by the user, the system determines the moving target position of the robot or device through identification and comparison. This solves the problem that the robot cannot judge the ground conditions, the target location, or the best moving route: the human eye replaces the robot's tracking sensor. The system combines the strengths of humans with the strengths of robots to make up for the various weaknesses of existing mobile robots.
  • The indoor task target position determining system of the invention relies on highly interactive human-computer cooperation. It can improve the working efficiency of the robot while reducing the workload of the user, and human intelligence compensates for the technical limitations of the robot itself.
  • The invention relates to a system for determining an indoor task target position by image recognition, the system comprising an in-memory database module, a camera, and an image processing module. The in-memory database module pre-stores feature-extracted indoor photos and their metadata.
  • The metadata includes location information. The camera photographs the target location and transmits the captured target-location photo to the image processing module. The image processing module preprocesses the target-location photo to extract features, compares the extracted features with the content of the in-memory database module, and determines the position of the target location based on the matched indoor photo and its metadata.
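The comparison step above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the function names (`locate`, `match_score`), the toy two-dimensional descriptors, and the ratio-test threshold are all our own assumptions. It matches a query photo's feature descriptors against each pre-stored photo using a nearest-neighbour ratio test, then returns the location metadata of the best-matching photo:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_score(query_desc, photo_desc, ratio=0.8):
    """Count query descriptors whose nearest neighbour in the stored photo
    passes a Lowe-style ratio test against the second-nearest neighbour."""
    good = 0
    for q in query_desc:
        dists = sorted(euclidean(q, d) for d in photo_desc)
        if len(dists) >= 2 and dists[0] < ratio * dists[1]:
            good += 1
    return good

def locate(query_desc, database):
    """database: list of entries with 'descriptors' and 'metadata' (incl. location)."""
    best = max(database, key=lambda e: match_score(query_desc, e["descriptors"]))
    return best["metadata"]["location"]

# toy in-memory database of two feature-extracted indoor photos
db = [
    {"descriptors": [(0.0, 1.0), (1.0, 0.0)], "metadata": {"location": "kitchen"}},
    {"descriptors": [(5.0, 5.0), (6.0, 5.0)], "metadata": {"location": "hallway"}},
]
print(locate([(5.1, 5.0), (6.0, 5.1)], db))  # hallway
```

Real descriptors (SIFT, SURF) are 64- or 128-dimensional, but the nearest-neighbour-plus-ratio-test structure is the same.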
  • the system of the present invention also includes a wireless transmission module for wirelessly transmitting photos and/or their metadata.
  • system of the present invention further includes a learning module for learning photo features in the in-memory database module and storing the learned information back into the in-memory database module.
  • system of the present invention further includes an encoder for generating location information for the photo.
  • system of the present invention also includes a map location module for mapping and indoor positioning.
  • system of the present invention further includes a data processing module for generating preliminary path information based on the location of the target location in conjunction with the map.
  • the user may also actively add photos and metadata to the in-memory database module, the image processing module may pre-process the added photos to extract features, and the learning module may learn the added photos.
  • The system of the present invention may further include a mobile robot, and one or more of the camera, the in-memory database module, the wireless transmission module, the image processing module, the data processing module, the learning module, the encoder, and the map positioning module may be integrated in the mobile robot.
  • The system of the present invention may further include a smart charging pile, and one or more of the in-memory database module, the wireless transmission module, the image processing module, the data processing module, the learning module, the encoder, and the map positioning module may be integrated in the smart charging pile.
  • the invention also relates to a method for determining an indoor task target position by image recognition, comprising:
  • the camera photographs the target location and transmits the photographed target location photo to the image processing module;
  • the image processing module preprocesses the photo of the target location to extract features;
  • the extracted features are compared with the feature-extracted indoor photos pre-stored in the in-memory database module, wherein the in-memory database module further stores metadata of the indoor photos, the metadata including location information;
  • the location of the target location is determined based on the matching indoor photo and its metadata.
  • The invention also relates to a system for establishing an indoor photo database, the system comprising a camera, an encoder, a distance compensation value calculation module, an image processing module, and an in-memory database module. The camera takes the photos; the encoder records the position of the camera;
  • the distance compensation value calculation module is configured to calculate the distance compensation value;
  • the image processing module is configured to process the photo to extract features;
  • the in-memory database module is configured to store the characterized photo and its position information.
  • the invention also relates to a method of establishing an indoor photo database, comprising:
  • the characterized photo and its location information are stored in the in-memory database module.
  • FIG. 1 is a schematic diagram of a system for determining an indoor mission target position by image recognition in accordance with an embodiment of the present invention.
  • the reference numerals are as follows: mobile phone APP 1, mobile robot or electronic device 2, smart charging pile 3.
  • the mobile robot or electronic device includes a camera 4, a wireless transmission module 5, an ultrasonic sensor 6, a laser sensor 7, an encoder 8, a map positioning module 9, an obstacle avoidance module 10, a path planning module 11, a motion control module 12, and the like.
  • the smart charging station includes a wireless transmission module 13, a memory data module 14, an image processing module 15, a machine learning module 16, a data processing module 17, and the like.
  • The invention relates to a system for determining an indoor task target position by image recognition, the system comprising an in-memory database module, a camera, and an image processing module. The in-memory database module pre-stores feature-extracted indoor photos and their metadata.
  • The metadata includes location information. The camera photographs the target location and transmits the captured target-location photo to the image processing module. The image processing module preprocesses the target-location photo to extract features, compares the extracted features with the content of the in-memory database module, and determines the position of the target location based on the matched indoor photo and its metadata.
  • The metadata may also include any other photo-related information such as, but not limited to, the position of the camera, the direction the camera is facing, the shooting angle, the aperture, the focal length, the ISO sensitivity, the white balance, and the like.
  • the position information of the photo is determined based on the position and distance compensation value when the camera takes the photo.
  • the distance compensation value is defined as the distance between the camera and the subject when the photo was taken.
  • the distance compensation value can be determined by a laser.
  • Alternatively, the distance compensation value can be determined using a fixed shooting focal length (a fixed shooting distance): the camera takes a set of photos at a fixed focal length, the sharp photos are screened out, and the distance between the object in a sharp photo and the camera can then be determined from the focal length.
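The focal-length-to-distance step can be illustrated with the standard pinhole-camera relation. This is a generic sketch, not the patent's formula; the function name and the example numbers (a reference object of known real height, a known sensor size) are our own assumptions:

```python
def distance_from_focal_length(focal_length_mm, real_height_mm,
                               object_height_px, sensor_height_mm,
                               image_height_px):
    """Pinhole model: (height on sensor) / focal length = real height / distance,
    so distance = real_height * focal_length / height_on_sensor."""
    height_on_sensor_mm = object_height_px * sensor_height_mm / image_height_px
    return real_height_mm * focal_length_mm / height_on_sensor_mm

# example: a 2000 mm door spans 1200 of 2400 px on a 4.8 mm sensor, f = 4 mm
d = distance_from_focal_length(4.0, 2000.0, 1200, 4.8, 2400)
print(round(d))  # 3333 (mm)
```

In practice this requires a reference object of known size in the scene; the patent itself only states that the distance follows from the fixed focal length.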
  • the alignment in the image processing module is based on a feature template matching algorithm. In a specific embodiment, the alignment in the image processing module is based on a Scale Invariant Feature Transform (SIFT) algorithm or an Accelerated Robust Feature (SURF) algorithm.
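The patent names SIFT and SURF; as a simpler stand-in for the "feature template matching" idea, the sketch below slides a template over an image and scores each position by normalized cross-correlation (NCC). It is illustrative only — the real algorithms match scale- and rotation-invariant descriptors rather than raw pixel patches:

```python
def mean(vals):
    return sum(vals) / len(vals)

def ncc(patch, template):
    """Normalized cross-correlation of two equally sized 2-D patches (1.0 = identical)."""
    p = [v for row in patch for v in row]
    t = [v for row in template for v in row]
    mp, mt = mean(p), mean(t)
    num = sum((a - mp) * (b - mt) for a, b in zip(p, t))
    den = (sum((a - mp) ** 2 for a in p) * sum((b - mt) ** 2 for b in t)) ** 0.5
    return num / den if den else 0.0

def match_template(image, template):
    """Return the top-left (row, col) of the best-matching window."""
    th, tw = len(template), len(template[0])
    best, best_pos = -2.0, None
    for i in range(len(image) - th + 1):
        for j in range(len(image[0]) - tw + 1):
            patch = [row[j:j + tw] for row in image[i:i + th]]
            s = ncc(patch, template)
            if s > best:
                best, best_pos = s, (i, j)
    return best_pos

img = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 9, 0],
       [0, 0, 0, 0]]
tpl = [[9, 8],
       [7, 9]]
print(match_template(img, tpl))  # (1, 1)
```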
  • the system of the present invention also includes a wireless transmission module for wirelessly transmitting photos, their metadata, and/or other information.
  • the signal transmission modes between the wireless transmission modules include, but are not limited to, Bluetooth, WIFI, ZigBee, infrared, ultrasonic, ultra-wideband, etc., and preferred signal transmission methods are Bluetooth and WIFI.
  • The pre-stored indoor photos are feature-extracted by the image processing module. This characterization can be performed based on, for example, an edge detection algorithm.
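The patent does not specify which edge detector is used; as one common choice, the sketch below applies a Sobel operator and reports the gradient magnitude, which responds strongly at intensity edges:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal gradient
    ky = kx.T                                                    # vertical gradient
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            win = img[i:i + 3, j:j + 3]
            gx, gy = (win * kx).sum(), (win * ky).sum()
            out[i, j] = np.hypot(gx, gy)
    return out

img = np.zeros((5, 5))
img[:, 3:] = 1.0              # vertical step edge between columns 2 and 3
mag = sobel_magnitude(img)
print((mag[:, 1] > mag[:, 0]).all())  # True: the step edge responds, flat regions do not
```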
  • the system of the present invention further includes a learning module for learning photo features in the in-memory database module and storing the learned information back into the in-memory database module.
  • Learning involves calculating and correcting feature weight values, for example through an artificial neural network learning method.
  • the learning is based on a Convolutional Neural Network (CNN).
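The "calculating and correcting feature weight values" step can be illustrated with a toy, single-kernel convolution trained by gradient descent. This is our own minimal example, not the patent's network — a real CNN stacks many such layers with nonlinearities — but it shows weights being corrected to reduce error:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, k):
    """Valid 2-D convolution (correlation) of img with a 3x3 kernel k."""
    h, w = img.shape[0] - 2, img.shape[1] - 2
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (img[i:i + 3, j:j + 3] * k).sum()
    return out

# synthetic task: recover a Laplacian-like target kernel from input/output pairs
target_kernel = np.array([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
img = rng.random((8, 8))
target = conv2d(img, target_kernel)

k = rng.random((3, 3))                       # random initial feature weights

def loss(k):
    return ((conv2d(img, k) - target) ** 2).mean()

loss_before = loss(k)
for _ in range(200):
    err = conv2d(img, k) - target
    grad = np.zeros_like(k)
    for a in range(3):
        for b in range(3):
            # d(mean squared error)/d k[a, b]
            grad[a, b] = 2 * (err * img[a:a + err.shape[0], b:b + err.shape[1]]).mean()
    k -= 0.05 * grad                          # correct the weight values
loss_after = loss(k)
print(loss_after < loss_before)  # True: the corrected weights fit better
```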
  • system of the present invention further includes an encoder that can be integrated with the camera and configured to generate a real-time location of the camera.
  • the system of the present invention also includes a map location module for mapping and indoor positioning.
  • the rendered map is a two-dimensional indoor plan view.
  • the system of the present invention further includes a distance sensor for sensing the contour and/or edge of the room, and the map location module maps the map based on the data recorded by the distance sensor.
  • system of the present invention further includes a data processing module for generating preliminary path information based on the location of the target location in conjunction with the map.
  • the user may also actively add photos and metadata to the in-memory database module, the image processing module may pre-process the added photos to extract features, and the learning module may learn the added photos.
  • The system of the present invention may further include a mobile robot, and one or more of the camera, the in-memory database module, the wireless transmission module, the image processing module, the data processing module, the learning module, the encoder, the map positioning module, and the distance sensor may be integrated in the mobile robot.
  • the mobile robot may also include a path planning module for generating a path, a sensor for sensing an obstacle, an obstacle avoidance module for providing obstacle information, and/or a motion control module for controlling movement of the mobile robot.
  • The system of the present invention may further include a smart charging pile, and one or more of the in-memory database module, the wireless transmission module, the image processing module, the data processing module, the learning module, the encoder, the map positioning module, and the distance sensor may be integrated in the smart charging pile.
  • The smart charging pile can be used to accommodate the power-hungry modules, reducing the power consumption of the other devices.
  • the invention also relates to a method for determining an indoor task target position by image recognition, comprising:
  • the camera photographs the target location and transmits the photographed target location photo to the image processing module;
  • the image processing module preprocesses the photo of the target location to extract features;
  • the extracted features are compared with the feature-extracted indoor photos pre-stored in the in-memory database module, wherein the in-memory database module further stores metadata of the indoor photos, the metadata including location information;
  • the location of the target location is determined based on the matching indoor photo and its metadata.
  • The invention also relates to a system for establishing an indoor photo database, the system comprising a camera, an encoder, a distance compensation value calculation module, an image processing module, and an in-memory database module. The camera takes the photos; the encoder records the position of the camera;
  • the distance compensation value calculation module is configured to calculate the distance compensation value;
  • the image processing module is configured to process the photo to extract features;
  • the in-memory database module is configured to store the characterized photo and its position information.
  • the positional information of the photo is determined based on the position and distance compensation value when the camera takes the photo.
  • the invention also relates to a method of establishing an indoor photo database, comprising:
  • the characterized photo and its location information are stored in the in-memory database module.
  • the encoder can be integrated with the camera and configured to generate a real-time location of the camera.
  • the distance compensation value is defined as the distance between the camera and the subject when the photo is taken.
  • the distance compensation value calculation module may determine the distance compensation value by a laser.
  • The distance compensation value calculation module may determine the distance compensation value by fixing the shooting focal length of the camera: the camera takes a set of photos at a fixed focal length, the distance compensation value calculation module screens out the sharp photos, and the distance between the object in a sharp photo and the camera is then determined from the focal length.
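The "screen out clear photos" step can be sketched with a simple sharpness score. The patent does not say how sharpness is judged; variance of the Laplacian response is one common heuristic, used here purely as an illustrative assumption — a hard edge produces a high score, a defocused ramp a low one:

```python
def laplacian_variance(img):
    """Sharpness score: variance of the 4-neighbour Laplacian over interior pixels."""
    h, w = len(img), len(img[0])
    vals = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            lap = (img[i - 1][j] + img[i + 1][j] + img[i][j - 1] + img[i][j + 1]
                   - 4 * img[i][j])
            vals.append(lap)
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

sharp = [[0, 0, 9, 9]] * 4    # hard edge -> strong Laplacian response
blurry = [[0, 3, 6, 9]] * 4   # smooth ramp (as if defocused) -> weak response
print(laplacian_variance(sharp) > laplacian_variance(blurry))  # True
```

Selecting the burst photo with the highest score gives the "clear" frame used for the focal-length distance estimate.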
  • the whole application system includes: mobile APP 1, mobile robot or electronic device 2, smart charging pile 3.
  • the mobile robot or electronic device includes a camera 4, a wireless signal module 5, an ultrasonic sensor 6, a laser sensor 7, an encoder 8, a map positioning module 9, an obstacle avoidance module 10, a path planning module 11, a motion control module 12, and the like.
  • the smart charging station includes a wireless signal module 13, a memory data module 14, an image processing module 15, a machine learning module 16, a data processing module 17, and the like.
  • the mobile robot or electronic device is guided to a position photographed by the user based on image recognition.
  • the mobile robot or electronic device uses a laser or a camera (such as Visual SLAM) to scan the entire indoor layout.
  • the encoder records the motion line displacement information and angular displacement information of the robot, and draws a map through the map positioning module.
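Integrating the encoder's linear and angular displacement increments into a pose is standard dead reckoning; the sketch below is a minimal illustration of that step (our own simplification — real systems fuse this with the laser and map data):

```python
import math

def integrate(pose, steps):
    """pose = (x, y, heading_rad); steps = [(linear_displacement, angular_displacement), ...].
    Applies each rotation, then advances along the new heading."""
    x, y, th = pose
    for dist, dth in steps:
        th += dth
        x += dist * math.cos(th)
        y += dist * math.sin(th)
    return x, y, th

# drive 1 m forward, turn 90 degrees left, drive 1 m forward
x, y, th = integrate((0.0, 0.0, 0.0), [(1.0, 0.0), (1.0, math.pi / 2)])
print(round(x, 6), round(y, 6))  # 1.0 1.0
```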
  • A camera is mounted on the robot for high-speed photographing and, combined with the position information from the encoder, produces a large number of discrete feature-photo points, where the photo position information is obtained from the encoder data plus a distance compensation value (for example, a fixed shooting distance).
  • The mobile robot or electronic device and the smart charging pile are connected through their respective wireless signal modules to transmit data: map information, positioning information, and the large number of indoor feature photos taken by the robot together with their corresponding location information are passed over a local wireless network (WIFI, Bluetooth, etc.) to the smart charging pile and temporarily stored in its in-memory data module. The photos are then feature-extracted by the image processing module (using an edge detection algorithm or the like), and the machine learning module learns the photo features, calculating and correcting the feature weight values (with an artificial neural network learning method such as a convolutional neural network (CNN)). The processed information is then stored back into the in-memory database module to build a learnable database.
  • The user takes a photo of the location to be cleaned with the mobile phone APP (or takes the photo from the location where cleaning is needed). The mobile phone connects to the charging pile's wireless communication module (WIFI, Bluetooth, etc.), and the photo is sent to the smart charging pile in real time. The image processing and analysis module in the smart charging pile preprocesses the photo and extracts features, then uses a feature template matching algorithm (such as SIFT or SURF) to rapidly compare and analyze them against the photos in the picture library.
  • The optimized template matching algorithm improves matching accuracy across different scales, angles (rotations), and similar positions.
  • The target position to be traveled to is determined from the matched image and its position information.
  • The successfully matched data is processed by the data processing module, which combines it with the map to generate preliminary path information and transmits it back to the robot's path planning module; together with real-time information from the obstacle avoidance module, the optimal path is planned.
  • The path information is then sent to the robot's internal motion control module to start the task.
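The patent does not name a specific planning algorithm; as a minimal stand-in for "combining the map to generate preliminary path information", the sketch below runs breadth-first search on an occupancy grid from the robot's cell to the matched target cell:

```python
from collections import deque

def plan(grid, start, goal):
    """grid: 0 = free cell, 1 = obstacle; returns a shortest 4-connected path
    from start to goal as a list of (row, col) cells, or None if unreachable."""
    h, w = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:                       # walk the predecessor chain back
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cur
                q.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],    # a wall forces a detour through the right column
        [0, 0, 0]]
path = plan(grid, (0, 0), (2, 0))
print(path)  # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

A production planner would use A* with a distance heuristic and replan as the obstacle avoidance module reports new obstacles, but the map-plus-target structure is the same.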
  • the user can assist in adding the location information or interacting with the robot through the APP to help the robot deep learning to improve the success rate of the image comparison.
  • When manually adding location information (a position relative to the charging pile) in the indoor floor map displayed by the APP, if photos were previously recorded at that location, the APP will simultaneously display all relevant photo collections for the location and classify the new photo to that point.
  • the location information manually added by the user is stored in the charging pile memory database module together with the photos taken at that time, enriching the charging pile picture learning database.
  • the charging pile data processing module will plan a preliminary path according to the position information manually input by the user in this task, and transmit it to the robot path planning module.
  • The intelligent learning module in the charging pile uses a machine learning algorithm based on a convolutional neural network (CNN), and also learns the photo features from user feedback or confirmed position information.
  • The obstacle avoidance module provides obstacle information to the path planning module in real time according to the sensor data, and the optimal path is adjusted accordingly.


Abstract

The invention concerns a system and method for determining an indoor task target location by image recognition. The system comprises: an in-memory database module (14), a camera (4) and an image processing module (15). The in-memory database module (14) pre-stores feature-extracted indoor photos and their metadata, the metadata comprising location information. The camera (4) photographs a target location and transmits the captured target-location photo to the image processing module (15); the image processing module (15) preprocesses the target-location photo to extract features, compares the extracted features with the content of the in-memory database module (14), and determines the position of the target location according to the matched indoor photo and its metadata.
PCT/CN2018/090174 2017-06-12 2018-06-07 System and method for determining an indoor task target location by image recognition WO2018228256A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710437074.5A CN108460801A (zh) 2017-06-12 2017-06-12 System and method for determining an indoor task target position by image recognition
CN201710437074.5 2017-06-12

Publications (1)

Publication Number Publication Date
WO2018228256A1 true WO2018228256A1 (fr) 2018-12-20

Family

ID=63220235

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/090174 WO2018228256A1 (fr) 2017-06-12 2018-06-07 System and method for determining an indoor task target location by image recognition

Country Status (2)

Country Link
CN (1) CN108460801A (fr)
WO (1) WO2018228256A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110048486A (zh) * 2019-05-14 2019-07-23 南京信息工程大学 Intelligent multi-point wireless charging device based on mobile phone feature recognition, and implementation method therefor
CN112215892A (zh) * 2020-10-22 2021-01-12 常州大学 Method for monitoring the position and motion path of a field robot

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110309715B (zh) * 2019-05-22 2021-05-25 北京邮电大学 Indoor positioning method, device and system based on deep-learning lamp recognition
CN110334648B (zh) * 2019-07-02 2022-01-11 北京云迹科技有限公司 Charging pile recognition system and method for robots
CN110473256A (zh) * 2019-07-18 2019-11-19 中国第一汽车股份有限公司 Vehicle positioning method and system
CN110631586A (zh) * 2019-09-26 2019-12-31 珠海市一微半导体有限公司 Map construction method based on visual SLAM, navigation system and device
CN111027540A (zh) * 2019-11-08 2020-04-17 深兰科技(上海)有限公司 Method and device for finding a target object
CN111814953B (zh) * 2020-06-16 2024-02-13 上海瀚讯信息技术股份有限公司 Positioning method for a deep convolutional neural network model based on channel pruning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102706344A (zh) * 2012-06-08 2012-10-03 中兴通讯股份有限公司 Positioning processing method and device
CN103067856A (zh) * 2011-10-24 2013-04-24 康佳集团股份有限公司 Geographic location positioning method and system based on image recognition
CN104657389A (zh) * 2013-11-22 2015-05-27 高德软件有限公司 Positioning method, system and mobile terminal
CN106289263A (zh) * 2016-08-25 2017-01-04 乐视控股(北京)有限公司 Indoor navigation method and device
CN207051978U (zh) * 2017-06-12 2018-02-27 炬大科技有限公司 System for determining an indoor task target position by image recognition

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101924992B (zh) * 2010-07-30 2013-11-20 中国电信股份有限公司 Method, system and device for obtaining scene information through a mobile terminal
CN104748738B (zh) * 2013-12-31 2018-06-15 深圳先进技术研究院 Indoor positioning and navigation method and system
CN105841687B (zh) * 2015-01-14 2019-12-06 上海智乘网络科技有限公司 Indoor positioning method and system
CN105352508A (zh) * 2015-10-22 2016-02-24 深圳创想未来机器人有限公司 Robot positioning and navigation method and device
CN106020201B (zh) * 2016-07-13 2019-02-01 广东奥讯智能设备技术有限公司 Mobile robot 3D navigation and positioning system, and navigation and positioning method
CN106218434A (зh) * 2016-08-26 2016-12-14 安徽能通新能源科技有限公司 Intelligent charging pile and application method thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103067856A (zh) * 2011-10-24 2013-04-24 康佳集团股份有限公司 Geographic location positioning method and system based on image recognition
CN102706344A (zh) * 2012-06-08 2012-10-03 中兴通讯股份有限公司 Positioning processing method and device
CN104657389A (zh) * 2013-11-22 2015-05-27 高德软件有限公司 Positioning method, system and mobile terminal
CN106289263A (zh) * 2016-08-25 2017-01-04 乐视控股(北京)有限公司 Indoor navigation method and device
CN207051978U (zh) * 2017-06-12 2018-02-27 炬大科技有限公司 System for determining an indoor task target position by image recognition

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110048486A (zh) * 2019-05-14 2019-07-23 南京信息工程大学 Intelligent multi-point wireless charging device based on mobile phone feature recognition, and implementation method therefor
CN112215892A (zh) * 2020-10-22 2021-01-12 常州大学 Method for monitoring the position and motion path of a field robot
CN112215892B (zh) * 2020-10-22 2024-03-12 常州大学 Method for monitoring the position and motion path of a field robot

Also Published As

Publication number Publication date
CN108460801A (zh) 2018-08-28

Similar Documents

Publication Publication Date Title
WO2018228256A1 (fr) System and method for determining an indoor task target location by image recognition
CN108885459B (zh) Navigation method, navigation system, movement control system and mobile robot
CN207051978U (zh) System for determining an indoor task target position by image recognition
US10102429B2 (en) Systems and methods for capturing images and annotating the captured images with information
US11400600B2 (en) Mobile robot and method of controlling the same
CN109890573B (zh) Control method and device for mobile robot, mobile robot and storage medium
CN109074083B (zh) Movement control method, mobile robot and computer storage medium
CN106780608B (zh) Pose information estimation method and device, and movable apparatus
WO2020113452A1 (fr) Monitoring method and device for mobile target, monitoring system and mobile robot
TWI684136B (zh) Robot, control system, and method for operating the robot
WO2019232804A1 (fr) Software updating method and system, and mobile robot and server
WO2019001237A1 (fr) Mobile electronic device, and method in mobile electronic device
US20190184569A1 (en) Robot based on artificial intelligence, and control method thereof
WO2019019819A1 (fr) Mobile electronic device and method for processing tasks in a task region
WO2018228254A1 (fr) Mobile electronic device and method for use in a mobile electronic device
KR20200027087A (ko) Robot and control method therefor
WO2018228258A1 (fr) Mobile electronic device and associated method
Grewal et al. Autonomous wheelchair navigation in unmapped indoor environments
KR102147210B1 (ко) Control method of an artificial-intelligence mobile robot
CN206833252U (zh) Mobile electronic device
Shi et al. Fuzzy dynamic obstacle avoidance algorithm for basketball robot based on multi-sensor data fusion technology
JP7354528B2 (ja) Autonomous mobile device, and method and program for detecting dirt on a lens of the autonomous mobile device
Lu et al. Interactive Motion Planning for Autonomous Robotic Photo Taking
KR102483779B1 (ко) Deep-learning-based autonomous driving cart and control method therefor
WO2022089548A1 (fr) Service robot and control method therefor, and mobile robot and control method therefor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18816996

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18816996

Country of ref document: EP

Kind code of ref document: A1