WO2022000757A1 - AR-based robot Internet of Things interaction method and apparatus, and medium - Google Patents

AR-based robot Internet of Things interaction method and apparatus, and medium Download PDF

Info

Publication number
WO2022000757A1
WO2022000757A1 (PCT/CN2020/112502, CN2020112502W)
Authority
WO
WIPO (PCT)
Prior art keywords
iot
terminal device
task
robot
interaction method
Prior art date
Application number
PCT/CN2020/112502
Other languages
French (fr)
Chinese (zh)
Inventor
王龙龙
高明
金长新
Original Assignee
济南浪潮高新科技投资发展有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 济南浪潮高新科技投资发展有限公司 filed Critical 济南浪潮高新科技投资发展有限公司
Publication of WO2022000757A1 publication Critical patent/WO2022000757A1/en

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/20 Drawing from basic elements, e.g. lines or circles
    • G06T11/206 Drawing of charts or graphs
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q10/047 Optimisation of routes or paths, e.g. travelling salesman problem
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Definitions

  • the present application relates to the field of information technology, and in particular, to an AR-based robot Internet of Things interaction method, device, and medium.
  • Augmented Reality (AR), also known as augmented reality, is a technology that calculates the position and angle of the camera in real time and superimposes text, images, videos, and 3D models on the real environment. By sensing the surrounding environment and predicting the direction of light, virtual objects are accurately "placed" in the real environment and integrated with it with the help of display devices, presenting the user with a new environment that has realistic sensory effects.
  • the Internet of Things (IOT) technology refers to the connection of any object with the network through information sensing equipment and according to the agreed protocol, and the object exchanges and communicates with the information transmission medium to realize intelligent identification, positioning, tracking, monitoring and other functions.
  • IOT devices, that is, Internet-of-Everything network devices.
  • IOT devices have the characteristics of low power consumption, wide coverage, multiple connections, and low cost, and are widely used in intelligent transportation, smart home, public safety and other fields.
  • a robot is an intelligent machine that can work semi-autonomously or fully autonomously, with basic characteristics such as perception, decision-making, and execution.
  • With the development of Internet of Things technology and robotics, the concept of the Robot Internet of Things has been proposed: by connecting robots with IOT devices, the intelligence of robots can be expanded to achieve intelligent decision-making and the manipulation of real objects. At this stage, however, the robot and the IOT device operate independently and lack interaction, and during interaction an ROS operating system and data-collection devices such as lidar or a camera must be configured on the robot side to draw a scene map usable for navigation, which makes the hardware composition of the robot very complex and the hardware cost very high.
  • the purpose of this application is to provide an AR-based robot IoT interaction method, device and medium.
  • an AR-based robot Internet of Things interaction method including:
  • the navigation path is drawn by the terminal device according to a scene map, and the scene map is set according to the scene picture collected by the camera in the terminal device and the data of the IMU sensor in the terminal device;
  • the device information of the IOT device is stored in the form of a two-dimensional code, so that the terminal device scans the two-dimensional code and registers the IOT device in the scene map drawn by the terminal device.
  • the device information specifically includes function information and location information of the IOT device.
  • Preferably, the method further includes: continuing to execute the IOT task after charging is completed.
  • Preferably, the method further includes: displaying, through the terminal device, the execution progress of the IOT task and the IOT device to be accessed.
  • Preferably, the method further includes: storing the IOT task.
  • an AR-based robot Internet of Things interactive device including:
  • the acquisition module is used to acquire the navigation path and the IOT task, wherein the navigation path is drawn by the terminal device according to the scene map, and the scene map is set according to the scene picture collected by the camera in the terminal device and the data of the IMU sensor in the terminal device;
  • a receiving module, configured to receive the current location determined by the camera;
  • a travel module configured to travel according to the navigation path from the current location to complete the IOT task.
  • the present application also provides an AR-based robot Internet of Things interaction device, including a memory for storing computer programs;
  • the processor is configured to implement the steps of the AR-based robot Internet of Things interaction method when executing the computer program.
  • the present application also provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the AR-based robot Internet of Things interaction method as described above are implemented.
  • the acquired navigation path is drawn by the terminal device according to the scene map, and the scene map is set according to the scene picture collected by the camera in the terminal device and the data of the inertial measurement unit (IMU) sensor in the terminal device, so the robot can complete the IOT task according to the pre-planned navigation path after acquiring the IOT task, without configuring a lidar or camera on the robot side; using the built-in camera and IMU sensor of the terminal device allows the hardware devices to be reused, which simplifies the hardware structure of the robot and reduces the hardware cost.
  • in addition, there is no need to configure the ROS operating system on the robot side, and path planning is transferred to the terminal device. Because the user operates the terminal device daily, the operation is more proficient, convenient, and quick, which improves the user experience.
  • FIG. 1 is a flowchart of an AR-based robot Internet of Things interaction method provided by an embodiment of the present application
  • FIG. 2 is a flowchart of another AR-based robot Internet of Things interaction method provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of a robot task line provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an AR-based robot Internet of Things interaction device provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of another AR-based robot Internet of Things interaction device according to an embodiment of the present application.
  • the core of this application is to provide an AR-based robot IoT interaction method, device, and medium, wherein the AR-based robot IoT interaction method utilizes the built-in camera and IMU sensor of the terminal device to realize the repeated use of hardware devices, which simplifies the hardware composition of the robot and reduces the hardware cost.
  • the terminal device mentioned in the present application is a mobile or fixed networked computing device, such as a smartphone or tablet computer, that includes a camera and an IMU sensor.
  • an AR application is installed in the terminal device.
  • the AR application can be developed based on Google's mobile ARCore SDK, Apple's ARKit SDK, or Unity's AR Foundation SDK, which does not affect the implementation of this technical solution.
  • the terminal device can run an ordinary Android system or the iOS system, the robot is a programmable control robot, and the IOT devices in the surrounding environment include 3D printers, humidity sensors, etc. The terminal device, robot, and IOT devices communicate within the same local area network.
  • the AR-based robot IoT interaction method mentioned in this application can be implemented by a micro control unit (MCU) or other types of control devices in the robot, which does not affect the implementation of the technical solution.
  • MCU micro control unit
  • FIG. 1 is a flowchart of an AR-based robot IoT interaction method provided by an embodiment of the present application. As shown in Figure 1, the method includes:
  • the navigation path is drawn by the terminal device according to the scene map, and the scene map is set according to the scene picture collected by the camera in the terminal device and the data of the IMU sensor in the terminal device.
  • during the movement of the terminal device, its built-in camera captures the surrounding scene images, and the AR application marks feature points in the scene images and tracks the movement of these points over time.
  • combining the movement of these points with data from the IMU sensor, the AR application estimates the camera's position and orientation as the terminal device moves, and detects the floor plane so that virtual content can be rendered from the correct perspective and overlaid onto the scene picture acquired by the camera, restoring the coordinate system of the real environment.
  • the user can plan the navigation path in the path planning interface of the AR application.
  • the path can be planned by hand-drawing and hand-held movement.
  • the hand-drawn way is to add the robot's navigation path by hand-drawn curves on the screen.
  • internally, for each point drawn by the user's finger, a ray is cast from the plane of the terminal device's screen and projected onto the target plane; connecting the projected points into a line completes the path drawing.
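The ray-casting step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes a pinhole camera model with hypothetical intrinsics, a camera-to-world pose (R, t) supplied by the AR framework, and a horizontal floor plane at world height y = 0. All function and parameter names are illustrative.

```python
def _matvec(R, v):
    """Multiply a 3x3 rotation matrix (nested lists) by a 3-vector."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def screen_point_to_floor(u, v, intr, R, t, floor_y=0.0):
    """Cast a ray through screen pixel (u, v) and intersect it with the
    horizontal floor plane y == floor_y in world coordinates.

    intr: dict with pinhole intrinsics fx, fy, cx, cy (assumed model).
    R, t: rotation (camera -> world) and camera position.
    Returns the 3D intersection point, or None if the ray misses the floor.
    """
    # Ray direction in camera coordinates (z forward).
    d_cam = [(u - intr["cx"]) / intr["fx"],
             (v - intr["cy"]) / intr["fy"],
             1.0]
    d = _matvec(R, d_cam)          # ray direction in the world frame
    if abs(d[1]) < 1e-9:           # ray parallel to the floor plane
        return None
    s = (floor_y - t[1]) / d[1]
    if s <= 0:                     # intersection behind the camera
        return None
    return [t[k] + s * d[k] for k in range(3)]

def draw_path(screen_points, intr, R, t):
    """Project each finger point onto the floor and connect the results
    into a polyline -- the drawn navigation path."""
    pts = (screen_point_to_floor(u, v, intr, R, t) for u, v in screen_points)
    return [p for p in pts if p is not None]
```

Points whose rays do not hit the floor (e.g. strokes above the horizon) are simply dropped, which mirrors the idea that only points projectable onto the target plane contribute to the path.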
  • in the hand-held movement mode, the user holds the terminal device and walks along the expected navigation path, and the terminal device records the entire path. Both methods have their own advantages: the hand-drawn method is suitable for small areas and is convenient and quick, while the hand-held movement method is suitable for larger work areas and is more direct.
  • the user places the terminal device at a designated position on the robot; the position may be a slot or a platform, which is not limited in this application.
  • the robot communicates with the terminal device through the local area network, and obtains the IOT task from the terminal device.
  • S11 Receive the current location determined by the camera.
  • S12 Start from the current location and travel according to the navigation path to complete the IOT task.
  • the robot obtains information such as its current location and current posture through the built-in camera of the terminal device, and then travels along the set navigation path to sequentially visit the IOT devices and complete the corresponding IOT tasks.
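As a rough sketch of this traveling step, the following turns a 2D navigation path into simple turn-and-drive commands. It is an illustration under assumed names, not the patent's control code: the robot is modeled as a point with a heading, starting at the camera-determined position and assumed to initially face the +x axis.

```python
import math

def follow_path(start, waypoints, reach_tol=0.05):
    """Yield (turn_angle, distance) commands that move a point robot from
    `start` through each 2D waypoint in order.

    turn_angle is the relative heading change in radians, normalized to
    [-pi, pi); distance is the straight-line drive to the waypoint.
    """
    x, y = start
    heading = 0.0  # assumed initial orientation: facing +x
    for wx, wy in waypoints:
        dx, dy = wx - x, wy - y
        dist = math.hypot(dx, dy)
        if dist < reach_tol:           # waypoint effectively reached
            continue
        target = math.atan2(dy, dx)
        # Normalize the turn so the robot never rotates more than pi.
        turn = (target - heading + math.pi) % (2 * math.pi) - math.pi
        yield turn, dist
        heading, x, y = target, wx, wy
```

A real robot would re-query the camera-derived pose between commands rather than dead-reckoning as this sketch does.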
  • in the AR-based robot Internet of Things interaction method provided by the embodiments of the present application, the acquired navigation path is drawn by the terminal device according to the scene map, and the scene map is set according to the scene picture collected by the camera in the terminal device and the data of the IMU sensor in the terminal device. Therefore, the robot can complete the IOT task according to the pre-planned navigation path after obtaining the IOT task, without configuring a lidar or camera on the robot side; using the built-in camera and IMU sensor of the terminal device realizes the repeated use of hardware devices, simplifies the hardware structure of the robot, and reduces the hardware cost.
  • the device information of the IOT device is stored in the form of a two-dimensional code, so that the terminal device scans the two-dimensional code and registers the IOT device in the scene map drawn by the terminal device.
  • the device information specifically includes function information and location information of the IOT device.
  • the device information of the IOT device specifically includes its IP address, interaction protocol, and location information. This information can be stored in the form of a QR code or barcode, made into a sticker, and attached to the IOT device. After scanning the code, the terminal device obtains the corresponding information, registers the IOT device on the established scene map, and generates a virtual icon at the device's location. The user clicks the virtual icon to pop up the function menu, and a virtual guide route pointing to the device is generated in front of it to form a visual guide.
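A minimal sketch of the scan-and-register flow might look like this. The JSON payload format, field names (`name`, `ip`, `protocol`, `location`), and class names are assumptions for illustration; the description only specifies that the IP address, interaction protocol, and location are encoded. Generating the actual QR image would require a library such as `qrcode`, which is omitted here.

```python
import json

def make_qr_payload(device):
    """Serialize the device info that the QR sticker would encode.
    Raises if a required field (assumed schema) is missing."""
    required = {"name", "ip", "protocol", "location"}
    missing = required - device.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return json.dumps(device, sort_keys=True)

class SceneMap:
    """Minimal registry of IOT devices anchored in the AR scene map."""
    def __init__(self):
        self.devices = {}

    def register_from_scan(self, payload):
        """Parse a scanned payload and anchor a virtual icon at the
        device's reported location; returns the device name."""
        info = json.loads(payload)
        self.devices[info["name"]] = {
            "icon_at": tuple(info["location"]),
            "ip": info["ip"],
            "protocol": info["protocol"],
        }
        return info["name"]
```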
  • the AR-based robot Internet of Things interaction method provided by the embodiments of the present application stores the device information of the IOT device in the form of a two-dimensional code, so that the user can obtain the relevant information directly by scanning; the operation is convenient and fast, which improves the user experience.
  • FIG. 2 is a flowchart of another AR-based robot IoT interaction method provided by an embodiment of the present application. As shown in Figure 2, on the basis of the above embodiment, the method further includes:
  • the planning of the charging path is consistent with the planning of the navigation path in the above embodiment; refer to the description of the navigation path, which is not repeated here.
  • S15 Determine whether the current power is less than the set value, if yes, go to S16, if not, go back to S14.
  • S16 Interrupt the IOT task and perform charging according to the charging path to the charging location.
  • while completing the task, the robot monitors its own state at all times. When the battery level falls below the set value, the robot requests to temporarily interrupt the task, goes to a nearby charging point for charging, and continues to execute the unfinished task after charging is completed.
  • the purpose of the set value is to interrupt the IOT task when the robot's battery level drops to this value, so the set value can be set to a fixed value.
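The interrupt-and-resume logic of steps S15/S16 can be sketched as a simple loop. This is a schematic model, not the patent's firmware: each task step is assumed to cost one unit of battery, and `charge_level` stands in for the level restored at the charging point.

```python
def run_task(steps, battery, set_value, charge_level):
    """Execute task steps in order; when the battery drops below
    set_value, interrupt, charge, and resume the unfinished steps.

    Returns the log of actions, with "charge" marking each interruption.
    """
    log = []
    for step in steps:
        if battery < set_value:        # S15: power below the set value
            log.append("charge")       # S16: go charge, then resume
            battery = charge_level
        log.append(step)
        battery -= 1                   # assumed cost: 1 unit per step
    return log
```

The key property is that interrupted steps are not lost: execution resumes at the exact step where charging was triggered.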
  • the method also includes:
  • the data generated by the robot during the execution of the IOT task is automatically uploaded to the terminal device, and the entire task execution process is recorded, which is convenient for users to analyze and optimize the task later.
  • the method also includes:
  • FIG. 3 is a schematic diagram of a task line of a robot according to an embodiment of the present application.
  • the present application also provides a task management method. All tasks are connected in series by a task line.
  • the task line represents a task flow and displays the current robot task progress and added tasks.
  • the task line can display the current task progress and the IOT devices to be accessed in real time. Users can edit the path, add or delete IOT tasks, and set the task completion time and number of repetitions.
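A minimal data structure for such a task line might look like the following; the class and field names are illustrative, not from the patent, and the terminal device's AR rendering is reduced to a plain progress dictionary.

```python
class TaskLine:
    """Tasks connected in series into one flow, with progress display,
    editing (add/delete), and a per-task repetition count."""

    def __init__(self):
        self.tasks = []   # list of dicts: {"name", "repeats"}
        self.done = 0     # index of the next task to execute

    def add(self, name, repeats=1):
        self.tasks.append({"name": name, "repeats": repeats})

    def delete(self, name):
        self.tasks = [t for t in self.tasks if t["name"] != name]
        self.done = min(self.done, len(self.tasks))

    def complete_next(self):
        if self.done < len(self.tasks):
            self.done += 1

    def progress(self):
        """What the terminal device would display: current progress plus
        the IOT tasks still to be accessed."""
        return {"done": self.done,
                "total": len(self.tasks),
                "to_visit": [t["name"] for t in self.tasks[self.done:]]}
```

The `repeats` field corresponds to the repetition count the user can set; a scheduler would re-enqueue a task that many times before advancing.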
  • the user can simulate the task process in the AR application of the terminal device before executing the task, and the whole process is presented on the terminal device interface in an augmented reality manner.
  • the AR-based robot IoT interaction method provided by the embodiments of the present application can display the IOT task execution progress through the terminal device, which facilitates the user to understand the task progress and edit the task in time, and improves the user experience.
  • the method also includes:
  • the IOT task is stored.
  • the user when encountering repetitive tasks, such as moving the printed model of the 3D printer on the assembly line to the sorting point, the user can set a task repetition signal for the task.
  • the IOT task will then be stored and, according to the signal, executed for the specified number of runs or repeated in a loop.
  • because the robot can repeatedly execute a given task according to the user's needs, the AR-based robot IoT interaction method provided by the embodiments of the present application enhances the interaction between the robot and the surrounding IOT devices, thereby expanding the ability of humans to interact with the surrounding environment.
  • the AR-based robot IoT interaction method has been described in detail above, and the present application also provides corresponding embodiments of the AR-based robot IoT interaction device. It should be noted that this application describes the device embodiments from two perspectives: one based on functional modules and the other based on hardware.
  • FIG. 4 is a schematic structural diagram of an AR-based robot Internet of Things interaction device according to an embodiment of the present application. As shown in Figure 4, based on the perspective of functional modules, the device includes:
  • the first acquisition module 10 is used to acquire the navigation path and the IOT task, wherein the navigation path is drawn by the terminal device according to the scene map, and the scene map is set according to the scene picture collected by the camera in the terminal device and the data of the IMU sensor in the terminal device.
  • the receiving module 11 is used for receiving the current location determined by the camera.
  • the traveling module 12 is used for traveling according to the navigation path from the current location to complete the IOT task.
  • it also includes:
  • the second acquisition module is used to acquire the charging path.
  • the detection module is used to detect the current power.
  • the judgment module is used to judge whether the current power is less than the set value.
  • the charging module is used to interrupt the IOT task and perform charging according to the charging path to the charging location.
  • the execution module is used to continue to execute the IOT task after charging is completed.
  • in the AR-based robot Internet of Things interaction device provided by the embodiments of the present application, the acquired navigation path is drawn by the terminal device according to the scene map, and the scene map is set according to the scene picture collected by the camera in the terminal device and the data of the IMU sensor in the terminal device. Therefore, the robot can complete the IOT task according to the pre-planned navigation path after obtaining the IOT task, without configuring a lidar, camera, or other devices on the robot side; the built-in camera and IMU sensor of the terminal device realize the repeated use of hardware devices, which simplifies the hardware structure of the robot and reduces the hardware cost.
  • in addition, there is no need to configure the ROS operating system on the robot side, and path planning is transferred to the terminal device. Because the user operates the terminal device daily, the operation is more proficient, convenient, and quick, which improves the user experience.
  • FIG. 5 is a structural diagram of an AR-based robot Internet of Things interaction device provided by another embodiment of the present application. As shown in FIG. 5 , based on the hardware structure, the device includes: a memory 20 for storing computer programs;
  • the processor 21 is configured to implement the steps of the AR-based robot Internet of Things interaction method in the foregoing embodiment when executing the computer program.
  • the processor 21 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • the processor 21 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array).
  • the processor 21 may also include a main processor and a co-processor.
  • the main processor is a processor used to process data in the wake-up state, also called a CPU (Central Processing Unit); the co-processor is a low-power processor used to process data in a standby state.
  • the processor 21 may be integrated with a GPU (Graphics Processing Unit), which is used for rendering and drawing the content that needs to be displayed on the display screen.
  • the processor 21 may further include an AI (Artificial Intelligence, artificial intelligence) processor, where the AI processor is used to process computing operations related to machine learning.
  • AI Artificial Intelligence, artificial intelligence
  • Memory 20 may include one or more computer-readable storage media, which may be non-transitory. Memory 20 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash storage devices. In this embodiment, the memory 20 is at least used to store the following computer program 201, which, after being loaded and executed by the processor 21, can implement the relevant steps of the AR-based robot Internet of Things interaction method disclosed in any of the foregoing embodiments. In addition, the resources stored in the memory 20 may also include an operating system 202, data 203, etc., and the storage mode may be short-term or permanent. The operating system 202 may include Windows, Unix, Linux, and the like. The data 203 may include, but is not limited to, location information of the IOT device.
  • the bus 22 may also be a peripheral component interconnect standard (peripheral component interconnect, referred to as PCI) bus or an extended industry standard architecture (extended industry standard architecture, referred to as EISA) bus or the like.
  • PCI peripheral component interconnect
  • EISA extended industry standard architecture
  • the bus can be divided into address bus, data bus, control bus and so on. For ease of presentation, only one thick line is used in FIG. 5, but it does not mean that there is only one bus or one type of bus.
  • FIG. 5 does not constitute a limitation on the AR-based robot Internet of Things interaction device, which may include more or fewer components than those shown.
  • the AR-based robot Internet of Things interaction device includes a memory and a processor.
  • when the processor executes a program stored in the memory, it can implement the following method: obtaining a navigation path, where the navigation path is drawn by a terminal device according to a scene map, and the scene map is set according to the scene picture collected by the camera in the terminal device and the data of the IMU sensor in the terminal device. The robot can therefore complete the IOT task according to the pre-planned navigation path after acquiring the IOT task, without configuring a lidar or camera on the robot side; the built-in camera and IMU sensor of the terminal device can be reused, which simplifies the hardware structure of the robot and reduces the hardware cost.
  • the present application also provides an embodiment corresponding to a computer-readable storage medium.
  • a computer program is stored on the computer-readable storage medium, and when the computer program is executed by the processor, the steps described in the foregoing method embodiments are implemented.
  • when the methods in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium.
  • based on this understanding, the technical solutions of the present application, in essence or in the parts that contribute to the prior art, or in whole or in part, can be embodied in the form of a software product; the computer software product is stored in a storage medium and executes all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, removable hard disk, read-only memory (ROM), random access memory (RAM), magnetic disk, or optical disk.

Abstract

Disclosed are an AR-based robot Internet of Things (IOT) interaction method and apparatus, and a medium. The method comprises the step of obtaining a navigation path. Because the obtained navigation path is drawn by a terminal device according to a scene map, and the scene map is set according to a scene picture acquired by a camera in the terminal device and data of an IMU sensor in the terminal device, a robot can complete an IOT task according to the pre-planned navigation path after obtaining the IOT task. Apparatuses such as a laser radar or a camera do not need to be configured at the robot end; the built-in camera and IMU sensor of the terminal device allow hardware apparatuses to be reused, simplifying the hardware composition of the robot and reducing hardware costs. In addition, an ROS operating system does not need to be configured at the robot end, and path planning is transferred to the terminal device. Because the user uses the terminal device in daily life, operation is more proficient, convenient, and rapid, and the user experience is improved.

Description

An AR-based robot Internet of Things interaction method, apparatus, and medium

Technical Field

The present application relates to the field of information technology, and in particular to an AR-based robot Internet of Things interaction method, apparatus, and medium.

Background
Augmented Reality (AR), also known as augmented reality, is a technology that calculates the position and angle of the camera in real time and superimposes text, images, videos, and 3D models on the real environment. By sensing the surrounding environment and predicting the direction of light, virtual objects are accurately "placed" in the real environment and integrated with it with the help of display devices, presenting the user with a new environment that has realistic sensory effects.

The Internet of Things (IOT) technology refers to connecting any object to the network through information sensing equipment according to agreed protocols; the objects exchange information and communicate through information transmission media to realize intelligent identification, positioning, tracking, monitoring, and other functions. IOT devices, that is, Internet-of-Everything network devices, have the characteristics of low power consumption, wide coverage, many connections, and low cost, and are widely used in intelligent transportation, smart home, public safety, and other fields.

A robot is an intelligent machine that can work semi-autonomously or fully autonomously, with basic characteristics such as perception, decision-making, and execution.

With the development of Internet of Things technology and robotics, the concept of the Robot Internet of Things has been proposed: by connecting robots with IOT devices, the intelligence of robots can be expanded to achieve intelligent decision-making and the manipulation of real objects. At this stage, however, robots and IOT devices operate independently and lack interaction, and during interaction an ROS operating system and data-collection devices such as lidar or a camera must be configured on the robot side to draw a scene map usable for navigation, which makes the hardware composition of the robot very complex and the hardware cost very high.

In view of the above prior art, finding a method that can simplify the hardware composition of the robot while realizing barrier-free human-machine interaction is an urgent problem to be solved by those skilled in the art.
SUMMARY OF THE INVENTION
The purpose of this application is to provide an AR-based robot IOT interaction method, apparatus, and medium.
To solve the above technical problem, this application provides an AR-based robot IOT interaction method, comprising:
acquiring a navigation path and an IOT task, wherein the navigation path is drawn by a terminal device according to a scene map, and the scene map is built from scene images captured by a camera in the terminal device and data from an IMU sensor in the terminal device;
receiving a current position determined by the camera; and
traveling from the current position along the navigation path to complete the IOT task.
Preferably, the device information of an IOT device is stored in the form of a two-dimensional code, so that after the terminal device scans the code it registers the IOT device in the scene map drawn by the terminal device.
Preferably, the device information specifically includes function information and position information of the IOT device.
Preferably, the method further comprises:
acquiring a charging path;
detecting the current battery level;
determining whether the current battery level is below a set value, and if so, interrupting the IOT task and traveling along the charging path to a charging location to charge; and
resuming the IOT task after charging is complete.
Preferably, the method further comprises:
writing the process of executing the IOT task into a task record.
Preferably, the method further comprises:
displaying, through the terminal device, the execution progress of the IOT task and the IOT devices to be visited.
Preferably, the method further comprises:
storing the IOT task when a task-repetition signal is received.
To solve the above technical problem, this application further provides an AR-based robot IOT interaction apparatus, comprising:
an acquisition module, configured to acquire a navigation path and an IOT task, wherein the navigation path is drawn by a terminal device according to a scene map, and the scene map is built from scene images captured by a camera in the terminal device and data from an IMU sensor in the terminal device;
a receiving module, configured to receive a current position determined by the camera; and
a traveling module, configured to travel from the current position along the navigation path to complete the IOT task.
To solve the above technical problem, this application further provides an AR-based robot IOT interaction apparatus, comprising a memory for storing a computer program;
and a processor, configured to implement, when executing the computer program, the steps of the AR-based robot IOT interaction method described above.
To solve the above technical problem, this application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the AR-based robot IOT interaction method described above.
In the AR-based robot IOT interaction method provided by this application, the acquired navigation path is drawn by the terminal device according to a scene map, and the scene map is built from the scene images captured by the terminal device's camera and the data of its inertial measurement unit (IMU) sensor. After acquiring an IOT task, the robot can therefore complete it along the pre-planned navigation path without lidar, cameras, or similar devices configured on the robot itself: the camera and IMU sensor built into the terminal device are reused, which simplifies the robot's hardware composition and reduces hardware cost. In addition, there is no need to configure the ROS operating system on the robot; path planning is handed over to the terminal device, and because users operate their terminal devices every day, the operation is more familiar, convenient, and fast, improving the user experience.
DESCRIPTION OF THE DRAWINGS
To describe the embodiments of this application more clearly, the drawings required by the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of an AR-based robot IOT interaction method provided by an embodiment of this application;
FIG. 2 is a flowchart of another AR-based robot IOT interaction method provided by an embodiment of this application;
FIG. 3 is a schematic diagram of a robot task line provided by an embodiment of this application;
FIG. 4 is a schematic structural diagram of an AR-based robot IOT interaction apparatus provided by an embodiment of this application;
FIG. 5 is a schematic structural diagram of another AR-based robot IOT interaction apparatus provided by an embodiment of this application.
DETAILED DESCRIPTION
The technical solutions in the embodiments of this application are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of this application.
The core of this application is to provide an AR-based robot IOT interaction method, apparatus, and medium. The method reuses the camera and IMU sensor built into a terminal device, simplifying the robot's hardware composition and reducing hardware cost.
To help those skilled in the art better understand the solution, this application is further described in detail below with reference to the drawings and specific embodiments.
It should be noted that the terminal device mentioned in this invention is a mobile or fixed networked computing device, such as a smartphone or tablet, that contains a camera and an IMU sensor. An AR application is installed on the terminal device; it may be developed with Google's mobile ARCore SDK, Apple's ARKit SDK, Unity's AR Foundation SDK, or a similar software development kit, none of which affects the implementation of this solution. The terminal device may run an ordinary Android or iOS system, the robot is a programmable robot, and the IOT devices in the surrounding environment include 3D printers, humidity sensors, and the like. The terminal device, the robot, and the IOT devices are communicatively connected within the same local area network. Understandably, the AR-based robot IOT interaction method mentioned in this application may be implemented by a microcontroller unit (MCU) or another type of control device in the robot, neither of which affects the implementation of the technical solution.
FIG. 1 is a flowchart of an AR-based robot IOT interaction method provided by an embodiment of this application. As shown in FIG. 1, the method includes:
S10: Acquire a navigation path and an IOT task.
The navigation path is drawn by the terminal device according to a scene map, and the scene map is built from scene images captured by the terminal device's camera and data from the terminal device's IMU sensor.
In a specific implementation, as the terminal device moves, its built-in camera captures images of the surrounding scene. The AR application marks feature points in these images and tracks how the points move over time. By combining the motion of these points with data from the terminal device's IMU sensor, the application estimates the camera's position and screen orientation as the device moves and detects the floor plane, so that virtual content can be rendered from the correct perspective and overlaid on the captured scene image, thereby recovering a coordinate system for the real environment.
After the scene map has been drawn from the surroundings, the user can plan a navigation path in the path-planning interface of the AR application. It should be noted that the path can be planned in two ways: by hand-drawing or by hand-held movement. In the hand-drawing mode, the user draws a curve on the screen to add the robot's navigation path. Internally, starting from the terminal device's plane, a ray is computed for each point the user's finger passes over while drawing, projected onto the target plane, and the resulting points are connected into a line to complete the path. In the hand-held movement mode, the user carries the terminal device and walks along the desired navigation path while the device records the entire route. Each mode has its advantages: hand-drawing suits small areas and is quick and convenient, while hand-held movement suits larger work areas and is more direct.
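The hand-drawing mode described above amounts to casting a ray from the device through each touched screen point and intersecting it with the detected floor plane. Below is a minimal sketch of that projection step, assuming the camera position and per-touch ray directions are already known (in ARCore or ARKit the same result would come from the frame's hit-test API rather than manual intersection):

```python
def ray_plane_hit(origin, direction, floor_y=0.0):
    """Intersect a ray with the horizontal floor plane y == floor_y.

    Returns the hit point as (x, y, z), or None if the ray points
    parallel to or away from the floor.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dy >= 0:  # ray never reaches the floor
        return None
    t = (floor_y - oy) / dy  # ray parameter where it meets the plane
    return (ox + t * dx, floor_y, oz + t * dz)


def draw_path(origin, directions, floor_y=0.0):
    """Connect successive ray hits into a navigation path (a polyline)."""
    path = []
    for d in directions:
        hit = ray_plane_hit(origin, d, floor_y)
        if hit is not None:
            path.append(hit)
    return path


# Device held 1.5 m above the floor; two finger positions become two rays.
waypoints = draw_path((0.0, 1.5, 0.0), [(0.0, -1.0, 1.0), (0.5, -1.0, 1.0)])
```

With the camera at (0, 1.5, 0), the first ray hits the floor at (0, 0, 1.5); connecting such hits point by point yields the drawn navigation path.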
In a specific implementation, after the navigation path is set, the user places the terminal device at a designated position on the robot; this position may be a recess or a horizontal platform, neither of which is limited by this application. The robot is communicatively connected to the terminal device over the local area network and acquires the IOT task from it.
S11: Receive the current position determined by the camera.
S12: Travel from the current position along the navigation path to complete the IOT task.
In a specific implementation, the robot obtains its current position, current pose, and other information through the terminal device's built-in camera, then travels along the set navigation path, visiting the IOT devices in turn and completing the corresponding IOT tasks.
In the AR-based robot IOT interaction method provided by this embodiment, the acquired navigation path is drawn by the terminal device according to a scene map built from the images captured by the terminal device's camera and the data of its IMU sensor. After acquiring an IOT task, the robot can therefore complete it along the pre-planned navigation path without lidar, cameras, or similar devices configured on the robot itself: reusing the camera and IMU sensor built into the terminal device simplifies the robot's hardware composition and reduces hardware cost. In addition, there is no need to configure the ROS operating system on the robot; path planning is handed over to the terminal device, and because users operate their terminal devices every day, the operation is more familiar, convenient, and fast, improving the user experience.
On the basis of the above embodiment, the device information of an IOT device is stored in the form of a two-dimensional code, so that after the terminal device scans the code it registers the IOT device in the scene map drawn by the terminal device.
The device information specifically includes the function information and position information of the IOT device.
In a specific implementation, the device information of an IOT device includes its IP address, interaction protocol, position information, and the like. This information can be stored as a two-dimensional code or barcode, printed as a sticker, and attached to the IOT device. After scanning, the terminal device obtains the corresponding information, registers the IOT device on the established scene map, and generates a virtual icon at the device's location. Tapping the icon pops up a function menu, and a virtual guide route pointing to the device is generated in front of it, forming a visual guide.
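The scan-and-register step can be sketched as decoding a small structured payload and keying the device into the scene map. The JSON field names below (ip, protocol, function, position) are illustrative assumptions; the application does not fix a payload schema:

```python
import json


def register_device(scene_map, qr_payload):
    """Decode a scanned QR payload and register the device on the scene map.

    Raises ValueError if a required field is missing from the payload.
    """
    info = json.loads(qr_payload)
    required = {"ip", "protocol", "function", "position"}
    missing = required - info.keys()
    if missing:
        raise ValueError(f"QR payload missing fields: {sorted(missing)}")
    # Key the device by IP; the position anchors its virtual icon in AR.
    scene_map[info["ip"]] = info
    return info


scene_map = {}
payload = ('{"ip": "192.168.1.20", "protocol": "mqtt",'
           ' "function": "3d_printer", "position": [2.5, 0.0, 1.2]}')
register_device(scene_map, payload)
```

Once registered this way, the entry holds everything needed both to render the virtual icon at the device's position and to open a connection for the later IOT task.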
In the AR-based robot IOT interaction method provided by this embodiment, the device information of IOT devices is stored as two-dimensional codes, so a user can obtain the relevant information directly by scanning. The operation is familiar, convenient, and fast, improving the user experience.
FIG. 2 is a flowchart of another AR-based robot IOT interaction method provided by an embodiment of this application. As shown in FIG. 2, on the basis of the above embodiment, the method further includes:
S13: Acquire a charging path.
In a specific implementation, the charging path is planned in the same way as the navigation path in the above embodiment; see the description of the navigation path, which is not repeated here.
S14: Detect the current battery level.
S15: Determine whether the current battery level is below the set value; if so, proceed to S16, otherwise return to S14.
S16: Interrupt the IOT task and travel along the charging path to the charging location to charge.
S17: Resume the IOT task after charging is complete.
In a specific implementation, while performing a task the robot monitors its own state at all times. When the battery level falls below the set value, the robot requests a temporary interruption of the task, goes to a nearby charging point to charge, and continues the unfinished task once charging is complete.
It should be noted that the purpose of the set value is to prompt the robot to interrupt the IOT task when its battery drops to that value, so the set value can be a fixed value.
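Steps S14 to S17 form a monitor-and-resume loop around the task. A minimal sketch, assuming a step-wise task whose progress survives the interruption; the 20% threshold and the per-step battery cost are illustrative values, not fixed by the application:

```python
LOW_BATTERY = 20  # percent; the fixed "set value" from S15


def run_task(steps, battery_level, step_cost=5):
    """Execute task steps, interrupting to recharge when the battery is low.

    Returns the completed steps and the number of recharge trips taken.
    """
    done, recharges = [], 0
    for step in steps:
        if battery_level < LOW_BATTERY:  # S15: below the set value?
            battery_level = 100          # S16: travel charging path, charge
            recharges += 1               # S17: then resume where we left off
        done.append(step)                # execute the next task step
        battery_level -= step_cost
    return done, recharges


done, recharges = run_task([f"visit_device_{i}" for i in range(20)],
                           battery_level=30)
```

Because the loop keeps its place in `steps` across the recharge, no unsaved task progress is lost, which is exactly the guarantee described below.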
In the AR-based robot IOT interaction method provided by this embodiment, when the current battery level is below the set value the robot pauses its task and travels along the charging path to the charging location. This guarantees normal interaction between the robot and the IOT devices and prevents loss of unsaved IOT task progress, improving the user experience.
On the basis of the above embodiment, the method further includes:
writing the process of executing the IOT task into a task record.
In a specific implementation, the robot's data during IOT task execution is automatically uploaded to the terminal device, and the entire task execution process is recorded, which is convenient for later analysis and task optimization by the user.
On the basis of the above embodiment, the method further includes:
displaying, through the terminal device, the execution progress of the IOT task and the IOT devices to be visited.
FIG. 3 is a schematic diagram of a robot task line provided by an embodiment of this application. As shown in FIG. 3, to visualize the entire workflow, this application also provides a task management method in which all tasks are connected in series by a task line. The task line represents one task flow and shows the robot's current task progress and the tasks that have been added. It displays the current progress and the IOT devices to be visited in real time; the user can edit the path within it, add or delete IOT tasks, and set task completion times, repetition counts, and so on. After editing, the user can simulate the task flow in the terminal device's AR application before executing it, with the whole process presented on the terminal device's interface in augmented reality.
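The task line described above can be modeled as an ordered sequence of tasks with a progress readout and the list of devices still to be visited. A minimal sketch; the field names are illustrative assumptions, not the application's data model:

```python
from dataclasses import dataclass, field


@dataclass
class Task:
    device: str         # which IOT device this task targets
    action: str         # e.g. "fetch_print", "read_humidity"
    repeat: int = 1     # user-set repetition count
    done: bool = False


@dataclass
class TaskLine:
    tasks: list = field(default_factory=list)

    def add(self, task):
        self.tasks.append(task)

    def complete_next(self):
        """Mark the next pending task done, as the robot finishes it."""
        for t in self.tasks:
            if not t.done:
                t.done = True
                return t
        return None

    def progress(self):
        """Fraction of tasks finished, as shown on the task line."""
        if not self.tasks:
            return 0.0
        return sum(t.done for t in self.tasks) / len(self.tasks)

    def upcoming_devices(self):
        """IOT devices the robot will visit next."""
        return [t.device for t in self.tasks if not t.done]


line = TaskLine()
line.add(Task("3d_printer", "fetch_print"))
line.add(Task("humidity_sensor", "read_humidity"))
line.complete_next()
```

The `progress()` and `upcoming_devices()` readouts correspond to what the AR interface renders in real time along the task line.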
In the AR-based robot IOT interaction method provided by this embodiment, the terminal device can display the execution progress of IOT tasks, making it easy for the user to follow the progress and edit tasks in time, improving the user experience.
On the basis of the above embodiment, the method further includes:
storing the IOT task when a task-repetition signal is received.
In a specific implementation, for a repetitive task such as moving printed models from a 3D printer on a production line to a sorting point, the user can attach a task-repetition signal to the task. When the robot receives this signal, it stores the IOT task and, as the signal indicates, executes it a specified number of times or in a repeating loop.
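The repetition signal can be sketched as storing the task together with its run count and replaying it on demand. A minimal sketch under the assumption that a task is a callable and the signal carries the number of runs:

```python
class TaskStore:
    """Store IOT tasks flagged as repetitive and replay them on demand."""

    def __init__(self):
        self._tasks = {}

    def on_repetition_signal(self, name, task_fn, runs):
        # Triggered when the robot receives a task-repetition signal:
        # the task is stored rather than discarded after one execution.
        self._tasks[name] = (task_fn, runs)

    def execute(self, name):
        """Run the stored task the number of times the signal specified."""
        task_fn, runs = self._tasks[name]
        return [task_fn(i) for i in range(runs)]


store = TaskStore()
store.on_repetition_signal(
    "move_prints",
    lambda i: f"moved print {i} to sorting point",
    runs=3,
)
results = store.execute("move_prints")
```

A "repeat until cancelled" mode would replace the fixed range with a loop that checks a cancellation flag between runs.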
In the AR-based robot IOT interaction method provided by this embodiment, the robot can execute a task once or repeatedly according to the user's needs, which enhances the interaction between the robot and the surrounding IOT devices and thus extends the user's ability to interact with the surrounding environment.
The above embodiments describe the AR-based robot IOT interaction method in detail; this application also provides corresponding embodiments of an AR-based robot IOT interaction apparatus. It should be noted that the apparatus embodiments are described from two perspectives: one based on functional modules and the other based on hardware.
FIG. 4 is a schematic structural diagram of an AR-based robot IOT interaction apparatus provided by an embodiment of this application. As shown in FIG. 4, from the functional-module perspective, the apparatus includes:
a first acquisition module 10, configured to acquire a navigation path and an IOT task, wherein the navigation path is drawn by the terminal device according to a scene map, and the scene map is built from scene images captured by the terminal device's camera and data from its IMU sensor;
a receiving module 11, configured to receive the current position determined by the camera; and
a traveling module 12, configured to travel from the current position along the navigation path to complete the IOT task.
As a preferred implementation, the apparatus further includes:
a second acquisition module, configured to acquire a charging path;
a detection module, configured to detect the current battery level;
a determination module, configured to determine whether the current battery level is below the set value;
a charging module, configured to interrupt the IOT task and travel along the charging path to the charging location to charge; and
an execution module, configured to resume the IOT task after charging is complete.
Since the apparatus embodiments correspond to the method embodiments, refer to the description of the method embodiments for details, which are not repeated here.
In the AR-based robot IOT interaction apparatus provided by this application, the acquired navigation path is drawn by the terminal device according to a scene map built from the scene images captured by the terminal device's camera and the data of its IMU sensor. After acquiring an IOT task, the robot can therefore complete it along the pre-planned navigation path without lidar, cameras, or similar devices configured on the robot itself: reusing the camera and IMU sensor built into the terminal device simplifies the robot's hardware composition and reduces hardware cost. In addition, there is no need to configure the ROS operating system on the robot; path planning is handed over to the terminal device, and because users operate their terminal devices every day, the operation is more familiar, convenient, and fast, improving the user experience.
FIG. 5 is a structural diagram of an AR-based robot IOT interaction apparatus provided by another embodiment of this application. As shown in FIG. 5, from the hardware perspective, the apparatus includes: a memory 20 for storing a computer program;
and a processor 21, configured to implement, when executing the computer program, the steps of the AR-based robot IOT interaction method of the above embodiments.
The processor 21 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 21 may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 21 may also include a main processor and a coprocessor: the main processor, also called the CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 21 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be shown on the display. In some embodiments, the processor 21 may further include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 20 may include one or more computer-readable storage media, which may be non-transitory. The memory 20 may also include high-speed random-access memory and non-volatile memory such as one or more magnetic disk or flash storage devices. In this embodiment, the memory 20 at least stores a computer program 201 which, when loaded and executed by the processor 21, implements the relevant steps of the AR-based robot IOT interaction method disclosed in any of the above embodiments. The resources stored in the memory 20 may also include an operating system 202, data 203, and the like, stored either temporarily or permanently. The operating system 202 may include Windows, Unix, Linux, and so on. The data 203 may include, but is not limited to, the position information of IOT devices.
In some embodiments, the apparatus may further include a bus 22, which may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in FIG. 5, but this does not mean there is only one bus or one type of bus.
Those skilled in the art will understand that the structure shown in FIG. 5 does not limit the AR-based robot IOT interaction apparatus, which may include more or fewer components than shown.
The AR-based robot IOT interaction apparatus provided by this embodiment includes a memory and a processor. When executing the program stored in the memory, the processor can implement the following method: acquire a navigation path drawn by the terminal device according to a scene map, the scene map being built from the scene images captured by the terminal device's camera and the data of its IMU sensor, so that after acquiring an IOT task the robot can complete it along the pre-planned navigation path. No lidar, camera, or similar device needs to be configured on the robot; reusing the camera and IMU sensor built into the terminal device simplifies the robot's hardware composition and reduces hardware cost. In addition, there is no need to configure the ROS operating system on the robot; path planning is handed over to the terminal device, and because users operate their terminal devices every day, the operation is more familiar, convenient, and fast, improving the user experience.
Finally, the present application also provides an embodiment corresponding to a computer-readable storage medium. A computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps described in the foregoing method embodiments are implemented.
It can be understood that, if the methods in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and executes all or part of the steps of the methods described in the various embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The AR-based robot Internet of Things interaction method, apparatus, and medium provided by the present application have been described in detail above. The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts the embodiments may be referred to one another. Since the apparatus disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief, and the relevant parts may be found in the description of the method. It should be pointed out that, for those of ordinary skill in the art, several improvements and modifications may be made to the present application without departing from its principles, and these improvements and modifications also fall within the protection scope of the claims of the present application.
It should also be noted that, in this specification, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element qualified by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.

Claims (10)

  1. An AR-based robot Internet of Things interaction method, characterized by comprising:
    obtaining a navigation path and an IOT task, wherein the navigation path is drawn by a terminal device according to a scene map, and the scene map is built according to scene images captured by a camera of the terminal device and data from an IMU sensor of the terminal device;
    receiving a current location determined by the camera;
    traveling from the current location along the navigation path to complete the IOT task.
  2. The AR-based robot Internet of Things interaction method according to claim 1, characterized in that device information of an IOT device is stored in the form of a two-dimensional code, so that after the terminal device scans the two-dimensional code, the IOT device is registered into the scene map drawn by the terminal device.
  3. The AR-based robot Internet of Things interaction method according to claim 2, wherein the device information specifically comprises function information and location information of the IOT device.
  4. The AR-based robot Internet of Things interaction method according to claim 1, characterized by further comprising:
    obtaining a charging path;
    detecting a current battery level;
    determining whether the current battery level is less than a set value, and if so, interrupting the IOT task and traveling along the charging path to a charging location to charge;
    continuing to execute the IOT task after charging is completed.
  5. The AR-based robot Internet of Things interaction method according to claim 1, characterized by further comprising:
    writing the process of executing the IOT task into a task record.
  6. The AR-based robot Internet of Things interaction method according to claim 1, characterized by further comprising:
    displaying, through the terminal device, the execution progress of the IOT task and the IOT devices to be visited.
  7. The AR-based robot Internet of Things interaction method according to any one of claims 1 to 6, characterized by further comprising:
    storing the IOT task when a task repetition signal is received.
  8. An AR-based robot Internet of Things interaction apparatus, characterized by comprising:
    an acquisition module, configured to obtain a navigation path and an IOT task, wherein the navigation path is drawn by a terminal device according to a scene map, and the scene map is built according to scene images captured by a camera of the terminal device and data from an IMU sensor of the terminal device;
    a receiving module, configured to receive a current location determined by the camera;
    a traveling module, configured to travel from the current location along the navigation path to complete the IOT task.
  9. An AR-based robot Internet of Things interaction apparatus, characterized by comprising: a memory, configured to store a computer program; and
    a processor, configured to implement, when executing the computer program, the steps of the AR-based robot Internet of Things interaction method according to any one of claims 1 to 7.
  10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the AR-based robot Internet of Things interaction method according to any one of claims 1 to 7 are implemented.
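The battery-management flow recited in claim 4 can be sketched as the following control loop. This is a minimal illustration, not the claimed implementation; the threshold value, the `Robot` stub, and all method names are assumptions introduced here:

```python
class Robot:
    """Minimal robot stub for illustrating the charging logic of claim 4."""

    def __init__(self, battery):
        self.battery = battery
        self.history = []

    def follow(self, path):
        # Travel along a pre-planned path (here, the charging path).
        self.history.append(f"follow:{path}")

    def perform(self, step):
        # Each task step drains some battery (an arbitrary illustrative amount).
        self.battery -= 10
        self.history.append(f"do:{step}")


def run_iot_task(robot, iot_task, charging_path, threshold=20):
    """Execute the IOT task; if the current battery level drops below the
    set value, interrupt the task, travel along the charging path to the
    charging location, and continue the task after charging completes."""
    for step in iot_task:
        if robot.battery < threshold:       # detect current battery level
            robot.follow(charging_path)     # interrupt and go charge
            robot.battery = 100             # charging completed
        robot.perform(step)                 # continue executing the IOT task
```

The loop checks the battery before each step, so a long task is interrupted at most once per low-battery event and always resumes where it left off, matching the "interrupt, charge, continue" sequence of claim 4.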
PCT/CN2020/112502 2020-06-29 2020-08-31 Ar-based robot internet of things interaction method and apparatus, and medium WO2022000757A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010605081.3 2020-06-29
CN202010605081.3A CN111784797A (en) 2020-06-29 2020-06-29 Robot networking interaction method, device and medium based on AR

Publications (1)

Publication Number Publication Date
WO2022000757A1 true WO2022000757A1 (en) 2022-01-06

Family

ID=72760190

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/112502 WO2022000757A1 (en) 2020-06-29 2020-08-31 Ar-based robot internet of things interaction method and apparatus, and medium

Country Status (2)

Country Link
CN (1) CN111784797A (en)
WO (1) WO2022000757A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112346459A (en) * 2020-11-04 2021-02-09 深圳优地科技有限公司 Robot operation method and device, robot and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102866706A (en) * 2012-09-13 2013-01-09 深圳市银星智能科技股份有限公司 Cleaning robot adopting smart phone navigation and navigation cleaning method thereof
CN108459597A (en) * 2017-07-26 2018-08-28 炬大科技有限公司 A kind of mobile electronic device and method for handling the task of mission area
CN108769113A (en) * 2018-04-18 2018-11-06 特斯联(北京)科技有限公司 A kind of robot device and its management system for Internet of Things running maintenance
CN109460040A (en) * 2018-12-28 2019-03-12 珠海凯浩电子有限公司 It is a kind of that map system and method are established by mobile phone shooting photo array floor
WO2019209882A1 (en) * 2018-04-23 2019-10-31 Purdue Research Foundation Augmented reality interface for authoring tasks for execution by a programmable robot

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107092264A (en) * 2017-06-21 2017-08-25 北京理工大学 Towards the service robot autonomous navigation and automatic recharging method of bank's hall environment
CN108594692A (en) * 2017-12-18 2018-09-28 深圳市奇虎智能科技有限公司 A kind of cleaning equipment control method, device, computer equipment and storage medium
CN108681402A (en) * 2018-05-16 2018-10-19 Oppo广东移动通信有限公司 Identify exchange method, device, storage medium and terminal device
CN110543170A (en) * 2019-08-21 2019-12-06 广东博智林机器人有限公司 Charging control method and device for robot and robot with charging control device
CN110554699A (en) * 2019-08-26 2019-12-10 广东博智林机器人有限公司 Robot control system and control method
CN110908380B (en) * 2019-11-29 2022-10-14 国网智能科技股份有限公司 Autonomous inspection method and system for cable tunnel robot

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102866706A (en) * 2012-09-13 2013-01-09 深圳市银星智能科技股份有限公司 Cleaning robot adopting smart phone navigation and navigation cleaning method thereof
CN108459597A (en) * 2017-07-26 2018-08-28 炬大科技有限公司 A kind of mobile electronic device and method for handling the task of mission area
CN108769113A (en) * 2018-04-18 2018-11-06 特斯联(北京)科技有限公司 A kind of robot device and its management system for Internet of Things running maintenance
WO2019209882A1 (en) * 2018-04-23 2019-10-31 Purdue Research Foundation Augmented reality interface for authoring tasks for execution by a programmable robot
CN109460040A (en) * 2018-12-28 2019-03-12 珠海凯浩电子有限公司 It is a kind of that map system and method are established by mobile phone shooting photo array floor

Also Published As

Publication number Publication date
CN111784797A (en) 2020-10-16

Similar Documents

Publication Publication Date Title
US10929980B2 (en) Fiducial marker patterns, their automatic detection in images, and applications thereof
Park et al. Deep learning-based smart task assistance in wearable augmented reality
WO2022078467A1 (en) Automatic robot recharging method and apparatus, and robot and storage medium
US20140362084A1 (en) Information processing device, authoring method, and program
CN109671118A (en) A kind of more people's exchange methods of virtual reality, apparatus and system
CN110926334B (en) Measuring method, measuring device, electronic device and storage medium
CN104781849A (en) Fast initialization for monocular visual simultaneous localization and mapping (SLAM)
US20220414910A1 (en) Scene contour recognition method and apparatus, computer-readable medium, and electronic device
CN107885871A (en) Synchronous superposition method, system, interactive system based on cloud computing
WO2019217159A1 (en) Immersive feedback loop for improving ai
CN104982090A (en) Personal information communicator
CN115578433B (en) Image processing method, device, electronic equipment and storage medium
KR20200136723A (en) Method and apparatus for generating learning data for object recognition using virtual city model
CN109934165A (en) A kind of joint point detecting method, device, storage medium and electronic equipment
WO2022000757A1 (en) Ar-based robot internet of things interaction method and apparatus, and medium
CN113378605B (en) Multi-source information fusion method and device, electronic equipment and storage medium
Mazzamuto et al. A Wearable Device Application for Human-Object Interactions Detection.
JP7375149B2 (en) Positioning method, positioning device, visual map generation method and device
CN107766476A (en) Mass-rent data processing method, device, equipment and storage medium based on building block number evidence
WO2023025175A1 (en) Spatial positioning method and apparatus
CN116642490A (en) Visual positioning navigation method based on hybrid map, robot and storage medium
US10755459B2 (en) Object painting through use of perspectives or transfers in a digital medium environment
CN115082690A (en) Target recognition method, target recognition model training method and device
CN114241046A (en) Data annotation method and device, computer equipment and storage medium
CN114329675A (en) Model generation method, model generation device, electronic device, and readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20942550

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20942550

Country of ref document: EP

Kind code of ref document: A1