WO2022118656A1 - Apparatus and method for simulating a mobile robot at a work site - Google Patents
Apparatus and method for simulating a mobile robot at a work site
- Publication number: WO2022118656A1 (PCT/JP2021/042214)
- Authority: WIPO (PCT)
- Prior art keywords: mobile robot, virtual mobile, environment, map, virtual
- Prior art date
Classifications
- B25J9/1664 — Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
- G09B29/006 — Representation of non-cartographic information on maps, e.g. population distribution, wind direction, radiation levels, air and sea routes
- B25J19/023 — Optical sensing devices including video camera means
- B25J9/1697 — Vision controlled systems
- G05D1/0253 — Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
Definitions
- The present invention relates to an apparatus and method for simulating a mobile robot at a work site, and more specifically to an apparatus and method for collecting the information necessary to simulate a mobile robot and/or to plan the placement of one or more mobile robots at the work site.
- autonomous mobile robots are used to perform tasks such as transporting objects / goods, executing production work, and / or inspecting the environment. These robots can interact with humans, such as receiving instructions to move between designated locations.
- an apparatus and a method for simulating a mobile robot at a work site are provided.
- FIG. 1 is a diagram showing a system configuration including an apparatus according to one embodiment of the present disclosure. FIG. 2 is a flowchart of a method according to one example of the present disclosure. FIG. 3 is a screen view of a graphical user interface showing an initialization step performed before simulation of a virtual mobile robot. FIG. 4 is a cut-out view of a graphical user interface screen in which the virtual mobile robot is placed, with the virtual mobile robot highlighted. FIG. 5 is a view of a graphical user interface screen in which the virtual mobile robot is placed. FIG. 6 shows an example of a load list from which the user can select a load to be mounted on the virtual mobile robot of one embodiment of the present disclosure.
- Autonomous mobile robots may be deployed to support production processes and manufacturing processes.
- Each mobile robot may be equipped with a machine or device and may travel to a station to perform a task using the mounted machine or device.
- In this disclosure, the term "load" (payload) is used to refer to such an onboard machine or device.
- The embodiments of the present disclosure can provide apparatuses and methods that simplify the planning and deployment of mobile robots and shorten the required time to a period of a few weeks.
- the device and method provide a simple and convenient solution for collecting information from the worksite for planning.
- the device and method also provide potential users with demonstration tools for demonstrating the capabilities of mobile robots and simulating post-deployment scenarios for mobile robots.
- the device may be a handheld mobile device such as a smartphone, tablet terminal, laptop or the like.
- the device may be a real-life mobile robot (a real-life robot present at the work site) having a movable mobile base.
- the device may be a movable cart that can be pushed by the user.
- The mobile base may have wheels and/or tracks for moving the mobile robot.
- the mobile robot may be configured to move around the work site by itself. Further, the mobile robot may be remotely controlled by the user to move to the work site.
- A movable cart may also have a mobile base equipped with wheels and/or tracks, and may be pushed around by the user.
- the device may include one or more sensors used for navigation / area mapping purposes and one or more cameras for acquiring images (photos or videos).
- the term "mapping" means scanning or sensing the worksite environment for the purpose of creating a map for the worksite environment. Mapping means the process of creating a part of a map or the process of creating an entire map. Mapping input refers to data for creating a map of all or part of it. Such mapping inputs can be obtained from one or more sensors. Further, the one or more sensors may be a laser sensor (for LiDAR system or the like), an infrared sensor, an ultrasonic sensor, or the like, or may include them.
- LiDAR relates to a technique for measuring a distance (distance measurement) by irradiating an object with a laser beam and measuring the reflection thereof with a sensor.
- An object can be represented digitally in three dimensions (3D) by utilizing the difference in the reflection time and wavelength of the laser.
- This device may be configured to have a SLAM (Simultaneous Localization and Mapping) function.
- SLAM can be executed using, for example, an AMCL (Adaptive Monte Carlo Localization) algorithm.
- the AMCL algorithm relates to a stochastic localization system for an object (known as an AMCL node) that moves on a two-dimensional (2D) map, that is, is navigated on a 2D map.
- the AMCL node is configured to work with laser scans and laser maps by LiDAR, but may be extended to work with other sensor data such as sonar and stereo.
- When SLAM and AMCL are used, it is necessary to create a 2D map of the environment (usually a top view of the environment).
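- As a concrete illustration of the Monte Carlo localization idea referenced above, the following is a minimal sketch of a particle-filter update over a 2D occupancy grid. It assumes a simple likelihood-field sensor model and Gaussian motion noise; the function and parameter names are illustrative and this is not the AMCL implementation itself.

```python
import numpy as np

def mcl_update(particles, weights, odom_delta, scan_xy, occ_grid, cell_mm=100.0,
               noise_xy=20.0, noise_th=0.02):
    """One Monte Carlo localization step: motion update + measurement weighting.

    particles : (N, 3) float array of [x_mm, y_mm, theta_rad] pose hypotheses
    weights   : (N,) float array of particle weights
    odom_delta: (dx_mm, dy_mm, dtheta_rad) measured by odometry in the robot frame
    scan_xy   : (M, 2) laser endpoints in the robot frame (mm)
    occ_grid  : 2D bool array, True where the map is occupied
    """
    n = len(particles)
    dx, dy, dth = odom_delta

    # Motion update: apply the odometry delta in each particle's frame, plus noise.
    cos_t, sin_t = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    particles[:, 0] += dx * cos_t - dy * sin_t + np.random.normal(0, noise_xy, n)
    particles[:, 1] += dx * sin_t + dy * cos_t + np.random.normal(0, noise_xy, n)
    particles[:, 2] += dth + np.random.normal(0, noise_th, n)

    # Measurement update: reward particles whose projected scan endpoints
    # land on occupied map cells (a crude likelihood-field model).
    for i, (px, py, pth) in enumerate(particles):
        c, s = np.cos(pth), np.sin(pth)
        gx = ((px + scan_xy[:, 0] * c - scan_xy[:, 1] * s) / cell_mm).astype(int)
        gy = ((py + scan_xy[:, 0] * s + scan_xy[:, 1] * c) / cell_mm).astype(int)
        ok = (gx >= 0) & (gx < occ_grid.shape[1]) & (gy >= 0) & (gy < occ_grid.shape[0])
        weights[i] = occ_grid[gy[ok], gx[ok]].sum() + 1e-6

    weights /= weights.sum()

    # Resample proportionally to the weights and reset to uniform weights.
    idx = np.random.choice(n, size=n, p=weights)
    return particles[idx].copy(), np.full(n, 1.0 / n)
```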
- This device is configured to operate or execute software.
- This software may include a mobile application for a mobile operating system (Google's Android, the iPhone operating system, etc.) or an application that runs on a portable computer (notebook, laptop, etc.) operating system (Windows, MacOS, etc.).
- the software works with one or more sensors to perform navigation / area mapping of the worksite environment.
- the software also helps the user acquire data other than navigation / area mapping data (cycle time, utilization, etc.), as described in more detail below.
- the software is configured to create a virtual mobile robot for display and to simulate the operation and / or operation of the created virtual mobile robot in a worksite environment.
- the software may provide a graphical user interface to facilitate the user's configuration and / or use of the software.
- the present device may have a display such as an LCD, an LED, or an OLED screen.
- the display is used to display the graphical user interface of the software.
- The present device may display the environment of the work site as the background of the display, and may be configured to superimpose or augment the created virtual mobile robot onto the displayed environment.
- the environment may be acquired by one or more cameras.
- the one or more sensors may be other types of sensors such as a laser sensor, an infrared sensor, and an ultrasonic sensor. These sensors can map the environment, provide data for rendering graphics representing the environment, and display it on a display.
- the environment displayed as the background is a complete virtual environment created based on the mapping input (for example, a virtual environment can simulate a fully furnished environment) or a partial virtual environment with overlays.
- the appearance of the virtual graphic content and / or the graphic overlay may be customizable.
- the user can input to control the operation of the virtual mobile robot. Such user input can be used to program the actual mobile robot to be deployed.
- A display may be installed at a remote location so that a user can run the simulation and/or collect information from the simulation of the virtual mobile robot.
- a graphical user interface for operating a virtual mobile robot can be provided to the user.
- the graphic of the virtual mobile robot may be in any form as long as a movable object is displayed. For example, it may be a simple sphere or a box-shaped object without elaborate design. Further, the graphic may be a realistic representation of an actual mobile robot.
- This device is configured to execute tasks a) to e) as follows.
- the map may be a two-dimensional (2D) and / or a three-dimensional (3D) map. All references to "map" in this disclosure refer to this map.
- the 2D map may be a top view of the environment.
- the 3D map may be created first and then converted to a 2D map.
- the map may be generated by first moving the device around the work environment and scanning and mapping the environment. After the map is generated, object data related to obstacles, including localization data, global path data, stationary objects and / or dynamic objects, and / or other information may be collected. Examples of dynamic objects include humans, machines, and / or random objects placed on the floor.
- Alternatively, the map may be generated as the device moves through the worksite environment while simultaneously collecting localization data, global path data, object data about obstacles (including stationary and/or dynamic objects), and/or other information.
- a) Acquire localization data, that is, data regarding the position of the virtual mobile robot on the generated map.
- This device, which exists in the work site environment, can be regarded as the hardware of the virtual mobile robot. Therefore, when the device tracks its own position, the position of the virtual mobile robot can be tracked by using the position of the device.
- Localization means that the device tracks its position on the map. Localization is a process used by the device to estimate its position in the environment based on data collected from one or more sensors. For example, two types of localization may be used. Laser localization uses data from a laser sensor of the device (such as a LiDAR scanner) together with the map to calculate the device's position; this may be set as the default method by which the device tracks its location in the environment (or operating space).
- the device may use light to track its location in the environment.
- If the device is a mobile device such as a smartphone, the light may be provided by a torchlight function; if the device is an actual mobile robot controlled by the user or a mobile cart pushed by the user, the light may be a headlight.
- The light localization process is used in dynamic environments where objects move frequently relative to the device, and localization can be performed based on the objects and/or the past positions of the device.
- the map is compared with the data collected from one or more sensors and the position of the virtual mobile robot is corrected. With this information, the device can track the position of the virtual mobile robot on the map.
- a route planning method called cost-based route planning may be used for automatic route calculation.
- A virtual mobile robot instructed to proceed to a goal (such as a waypoint set by the user in the software) searches the map, based on the information on the map, for the most efficient route from its current location to the goal. This route is called the global route and is the optimum route between the two points.
- The virtual mobile robot heads for the goal along this route, and in the process can avoid obstacles that are not recorded on the map (objects that obstruct the movement of the virtual mobile robot and are not recorded on the map).
- When the device detects an object that is not on the map, the device changes the local route of the virtual mobile robot to avoid the object. If the virtual mobile robot cannot follow the global route, the device may replan a new global route for the virtual mobile robot based on the information on the map.
- the cycle time is the time required for the virtual mobile robot to complete a work cycle including virtually executing a task assigned to the virtual mobile robot.
- the operating rate data is data indicating how much the virtual mobile robot is used.
- Other data that may be collected include the traffic conditions for movement at the work site.
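- As a simple illustration of how these figures could be derived from a simulation log, the sketch below computes a cycle time and an operating (utilization) rate; the activity names and values are assumptions for this example and are not taken from the disclosure.

```python
# Hypothetical simulation log: (activity, duration in seconds) for one work cycle.
cycle_log = [("travel to pick-up", 42.0), ("load transfer", 15.0),
             ("travel to drop-off", 55.0), ("unload", 12.0), ("idle", 30.0)]

cycle_time = sum(t for _, t in cycle_log)                      # 154.0 s per cycle
busy_time = sum(t for name, t in cycle_log if name != "idle")  # 124.0 s of useful work
utilization = busy_time / cycle_time                           # ~0.81 operating rate

print(f"cycle time: {cycle_time:.0f} s, utilization: {utilization:.0%}")
```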
- the present device may be configured to receive the following six types of inputs (i) to (vi) from the user.
- The user may input waypoints at specific points or areas in the environment displayed on the display as the device moves. For example, if the display is a touch screen, the user may select a point or area in the environment shown on the display and set it as a waypoint. Waypoints may also be set by selecting points or areas on the created map via the graphical user interface. A comment or a goal may be attached to each waypoint. For example, waypoints may be set as points or areas for performing specific actions, such as a charging area for simulating the charging of a secondary battery of the virtual mobile robot, an area for exchanging loads mounted on the virtual mobile robot, a repair area for the virtual mobile robot, or an area where the virtual mobile robot performs specific production or warehouse operations.
- the waypoint also functions as a guide for navigating the path of the virtual mobile robot simulated by this device.
- the waypoints may be numbered to indicate the priority of the virtual mobile robot as it moves between the waypoints.
- the user may partition one or more zones or areas in the environment where the virtual mobile robot is allowed to enter, and / or zones or areas where the virtual mobile robot is not allowed to enter. Resistance or priority levels may be assigned to these partitioned zones or areas for route planning purposes if it is preferred or not fully permitted to enter certain zones. Details of resistance and priority are provided below.
- the user may enter route planning parameters via the graphical user interface of the software executed by the device.
- the route planning parameters control how the virtual mobile robot moves in the environment.
- A non-exhaustive list of route planning parameters is as follows. 1) Maximum movement speed and maximum rotation speed of the virtual mobile robot. 2) Turning radius of the virtual mobile robot. 3) Grid resolution of the route planning grid.
- the route planning grid may be related to a route planning method called cost-based route planning described above.
- In cost-based route planning, the generated 2D map (top view) is divided into a discrete grid called the route planning grid, with a grid size of, for example, 100 mm (usually sufficient), and a cost is allocated to each square. A minimal code sketch of this scheme is given after this list.
- the cost of a free (empty) square may be 0.1.
- The cost of squares containing walls and other fixtures is infinite, so the cost is too high for the virtual mobile robot ever to enter those squares.
- The speed (fast or slow) of the virtual mobile robot. For example, the movement speed of the virtual mobile robot may be set at a particular waypoint and/or at a point or area on the map. In some areas, it may be necessary to slow down the virtual mobile robot for traffic control, safety, or other purposes.
- Padding and gaps (at high and low speed) of the virtual mobile robot. This refers to how much clearance or distance the virtual mobile robot needs to maintain from objects in the environment.
- A sector is, for example, a zone or area set on the map by the user by selecting grid squares.
- the line is a boundary line set on the map by the user.
- “resistance” means the cost for crossing a sector and / or line having resistance set on the map. This is a value that defines how much the virtual mobile robot resists driving a particular sector and / or crossing a line and finds an alternative path.
- The cost of passing through a sector or crossing a line with resistance may be calculated by multiplying the base cost by the resistance value. For example, a normal area or sector is given a cost value of 1, and setting the resistance value to 1 turns the resistance behaviour off. 9) A preferred line for the virtual mobile robot to cross or follow, a preferred sector for the virtual mobile robot to enter, and/or a preferred direction for the virtual mobile robot to move. For example, the same resistance value may be used to indicate the level of priority, or another value may be used to indicate the level of priority.
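- The following is the minimal sketch of cost-based global route planning referenced in the list above. It uses the example costs described earlier (0.1 for free squares, infinity for squares containing walls) and applies any resistance value as a multiplier on the base cost; a plain Dijkstra search is used purely for illustration and is not necessarily the planner employed in the disclosure.

```python
import heapq
import math

FREE_COST, WALL_COST = 0.1, math.inf

def cell_cost(grid, resistance, cell):
    """Cost of entering a grid square: base cost times any resistance set on it."""
    r, c = cell
    base = WALL_COST if grid[r][c] else FREE_COST
    return base * resistance.get(cell, 1.0)   # a resistance of 1 means "turned off"

def plan_global_route(grid, resistance, start, goal):
    """Dijkstra search over the route planning grid.

    grid       : 2D list of bools, True where a wall/fixture occupies the square
    resistance : dict mapping (row, col) -> resistance multiplier
    Returns the optimum path from start to goal as a list of cells, or [] if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = [(0.0, start)]
    came_from, best = {start: None}, {start: 0.0}
    while frontier:
        cost, cur = heapq.heappop(frontier)
        if cur == goal:
            break
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            step = cell_cost(grid, resistance, nxt)
            new_cost = cost + step
            if math.isfinite(step) and new_cost < best.get(nxt, math.inf):
                best[nxt], came_from[nxt] = new_cost, cur
                heapq.heappush(frontier, (new_cost, nxt))
    path, node = [], goal
    while node is not None:           # walk back from the goal to the start
        path.append(node)
        node = came_from.get(node)
    return path[::-1] if path and path[-1] == start else []
```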
- the user may control / adjust the localization parameters of one or more sensors according to the user's request.
- the unit is shown in millimeters (mm), but other measurement units can be applied in the same manner.
- Examples of localization parameters are as follows.
- -A flag that changes the number of samples based on the localization score. When enabled, the number of samples can be reduced when the virtual mobile robot is moving and the localization score is high. As a result, the load on the central processing unit (CPU) or the computer of this device is reduced.
- -A parameter that adjusts the interval (in degrees) between laser readings used for localization. This parameter is set so that readings that are too close to each other are discarded. This reduces the load on the CPU or computer of this device.
- -Grid resolution (mm) of the map grid created during laser localization. This is related to the scan resolution. Decreasing this value improves the localization accuracy but increases the memory usage (RAM usage, etc.) of the device.
- -Error in millimeters of linear movement. This refers to the permissible error (mm) of the device's linear odometry reading. If the set value is too high, the sample poses will be too widely spread and the device will not be able to localize. If the set value is too low, the device may also fail to localize. -Error in degrees of rotation of the device. This refers to the permissible error (in degrees) of the device's odometry reading in the direction of rotation. If the set value is too high, the sample poses will be too dispersed and the device will not be able to localize. If the set value is too low, the device may also fail to localize.
- -Error in degrees of rotation per 1 mm of linear movement. This refers to the permissible rotation error (in degrees) of the robot per 1 mm of linear movement. If the set value is too high, the sample poses will be too widely spread and the device will not be able to localize. If the set value is too low, the device may also fail to localize. -Number of sample poses to use for localization. This is related to the scan resolution. Increasing this value increases the amount of localization computation; if the value is too low, the device cannot localize. -Distance (mm) the device must travel before localization is performed. This is related to how frequently the device localizes. Adjusting this parameter can reduce the load on the CPU or computer of the device.
- The device localizes only after it has traveled beyond the set distance value. -Angle (degrees) through which the device must turn before localization is performed. This is related to how frequently the device localizes. Adjusting this parameter can reduce the load on the CPU or computer of the device. The device localizes only after it has rotated beyond the set angle value.
- the light source described below means a light source mounted on the device or a light source separately prepared for cooperation with the device.
- -Minimum height (mm) of the light source from the ground. This value is an approximation and should be lower than the actual value. This value is used to set a valid range and eliminate false positives.
- -Maximum height (mm) of the light source from the ground. This value is an approximation and should be higher than the actual value. This value is used to set a valid range and eliminate false positives.
- -Minimum length (mm) of the light source. This value is an approximation and should be slightly lower than the actual value.
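- The parameters above could be grouped into a single configuration object along the following lines; the keys and default values are illustrative assumptions only, not the actual parameter names or defaults of the software.

```python
# Hypothetical localization / light-source settings mirroring the parameters above.
localization_params = {
    "adapt_samples_to_score": True,   # flag: fewer samples when the score is high
    "laser_angle_step_deg": 1.0,      # discard laser readings closer than this
    "grid_resolution_mm": 50,         # smaller = more accurate, more RAM
    "linear_error_mm": 10,            # permissible linear odometry error
    "rotational_error_deg": 2.0,      # permissible rotational odometry error
    "error_deg_per_mm": 0.05,         # rotation tolerance per 1 mm of travel
    "num_sample_poses": 500,          # more samples = more computation
    "min_travel_before_localize_mm": 100,
    "min_turn_before_localize_deg": 5.0,
}

light_source_params = {
    "min_height_mm": 200,   # approximate, set lower than the real value
    "max_height_mm": 600,   # approximate, set higher than the real value
    "min_length_mm": 50,    # approximate, slightly lower than the real value
}
```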
- the user may set the virtual mobile robot to perform a specific task.
- the duration of the task may be set.
- A simulation (e.g., in the form of an animated video) of the virtual mobile robot may then be provided.
- A virtual mobile robot may be configured to move and operate in the environment as if it were a real mobile robot when: a map of the environment has been generated; waypoints specific to the virtual mobile robot have been set by the user on the map; how the virtual mobile robot moves between the waypoints has been set; and the task or action to be performed at each waypoint of the work site has been set. As the virtual mobile robot operates, information on its work efficiency, such as cycle time and operating rate, is calculated.
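- Putting the preceding conditions together, a simulation loop could be organised roughly as below; the class and method names (move_along, perform, wait_for_dispatch, etc.) are hypothetical, and the sketch only illustrates how waypoints, tasks, and the efficiency figures (cycle time, operating rate) relate to one another.

```python
import time

def simulate_cycle(robot, waypoints, planner):
    """Drive a virtual mobile robot through its waypoints once and report efficiency.

    robot     : virtual mobile robot with position, move_along() and perform() methods
    waypoints : ordered list of (position, task) pairs set by the user on the map
    planner   : global/local route planner built on the generated environment map
    """
    cycle_start = time.monotonic()
    busy = 0.0
    for position, task in waypoints:
        path = planner.global_route(robot.position, position)  # optimum route on the map
        t0 = time.monotonic()
        robot.move_along(path)            # local detours handled inside move_along
        if task is not None:
            robot.perform(task)           # e.g. load exchange, charging, picking
        busy += time.monotonic() - t0
        robot.wait_for_dispatch()         # idle time, not counted as busy
    cycle_time = time.monotonic() - cycle_start
    return {"cycle_time_s": cycle_time,
            "operating_rate": busy / cycle_time if cycle_time else 0.0}
```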
- the user may select and load a load from a list of various loads.
- The user may also import a CAD model of a customized load desired by the user and/or a CAD model of a customized mobile robot required by the user.
- The software of the device may be configured to create two or more virtual mobile robots and/or one or more objects for simulating traffic conditions when two or more virtual mobile robots are deployed.
- Each virtual mobile robot may be configured according to the contents described above for the virtual mobile robot.
- the one or more objects created may be stationary objects such as furniture, building structures, and / or equipment placed in place. Also, the one or more objects created may be dynamic objects such as people, mobile devices, and / or other mobile objects. Traffic data may be collected from simulations for traffic control management.
- the software may also be configured to allow the user to view a simulation of one or more virtual mobile robots and any created object in the environment.
- FIG. 1 shows an example of the system structure of the system 100 including the above-mentioned device.
- The present apparatus is designated by reference numeral 102.
- The device 102 exists in the work site environment, collects information for planning the placement of mobile robots at the work site, and performs a simulation of the movement and operation of a virtual mobile robot working at the work site. Specifically, the device 102 performs the above-mentioned tasks a) to e).
- The device 102 is a smartphone having a touch screen display and is configured to execute the mobile application 103.
- The device 102 comprises one or more sensors 109 (3D scanners/sensors), including a sensor for a LiDAR system, to perform the mapping and localization processing 107 in the worksite environment. The one or more sensors 109 also include a camera.
- The device 102 provides a graphical user interface 105 (GUI) that displays to the user 120 the environment acquired by the camera as a background, and displays the virtual mobile robot created by the application 103 superimposed on or added to that background. The GUI 105 further provides the following functions to the user 120.
- the device 102 cooperates with a cloud system 104 that can use virtual machine (VM) technology.
- cloud services can use virtual machines to provide virtual application resources to multiple users at once.
- a virtual machine (VM) may be a computing resource that uses software instead of a physical computer to execute programs and deploy applications.
- One or more virtual "guest” or “client” machines may run on a physical "host” machine. Each virtual machine runs its own operating system and can function separately from other VMs, even if all VMs are running on the same host.
- the device 102 cooperates with a plurality of servers in the cloud system 104 described below.
- multiple servers may handle the work scope of each server described below.
- The cloud system 104 has a storage server 106 that manages the following cloud storage: 1) data (image, text, voice, etc.) of the loads described above that the user can select in the device 102 for mounting on the virtual mobile robot, and 2) data of CAD models of loads, user-customized loads, virtual mobile robots, user-customized virtual mobile robots, and/or any created objects that can be displayed in the environment.
- The device 102 downloads the above-mentioned data from the server 106 or uploads it to the server 106 as needed during operation. In another example, such data may be stored locally in a local memory accessible by the device 102. The server 106 may also be configured to manage the user accounts of a plurality of users of devices such as the device 102 and to authenticate each user who logs in to use the application 103.
- the cloud system 104 further includes a planning server 108 for executing autonomous intelligent vehicle (AIV) mapping software (also called a mobile planner).
- the planning server 108 creates a map of the work site from the mapping inputs provided by the device 102 and functions as a control center for managing the AIV configuration for the virtual mobile robot.
- the planning server 108 can support mapping and setting of a plurality of devices such as the device 102 and an existing mobile robot.
- the AIV refers to a virtual mobile robot created by the device 102.
- the mobile planner is a server-based application. Therefore, the mapping process for creating the map is performed not by the device 102 but by the server 108.
- the device 102 simply provides the mapping input to the server 108, which returns the created map.
- the mobile planner may be a local application running on device 102.
- the cloud system 104 is provided with a simulation server 110 for executing software (also called a fleet manager) that performs processing necessary for simulating the operation of a virtual mobile robot in a field environment.
- the fleet manager can control real and / or virtual mobile robots in the workplace environment.
- Fleet Manager provides a centralized configuration platform for configuring multiple real and / or virtual mobile robots in the environment.
- Fleet Manager also provides a central map management platform for multiple real and / or virtual mobile robots.
- User settings made centrally using the fleet manager and map information centrally collected by the fleet manager can be automatically propagated to the multiple real and/or virtual mobile robots.
- The fleet manager also manages queues of jobs, matches jobs to available real and/or virtual mobile robots, and dispatches multiple real and/or virtual mobile robots in the environment to perform the assigned jobs.
- the fleet manager also manages the traffic of multiple real and / or virtual mobile robots to reduce collisions and ensure efficient movement.
- Real and / or virtual mobile robot position and orbit information is shared by the fleet manager.
- The fleet manager functions as a single point of integration and communication for the software clients of multiple devices identical to the device 102 and/or real mobile robots in the environment, and for other automated devices (other than the real and/or virtual mobile robots).
- the fleet manager is a server-based application.
- the fleet manager may be provided as a local application in which some or all of the functions of the fleet manager are performed on the device.
- Even if the user does not move the device 102 along with the virtual mobile robot, and localization for allowing the virtual mobile robot to avoid (i.e., detour around) dynamic objects is therefore not performed, the simulation of the virtual mobile robot is still ongoing and can be displayed based on real-time updates from the fleet manager.
- The fleet manager can provide real-time updates on the traffic conditions at the worksite based on updates from other real and/or virtual mobile robots located at the worksite. Dynamic objects detected by other real and/or virtual mobile robots can be recorded and mapped for the virtual mobile robot even if the device 102 is not nearby to perform localization for the virtual mobile robot.
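- The sharing of dynamic-object detections through the fleet manager could be sketched roughly as follows; the class is a simplified stand-in for the fleet manager described above, and the method names are assumptions made for illustration only.

```python
from collections import defaultdict

class SharedObstacleMap:
    """Simplified fleet-manager-side record of dynamic objects reported by robots.

    Each real or virtual mobile robot (or device) reports the obstacles it detects;
    the merged view is pushed to every subscriber, so a virtual mobile robot can
    "see" obstacles even when the device 102 is not nearby to localize it.
    """

    def __init__(self):
        self._reports = defaultdict(dict)   # robot_id -> {grid cell: timestamp}
        self._subscribers = []

    def subscribe(self, callback):
        """Register a robot/device that should receive merged obstacle updates."""
        self._subscribers.append(callback)

    def report(self, robot_id, cells, timestamp):
        """Record obstacle grid cells detected by one robot and broadcast the merge."""
        for cell in cells:
            self._reports[robot_id][cell] = timestamp
        merged = {c for per_robot in self._reports.values() for c in per_robot}
        for callback in self._subscribers:
            callback(merged)                # e.g. update each robot's local planner
```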
- the cloud system 104 comprises a manufacturing execution system (MES) server 112 that is typically used to drive manufacturing operations by managing and reporting factory activities when events occur in real time.
- the MES server 112 can be used in the manufacturing industry to track and document conversions from raw materials to finished products.
- the MES server 112 can provide information to help manufacturing decision makers understand how the current state of the factory can be optimized to improve production output.
- the MES server 112 can operate in real time to allow control of multiple elements of the production process (eg, inputs, personnel, machines, and support services).
- The MES server 112 is used to manage, coordinate, monitor, and control manufacturing processes performed by real and/or virtual devices/machines in the environment and/or by real and/or virtual mobile robots in the environment.
- the mobile planner, fleet manager, and MES server 112 can communicate with each other to support multiple devices such as device 102.
- FIG. 2 is a flowchart showing an example of a method executed by the device 102 of FIG. 1, which is a smartphone equipped with a camera and a LiDAR system, in order to collect information on the work site.
- This method also provides a simulation of the movement and operation of a virtual mobile robot.
- the user activates the mobile application 103 of FIG. 1 of the device 102.
- the user may be required to log in to the application 103 and be successfully authenticated by the server communicating with the application before being given access to use the application 103.
- a menu may be displayed on the GUI 105 of FIG. 1 so that the user can select a function to be executed. As one of the functions, it is possible to perform mapping to create a map of the work site.
- In step 204, the application 103 checks whether mapping is required to create a map, when the user chooses to perform mapping or when the application 103 is started. If a map of the work site already exists (e.g., uploaded or updated by the mobile planner or fleet manager, or previously created), mapping is not required and the process proceeds to step 222.
- the device 102 may communicate with the mobile planner or the fleet manager to acquire an existing map or acquire update information of the existing map. As an example, application 103 may prompt the user to proceed with remapping if the map already exists.
- If mapping is required, 3D mapping is started in step 206. In this example, the user must move the device 102 around the work site to scan the environment and create a 3D map of the environment.
- Stationary and/or dynamic objects present in the environment may be detected by the device 102 and recorded in the 3D map during the mapping process.
- the device 102 gets inputs from one or more sensors 109 in FIG. 1 for 3D mapping.
- A 2D map (top view of the environment) is then created from the 3D map.
- the method ends in step 224 after the 2D map is created and the user chooses to end.
- The user may issue instructions via the GUI 105 to perform other functions (i.e., step 222), such as simulating a virtual mobile robot, or the process may proceed to step 208, where the user is asked whether the user wishes to edit the 2D map.
- Map editing may include adding waypoints, setting goals to be achieved at waypoints, marking permitted/disallowed zones, editing route planning parameters, and setting the tasks of the simulated virtual mobile robot, corresponding respectively to inputs (i), (ii), (iii), and (v) of the above-mentioned six types of user inputs (i) to (vi). Editing the map may also include adding stationary/dynamic objects to specific locations and/or areas on the map.
- The process may then proceed to step 212 after the map has been edited in step 210.
- In step 212, the virtual mobile robot is created so as to appear on the display of the device 102 against the background of the image of the work site environment acquired by the camera. In step 214, the virtual mobile robot is then set to move autonomously according to the goals/tasks set in the edited map.
- Localization of the device 102 can be performed by the user carrying the device and following the virtual mobile robot, which is set to move automatically according to the set goals/tasks. While moving with the virtual mobile robot, the device 102 detects dynamic objects in the environment that are not on, or recorded on, the map and causes them to be recorded on the map in step 216.
- In step 218, the virtual mobile robot moves so as to avoid (that is, detour around) the dynamic object.
- A local route plan is determined based on the inputs of the one or more sensors during localization. For example, when an object is found on the globally planned path (in front of the virtual mobile robot), the virtual robot stops, changes course, and moves on toward the next goal. If no such dynamic object is detected, the virtual mobile robot follows the global route planned in step 220 and moves optimally from one waypoint to another without needing to account for dynamic objects.
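- The decision between following the global route and taking a local detour (steps 216 to 220) can be summarised with a sketch like the one below; the robot and planner helper names are hypothetical, and detouring is reduced to re-planning around cells reported as blocked.

```python
def next_move(robot, global_path, sensed_obstacles, planner):
    """Follow the global route unless a detected object blocks the next segment.

    global_path      : ordered list of grid cells from the current global plan
    sensed_obstacles : set of grid cells occupied by objects detected during localization
    """
    upcoming = [cell for cell in global_path if cell not in robot.visited]
    if not upcoming:
        return None                                   # goal reached
    if upcoming[0] in sensed_obstacles:
        # Object found in front of the robot: stop, then plan a local detour
        # around the blocked cells and rejoin the route at the goal cell.
        robot.stop()
        return planner.local_route(robot.position, upcoming[-1],
                                   blocked=sensed_obstacles)
    return upcoming[0]                                # keep following the global route
```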
- Since information on stationary objects is collected at the time of map creation, the movement of the virtual mobile robot during autonomous navigation takes the existence of the stationary objects into account.
- The method ends in step 224 when the simulation is terminated by the user or when the goals/tasks set for the virtual mobile robot have been achieved.
- the virtual mobile robot is configured to appear in the 2D map at step 214 and navigate autonomously according to the set goal / task.
- The user then views on the display the virtual mobile robot automatically performing autonomous navigation and moving around within the acquired environment to achieve the goals/tasks.
- the virtual mobile robot detects only the dynamic objects already mapped or recorded on the map in step 216 when moving in the environment.
- the virtual mobile robot simulates the localization process and moves based on the local route plan in step 218 to avoid or bypass the mapped dynamic object. For example, when an object is found within a globally planned path (in front of a virtual mobile robot), the virtual robot stops, changes course, and moves toward the next goal.
- In step 220, the virtual mobile robot moves from one waypoint to another in an optimal manner without needing to account for dynamic objects.
- In step 222, instead of the virtual mobile robot navigating autonomously based on set goals/tasks (as in step 214), the user uses the controls provided by the GUI 105, such as a virtual movement pad or joystick, to manually control and drive the virtual mobile robot around the work site. The user can choose to move the device 102 to follow the virtual mobile robot as it is driven. In this case, localization by the device 102 with respect to the virtual mobile robot is performed, and the device 102 can detect unmapped dynamic objects (objects interfering with the movement of the virtual mobile robot) to be mapped or recorded. If the user chooses not to move the device 102 and not to follow the virtual mobile robot, the virtual mobile robot is driven in the environment as acquired during the last map creation (or as last updated by the fleet manager).
- the user may add a load to be mounted on the virtual mobile robot in step 222 and see a simulation of the virtual mobile robot operating with the load.
- One or more on-screen buttons or menu options may be provided to turn audio on or off for the virtual mobile robot and/or the load. When audio is on, a realistic simulation is provided that includes the sounds emitted by the virtual mobile robot and/or the load.
- FIG. 3 is a diagram showing the graphical user interface 105 of FIG. 1.
- FIG. 3 shows the displayed environment of the work site acquired by the camera of the device 102 of FIG. 1, with objects added onto the environment.
- the following description of FIG. 3 refers to the components of FIG. 1 and the steps of FIG.
- The user directs the device 102 so that an image of the ground 302 is acquired by the camera of the device 102; this may be required to initialize the augmentation or simulation process in an initialization step.
- the purpose of this initialization step is to calibrate one or more sensors 109 to prepare for localization, and if an environment map has already been created, detect and select the position of device 102 in the environment map.
- A further purpose of the initialization step is to allow the user to place the virtual mobile robot at a designated place.
- the mesh 304 composed of a plurality of spots is added onto the ground 302 when the device 102 succeeds in detecting the ground 302 with the help of one or more sensors 109.
- Stationary objects such as the wall 306 shown in FIG. 3 have graphics added onto them in the displayed environment. These graphics highlight the wall 306 for clearer presentation on the display. These graphics may also serve as markers to guide the movement of the virtual mobile robot; for example, the virtual mobile robot may be prevented from colliding with or crossing a marked point, line, or area.
- An "arrangement" button 308 pressed by the user is provided to start the process of arranging the virtual mobile robot at a position on the ground 302 selected by the user.
- FIG. 4 is a cut-out screen view showing a graphic in which the virtual mobile robot 402 is placed on the meshed ground 302 of FIG. 3.
- a ring 404 is displayed around the virtual mobile robot 402 on the ground 302.
- The ring 404 can be used to indicate the gap or distance that the virtual mobile robot 402 should maintain from adjacent objects in its vicinity. This gap or distance can be customized.
- the ring 404 acts like a cursor and is displayed when the virtual mobile robot 402 is selected by the user. When the virtual mobile robot 402 is selected, the main body of the virtual mobile robot 402 may be highlighted as shown in FIG.
- FIG. 5 shows an example of a screen view 500 (sideways) including the virtual mobile robot 402 after the virtual mobile robot 402 of FIG. 4 is placed on the ground 302 at a position selected by the user.
- The screen view 500 may be displayed in step 222 of FIG. 2.
- The screen view 500 includes a back button 508 and a forward button 510 for toggling between pages of the graphical user interface 105 of FIG. 1.
- a "load selection" button 506 for the user to click to display a list of loads and for the user to select a load from the list and mount it on the virtual mobile robot 402.
- the virtual mobile robot 402 is drawn so as not to be loaded with a load in FIG.
- the parameter 502 is displayed in the upper right of the screen view 500.
- The screen view 500 includes a virtual joystick 504 that can be operated, via touch on the touch screen display of the device 102 of FIG. 1, to drive the virtual mobile robot 402 around the work site.
- the user-driven session of the virtual mobile robot 402 may be recorded as a moving image for future reference.
- FIGS. 6 and 7 are diagrams showing an example of the list of loads described with reference to FIG. 5, from which the user selects a load to be mounted on the virtual mobile robot 402 of FIG. 4.
- Loads included in the list include a conveyor top 602 for transporting objects, a mobile manipulator (MoMa) 604 for moving objects, a cart robot top 702 designed to hold specific objects, and an ultraviolet (UVC) top 704.
- the mobile manipulator 604 may have six or more robot motion axes.
- the mobile manipulator 604 includes a virtual robot arm that animates to perform a picking operation using a predetermined virtual gripper attached to the mobile manipulator 604.
- the design of the virtual gripper may be incorporated as a CAD model in the same manner as the CAD model of all loads.
- FIGS. 8 to 11 show four screen views of the graphical user interface 105 of FIG. 1.
- the four virtual mobile robots 800, 900, 1000, and 1100 are shown in the four screen views of FIGS. 8 to 11, respectively.
- Each virtual mobile robot carries a different load.
- the virtual mobile robot 800 is equipped with a conveyor top.
- the virtual mobile robot 900 is equipped with a cart robot top.
- the virtual mobile robot 1000 is equipped with a user control panel top.
- the virtual mobile robot 1100 is equipped with a UVC top.
- Each of the four screen views shows that the load list 1002 of FIG. 10 for the user to select a load to be mounted on the virtual mobile robot can be placed in one window of the graphical user interface 105.
- the intensity level of the ultraviolet rays of the UVC top 704 in FIG. 7 may be adjusted. Further, the ultraviolet rays emitted by the UVC top 704 may be turned on / off.
- FIGS. 12 and 13 are screen views of a virtual mobile robot 1200 equipped with a UVC top 1204. FIGS. 12 and 13 show a load simulation option or button 1202 that can turn the irradiation of ultraviolet light 1304 from the UVC top 1204 on or off. In FIG. 12 the ultraviolet light is switched off, and in FIG. 13 it is switched on. FIGS. 12 and 13 illustrate that each load is provided with its own animation for realistic simulation.
- Radiation or light emission that is invisible to humans, such as infrared rays, laser beams, and/or other electromagnetic waves, may also be displayed or animated. Navigation by real or virtual mobile robots during the simulation may take such radiation or emission into account.
- FIG. 14 shows an example of a 3D map 1402 of a work site environment that can be created by the LiDAR system of the device 102 of FIG. 1 during the mapping process, and an example of a 2D map 1404 of the work site environment (for example, a file in dxf format) that can be converted from the 3D map 1402.
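- A conversion from a 3D map to a 2D top-view map of the kind shown in FIG. 14 could be approximated as below; this sketch assumes the 3D map is available as a LiDAR point cloud in millimetres, and the slicing heights and grid size are illustrative choices rather than values taken from the disclosure.

```python
import numpy as np

def points_to_topview(points_mm, cell_mm=100.0, z_min=100.0, z_max=1800.0):
    """Project 3D LiDAR points onto a 2D occupancy grid (top view of the environment).

    points_mm: (N, 3) array of x, y, z coordinates in millimetres
    Only points within the robot-relevant height band [z_min, z_max] are kept,
    so the floor and ceiling do not appear as obstacles in the 2D map.
    """
    pts = points_mm[(points_mm[:, 2] >= z_min) & (points_mm[:, 2] <= z_max)]
    if len(pts) == 0:
        return np.zeros((1, 1), dtype=bool)
    xy = np.floor(pts[:, :2] / cell_mm).astype(int)
    xy -= xy.min(axis=0)                          # shift so indices start at zero
    grid = np.zeros(tuple(xy.max(axis=0) + 1), dtype=bool)
    grid[xy[:, 0], xy[:, 1]] = True               # mark cells containing any point
    return grid                                    # True = occupied in the top view
```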
- FIG. 15 shows the cloud system 104 of FIG. 1 in which the servers 108 and 110 of FIG. 1 provide a mobile planner and a fleet manager 1504, respectively.
- FIG. 15 shows a screen view 1502 of a simulation of a virtual mobile robot displayed on the display of the device 102 of FIG.
- the screen diagram 1502 is provided by the graphical user interface 105 of FIG.
- The 5G network helps to reduce latency during data communication and provides more reliable and error-free data communication.
- As a result, a smoother simulation can be provided.
- the device 102 is also known as an augmented reality (AR) device used to collect data in a worksite environment.
- an AR device is used to collect data.
- the user may manually input the data to be collected in the AR device, or the AR device may be made to automatically collect the data by using one or more sensors 109 (sensor set) in FIG.
- The user can carry the AR device and follow the movement of the virtual mobile robot in order to localize the virtual mobile robot via the sensor set of the AR device.
- Alternatively, the user may not move the AR device along with the virtual mobile robot; instead, the virtual mobile robot may be made to navigate automatically on a predetermined or existing map on which stationary and/or dynamic objects are updated by other AR devices, other devices used to monitor objects in the map, and/or real mobile robots present in the environment.
- the AR device transmits the data collected in step 1508 to the simulation software (which may be a fleet manager or other software) as a simulation task via the 5G network.
- the simulation software is run on the simulation server 110 of FIG.
- the simulation is automated, and the data for displaying the simulation is transmitted to the AR device and displayed.
- Simulation automation refers to the case where the virtual mobile robot is set to navigate autonomously by itself (for example, step 214 in FIG. 2).
- The user may instead select manual simulation.
- In that case, the user drives the virtual mobile robot (for example, step 222 in FIG. 2) to collect data.
- The collected data transferred to the simulation software may include a precise 2D map, precise goal positions, and a route plan for the autonomous intelligent vehicle (AIV).
- Here, the AIV is the virtual mobile robot, and the route plan may be a global route plan or a local route plan.
- The collected data may also include the AIV job schedule, AIV travel times, bottleneck locations (e.g., locations with heavy traffic, locations where object avoidance is difficult, locations where jobs are executed slowly or inefficiently, etc.), and the AIV usage rate (AIV usage statistics).
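- The collected results listed above could be bundled into a single record along the following lines; the field names are hypothetical and serve only to show the kind of data returned by the simulation server.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SimulationResult:
    """Illustrative container for the data returned after an AIV simulation run."""
    map_2d_file: str                                   # e.g. path to the dxf top view
    goal_positions: List[Tuple[float, float]]          # precise goal positions (mm)
    global_route: List[Tuple[int, int]]                # planned route as grid cells
    job_schedule: List[str] = field(default_factory=list)
    travel_time_s: float = 0.0
    bottlenecks: List[Tuple[float, float]] = field(default_factory=list)
    usage_rate: float = 0.0                            # AIV usage statistics
```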
- the AR device acquires the simulation result from the simulation server 110 via the 5G network.
- the simulation result may be displayed in real time on the display of the AR device.
- The simulation results may also be recorded so that the user can later view the pre-recorded simulation on the display of the AR device, to learn from the demonstration provided by the simulation and/or to study the simulation and provide suggestions for improvement.
- The AR device may be used to smoothly visualize not only one virtual mobile robot but also multiple virtual and/or real mobile robots operating in the worksite environment.
- One or more virtual mobile robots operating in the environment and/or each device controlling each real mobile robot may be configured to communicate data about themselves with a server running the fleet manager (e.g., 110 in FIG. 1).
- the fleet manager acts as a central traffic control device, providing data to each AR device and instructing it to display a graphic simulating traffic conditions in the environment.
- One or more virtual and/or real mobile robots appearing in the field of view of the AR device can be visualized as part of the environment, based on input from the fleet manager, along with all other visualizations in the environment.
- the virtual mobile robot behaves as if the virtual mobile robot were a real object in the simulation.
- One of the devices described in the embodiments of the present disclosure (e.g., device 102), or the server described with reference to FIG. 1, may have the following components that communicate electronically over a bus:
- 1. a display;
- 2. non-volatile memory and/or a non-transitory computer-readable medium;
- 3. random access memory (hereinafter referred to as "RAM");
- 4. N processing components ("one or more controllers", "one or more processors", "one or more central processing units", etc.);
- 5. a transceiver component containing N transceivers for internet/intranet and/or wireless network communication;
- 6. user controls, i.e., user input devices;
- 7. an image acquisition component such as a camera (item 7 is optional for the server described with reference to FIG. 1);
- 8. an audio signal acquisition component (e.g., a microphone);
- 9. audio speakers;
- 10. one or more sensors and/or components intended for navigation/area mapping; and
- 11. an input/output interface for connecting to user input devices (mouse, joystick, keyboard, sensors that detect user gestures, etc.), audio speakers, displays, image acquisition components, and/or audio signal acquisition components.
- The display generally operates to present graphical content to the user (e.g., the graphical user interface 105 of FIG. 1) and may be realized by any of a variety of displays (e.g., CRT, LCD, HDMI, micro-projector, OLED display).
- the display may be a touch screen. In this case, the touch screen is part of the user control, i.e. part of the user input device.
- non-volatile memory functions to store (eg, permanently store) data and executable code that includes code associated with functional components of the mobile platform.
- The non-volatile memory contains boot loader code, modem software, operating system code, file system code, and other code well known to those of skill in the art, which are not shown for simplicity.
- The non-volatile memory may be implemented by flash memory (e.g., NAND or NOR memory), although other types of memory can of course also be used. Although it is possible to execute code from the non-volatile memory, the executable code in the non-volatile memory is typically loaded into RAM and executed by one or more of the N processing components.
- Computer-readable media may include storage devices such as magnetic disks or optical discs, memory chips, or other storage devices suitable for interfacing with mobile platforms.
- the machine or computer readable medium may include a wired medium as exemplified by an Internet system or a wireless medium as exemplified by a wireless LAN (WLAN) system.
- the N processing components (or “one or more processors") associated with RAM generally operate to execute instructions stored in non-volatile memory in order to enable functional components.
- The N processing components may include a video processor, a modem processor, a DSP, a graphics processing unit (GPU), and other processing components.
- Transceiver components may include N transceiver chains, which may be used to communicate with external devices over a wireless network.
- Each of the N transceiver chains may represent a transceiver associated with a particular communication scheme.
- each transceiver may support protocols specific to local area networks, cellular networks (WIFI networks, CDMA networks, GPRS networks, UMTS networks, 5G networks, etc.), and other types of communication networks.
- the transceiver component communicates with the communication network to determine the location of the connected device.
- The one or more sensors and/or components for navigation/area mapping may include, or may be, an image acquisition component for acquiring images (photos or videos).
- the one or more sensors may be laser sensors (eg, LiDAR scanners), infrared and / or ultrasonic sensors, or may include them.
- A device for simulating a mobile robot at the work site (e.g., 102 in FIG. 1) includes one or more sensors (e.g., 109 in FIG. 1) that map the environment at the work site (this mapping covering an entire map or a partial map), and a processor.
- The processor is configured to display on a display an image of the environment acquired by the one or more sensors, perform environment mapping based on the inputs of the one or more sensors, detect one or more objects in the environment (including stationary objects and/or dynamic objects), and display a virtual mobile robot (e.g., 402 in FIGS. 4 and 5, 800 in FIG. 8, 900 in FIG. 9, 1000 in FIG. 10, 1100 in FIG. 11, 1200 in FIGS. 12 and 13) in the environment displayed on the display.
- the virtual mobile robot is configured to avoid (ie, detour) one or more objects detected in the environment as the virtual mobile robot moves within the displayed environment.
- the one or more objects detected in the environment may include one or more objects (dynamic objects) that can move in the environment.
- The device may be operable to receive user input to add one or more waypoints within the displayed environment and to navigate the movement of the virtual mobile robot according to the one or more waypoints.
- the device may be operable to set a task to be executed by a virtual mobile robot at any of one or more waypoints and to display graphics for simulating the execution of the task.
- Maps may be generated during environment mapping.
- The device can be configured to send the map, and information about one or more detected objects in the map, to one or both of the servers (e.g., 108 and/or 110 in FIG. 1, the mobile planner and the fleet manager) for recording.
- The server is configured to receive input from one or more devices present in the environment (e.g., any devices with mapping/position/object-detection capabilities, other devices similar to this device, real or virtual mobile robots, etc.) and to update the map and/or the information of one or more detected objects in the map, so that the device can use the updated map and/or the updated information of the one or more objects in the map.
- the map may be a three-dimensional map (for example, 1402 in FIG. 14).
- the device may be operable to convert the 3D map into a 2D map (e.g., 1404 in FIG. 14) and send the 2D map to the server (a minimal conversion sketch follows this overview).
- the device may be operable to transmit data of one or more movement routes (for example, local route planning) determined by the virtual mobile robot to the server.
- the device may be operable to send data to a server (e.g., 110 in FIG. 1) that processes the data for displaying a simulation of the movement of the virtual mobile robot, and to receive data streamed from the server over a 5G network so that the simulation of the virtual mobile robot can be displayed on the display in real time.
- the server may be configured to receive position data of the device and one or more other devices operating in the environment and control traffic in the environment based on the received position data.
- the simulation of the movement of the virtual mobile robot may take the traffic conditions into account so that collisions between the device and the one or more other devices are suppressed.
- the apparatus may be operable to process a captured image of the environment directed at the ground in the environment during the initialization process, display a graphic indicator (e.g., 304 in FIG. 3) on the ground shown in the display, and display the graphic of the virtual mobile robot after the initialization process is completed.
- the device may be operable to receive user input for selecting one or more zones in the displayed environment that the virtual mobile robot is allowed to enter and/or zones that the virtual mobile robot is not allowed to enter.
- the device may be operable to receive user input for selecting a payload having a specific function at the work site to be mounted on the virtual mobile robot (for example, 602 and 604 in FIG. 6, 702 and 704 in FIG. 7, 1004 in FIG. 10, and 1204 in FIG. 12), and to enable simulation of the virtual mobile robot with the mounted payload.
- the payload may be a mobile manipulator supporting robotic motion with six or more axes, and the mobile manipulator may be configurable to simulate the execution of one or more production tasks.
- the device may be operable to estimate the work cycle time of the virtual mobile robot and/or utilization information of the virtual mobile robot based on the simulation of the movement of the virtual mobile robot.
- the device may be operable to receive user input for setting one or more operating parameters (e.g., route planning parameters) for the virtual mobile robot.
- the above-mentioned one or more operating parameters may include one or more of: the movement speed and maximum movement speed of the virtual mobile robot; the rotation speed and maximum rotation speed of the virtual mobile robot; the radius of gyration of the virtual mobile robot; the acceleration/deceleration of the virtual mobile robot; the clearance between the virtual mobile robot and objects in the environment; the resistance level when the virtual mobile robot enters or crosses a line or area; and the priority level when the virtual mobile robot enters or crosses a line or area.
- Multiple virtual mobile robots may be generated to move within the displayed environment (e.g., at the request of the user, or by a fleet manager to simulate traffic conditions), and each virtual mobile robot may regard the others as objects to be avoided.
- the device may be operable to display graphics for one or more features invisible to the human eye.
- the above-mentioned one or more features may include one or more of: the field of view of a laser projected by the virtual mobile robot and/or another detected object capable of emitting a laser at the work site; and radiation emitted from a radiation source.
- the virtual mobile robot may be configured to avoid the one or more features while moving.
- the device may be operable to collect information related to the mapping of the virtual mobile robot (creation of a whole map or a partial map), its navigation (e.g., localization data), and/or its operation (e.g., cycle time, uptime, traffic conditions, and other work-efficiency-related parameters, which may be calculated automatically), either by setting the virtual mobile robot to navigate autonomously in the environment or by the user making inputs to move the virtual mobile robot within the environment.
- the device may be a handheld mobile device.
- a method of simulating a mobile robot at a work site includes displaying on a display an image of the work site environment acquired by one or more sensors (e.g., 109 in FIG. 1). A virtual mobile robot (e.g., 402 in FIGS. 4 and 5, 800 in FIG. 8, 900 in FIG. 9, 1000 in FIG. 10, 1100 in FIG. 11, and 1200 in FIGS. 12 and 13) is generated to move within the displayed environment. The method includes receiving user input for controlling the virtual mobile robot, and the virtual mobile robot may be configured to avoid one or more objects detected in the environment while navigating in the displayed environment.
- a system for simulating a mobile robot at the work site (e.g., 100 in FIG. 1) comprises the device described above in this overview (e.g., 102 in FIG. 1) and a cloud system (e.g., 104 in FIG. 1) including any of the servers described above in this overview (e.g., 108 and/or 110 in FIG. 2).
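To make the 3D-to-2D map conversion mentioned above concrete, here is a minimal sketch. The source does not specify the conversion algorithm; the sketch assumes the 3D map is available as a point cloud in millimeters and simply projects points within an assumed height band onto a top-view occupancy grid. The function name project_to_2d_grid and its parameters are illustrative, not from the source.

```python
import numpy as np

def project_to_2d_grid(points_mm, cell_size_mm=100.0, min_z_mm=50.0, max_z_mm=1800.0):
    """Project a 3D point cloud (N x 3 array, millimeters) onto a top-view occupancy grid.

    Points below min_z_mm (floor clutter) or above max_z_mm (ceiling) are ignored,
    so only obstacles at the robot's height mark grid cells as occupied.
    """
    pts = points_mm[(points_mm[:, 2] >= min_z_mm) & (points_mm[:, 2] <= max_z_mm)]
    if pts.size == 0:
        return np.zeros((1, 1), dtype=np.uint8), (0.0, 0.0)
    origin = pts[:, :2].min(axis=0)                       # top-view origin (x, y)
    idx = np.floor((pts[:, :2] - origin) / cell_size_mm).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1]] = 1                        # 1 = occupied cell
    return grid, tuple(origin)

# Example: only the point within the height band becomes an occupied cell.
cloud = np.array([[1200.0, 300.0, 10.0], [1500.0, 900.0, 700.0], [200.0, 100.0, 2500.0]])
grid, origin = project_to_2d_grid(cloud)
```

A grid produced in this way could then be serialized and sent to the server as the 2D map described above.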
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Mechanical Engineering (AREA)
- Robotics (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Electromagnetism (AREA)
- Automation & Control Theory (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Aviation & Aerospace Engineering (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Life Sciences & Earth Sciences (AREA)
- Ecology (AREA)
- Mathematical Physics (AREA)
- Educational Administration (AREA)
- Business, Economics & Management (AREA)
- Educational Technology (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
Description
1) Maximum movement speed and maximum rotation speed of the virtual mobile robot.
2) Radius of gyration of the virtual mobile robot.
3) Grid resolution of the route planning grid. The route planning grid may be related to the route planning scheme referred to above as cost-based route planning. Conceptually, for example, the "cost-based" aspect of this route planning scheme, or the grid resolution, divides the generated 2D map (top view) into discrete 100 mm squares called the route planning grid (usually a sufficient size) and assigns a cost to each square. The cost of a free (empty) square (a square with no obstacles nearby) may be 0.1. The cost of squares containing walls and other fixed objects is infinite, so the virtual mobile robot will never enter these squares because the cost is too high.
4) Fast and slow speeds of the virtual mobile robot. For example, the movement speed of the virtual mobile robot may be set at specific waypoints and/or at points or areas on the map. In some areas, the virtual mobile robot may need to be slowed down for traffic control, safety, or other purposes.
5) Padding and clearance of the virtual mobile robot (at high and low speeds). This refers to how much clearance or distance the virtual mobile robot needs to keep from objects in the environment.
6) Rotation speed of the virtual mobile robot at specific waypoints and/or at points or areas on the map.
7) Acceleration/deceleration of the virtual mobile robot at selected waypoints, and/or acceleration/deceleration when the user drives the virtual mobile robot within the environment.
8) Amount of resistance of resistive sectors and resistive lines. A sector is, for example, a zone or area set on the map by the user selecting squares of the grid. A line is a boundary set on the map by the user. Here, "resistance" means the cost of crossing a resistive sector and/or line set on the map. It is a value that defines how strongly the virtual mobile robot resists driving through a particular sector and/or crossing a line and instead finds an alternative route. The cost of passing through a resistive sector or line may be calculated by multiplying by its resistance value. For example, a normal area or sector is given a cost value of 1, and setting the resistance value to 1 turns the resistance behavior off.
9) Lines the virtual mobile robot preferably crosses or follows, sectors the virtual mobile robot preferably enters, and/or directions in which the virtual mobile robot preferably moves. For example, the same resistance value may be used to indicate a priority level, or a separate priority value may be used to indicate a priority level. (A minimal sketch of the cost grid from item 3) and the resistance values from item 8) follows this list.)
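Below is a minimal sketch of how the cost grid from item 3) and the resistance values from item 8) could be represented, assuming the figures given above (100 mm squares, a cost of 0.1 for free squares, infinite cost for walls, and a multiplicative resistance value where 1 means resistance is off). The class name CostGrid and its helper methods are illustrative only.

```python
import math

FREE_COST = 0.1          # cost of a free square (item 3 above)
WALL_COST = math.inf     # squares containing walls/fixed objects are never entered

class CostGrid:
    """Top-view route planning grid with 100 mm squares and per-square costs."""

    def __init__(self, width, height, cell_size_mm=100):
        self.cell_size_mm = cell_size_mm
        self.cost = [[FREE_COST] * width for _ in range(height)]
        self.resistance = [[1.0] * width for _ in range(height)]  # 1.0 = resistance off

    def mark_wall(self, row, col):
        self.cost[row][col] = WALL_COST

    def set_resistance(self, row, col, value):
        # A resistive sector multiplies the traversal cost (item 8 above).
        self.resistance[row][col] = value

    def traversal_cost(self, row, col):
        return self.cost[row][col] * self.resistance[row][col]

# Usage: a planner would prefer squares with the lowest traversal_cost().
grid = CostGrid(width=50, height=30)
grid.mark_wall(10, 12)
grid.set_resistance(5, 5, 4.0)   # the robot resists crossing this square
```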
- A flag that varies the number of samples based on the localization score. When enabled, the number of samples can be reduced while the virtual mobile robot is moving and the localization score is high. This reduces the load on the central processing unit (CPU) or computer of the apparatus.
- A parameter that adjusts the spacing (degree) required for the laser settings used for localization. This parameter is set so that readings that are too close to each other are discarded. This reduces the load on the CPU or computer of the apparatus.
- Grid resolution (mm) of the map grid created during laser localization. This is related to the scan resolution. Lowering this value improves localization accuracy but increases the memory usage (e.g., RAM usage) of the apparatus.
- Error in linear movement in millimeters. This refers to the tolerance (mm) of the linear odometer readings of the apparatus. If the setting is too high, the sample poses spread out too much and the apparatus cannot localize. If the setting is too low, the apparatus may not be able to localize.
- Degree of error with respect to the rotation angle of the apparatus. This refers to the tolerance (in degrees) of the rotational odometer readings of the apparatus. If the setting is too high, the sample poses are too dispersed and the apparatus cannot localize. If the setting is too low, the apparatus may not be able to localize.
- Degree of error in linear movement per 1 mm. This refers to the tolerance (in degrees) of the robot's rotation per 1 mm of linear movement. If the setting is too high, the sample poses spread out too much and the apparatus cannot localize. If the setting is too low, the apparatus may not be able to localize.
- Number of sample poses used for localization. This is related to the scan resolution. Increasing this value increases the amount of localization computation. If this value is too low, the apparatus cannot localize.
- Distance (mm) the apparatus moves before localizing. This is related to how frequently the apparatus localizes. Adjusting this parameter can reduce the load on the CPU or computer of the apparatus. The apparatus localizes only when it has moved beyond the listed distance value.
- Angle (in degrees) the apparatus turns before it starts localizing. This is related to how frequently the apparatus localizes. Adjusting this parameter can reduce the load on the CPU or computer of the apparatus. The apparatus localizes only when it has rotated beyond the listed angle value.
- Minimum height (mm) of the light source from the ground. This value is an approximation and should be lower than the actual value. It is used to set a valid range and eliminate false detections.
- Maximum height (mm) of the light source from the ground. This value is an approximation and should be higher than the actual value. It is used to set a valid range and eliminate false detections.
- Minimum length (mm) of the light source. This value is an approximation and should be slightly lower than the actual value.
- Maximum length (mm) of the light source. This value is an approximation and should be slightly higher than the actual value. (A minimal configuration sketch grouping these localization and light source parameters follows this list.)
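As a minimal sketch, the localization and light source detection parameters listed above could be grouped into a single configuration object as below. The field names and default values are illustrative assumptions, not values taken from the source.

```python
from dataclasses import dataclass

@dataclass
class LocalizationConfig:
    # Adaptive sample count: fewer samples while moving with a high localization score.
    vary_samples_with_score: bool = True
    num_sample_poses: int = 2000            # more poses = more localization computation
    map_grid_resolution_mm: float = 20.0    # lower = more accurate, more RAM
    linear_odometry_error_mm: float = 10.0
    angular_odometry_error_deg: float = 2.0
    rotation_error_deg_per_mm: float = 0.01
    localize_after_distance_mm: float = 150.0  # localize only after moving this far
    localize_after_angle_deg: float = 10.0     # ...or after turning this much
    # Light source detection range (approximate, to reject false detections).
    light_min_height_mm: float = 200.0
    light_max_height_mm: float = 2500.0
    light_min_length_mm: float = 50.0
    light_max_length_mm: float = 400.0
```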
- A map of the environment is generated.
- Waypoints specific to the virtual mobile robot are set on the map by the user.
- How the virtual mobile robot moves between the waypoints is set.
- Tasks or actions to be performed at the waypoints at the work site are set.
As the virtual mobile robot operates, information on its work efficiency, such as cycle time and utilization, is calculated (a rough estimation sketch follows).
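The following is a rough sketch of how cycle time and utilization could be derived from such a simulation, by summing travel time over route segments and task durations at waypoints. The helper names, route lengths, speeds, and task times are assumptions used only for illustration.

```python
def estimate_cycle_time(route_lengths_m, speed_mps, task_times_s):
    """Estimate one work cycle: travel time over each route segment plus task durations."""
    travel_s = sum(length / speed_mps for length in route_lengths_m)
    return travel_s + sum(task_times_s)

def estimate_utilization(busy_time_s, total_time_s):
    """Fraction of simulated time the virtual mobile robot spends travelling or working."""
    return busy_time_s / total_time_s if total_time_s else 0.0

cycle_s = estimate_cycle_time(route_lengths_m=[12.0, 8.5, 20.0],
                              speed_mps=0.9,
                              task_times_s=[15.0, 30.0, 10.0])
util = estimate_utilization(busy_time_s=cycle_s, total_time_s=cycle_s + 20.0)
```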
1) Controlling the movement of the virtual mobile robot, which can be done, for example, by providing a touch-operable virtual mobile pad or joystick on a touch screen (a minimal sketch of such a control mapping follows this list);
2) Selecting a payload to be mounted on the virtual mobile robot from a payload list;
3) Accepting the six types of user input (i) to (vi) described above;
4) Customizing the virtual mobile robot, any payload, any object displayed in the environment, and/or the displayed environment;
5) Adding any object that is displayed in the environment and taken into account during localization, automatic route calculation, and/or object detection; and
6) Uploading, for storage, CAD models of a user-customized payload, a user-customized virtual mobile robot, and/or any user-created object displayed in the environment.
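A minimal sketch of item 1) above, mapping a touch-screen virtual joystick deflection to a motion command for the virtual mobile robot, is shown below. The normalization and the speed limits are illustrative assumptions, not values from the source.

```python
def joystick_to_command(dx, dy, max_speed_mps=1.0, max_turn_dps=90.0):
    """Map a virtual joystick deflection (dx, dy in [-1, 1]) to a motion command.

    Pushing forward (dy) sets the linear speed; pushing sideways (dx) sets the turn rate.
    """
    dx = max(-1.0, min(1.0, dx))
    dy = max(-1.0, min(1.0, dy))
    return {"linear_mps": dy * max_speed_mps, "angular_dps": -dx * max_turn_dps}

cmd = joystick_to_command(dx=0.3, dy=0.8)   # slight right turn while moving forward
```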
1) Data (images, text, audio, etc.) on payloads selectable by the user on the device 102 for mounting on the aforementioned virtual mobile robot, and
2) Data of CAD models of payloads, user-customized payloads, virtual mobile robots, user-customized virtual mobile robots, and/or any created objects that can be displayed in the environment.
1. A display;
2. Non-volatile memory and/or a non-transitory computer-readable medium;
3. Random access memory (hereinafter, "RAM");
4. N processing components ("one or more controllers", "one or more processors", "one or more central processing units", etc.);
5. A transceiver component including N transceivers for Internet/intranet and/or wireless network communication;
6. User controls, i.e., user input devices;
7. An image acquisition component such as a camera (item 7 is optional for the server described with reference to FIG. 1);
8. Optionally, an audio signal acquisition component (e.g., a microphone);
9. Optionally, audio speakers;
10. One or more sensors and/or components for navigation/area mapping (item 10 is optional for the server described with reference to FIG. 1); and
11. An input/output interface for connecting to user input devices (a mouse, joystick, keyboard, sensors that detect user gestures, etc.), audio speakers, displays, image acquisition components, and/or audio signal acquisition components.
The one or more operating parameters described above may include one or more of:
the movement speed and maximum movement speed of the virtual mobile robot,
the rotation speed and maximum rotation speed of the virtual mobile robot,
the radius of gyration of the virtual mobile robot,
the acceleration/deceleration of the virtual mobile robot,
the clearance between the virtual mobile robot and objects in the environment,
the resistance level when the virtual mobile robot enters or crosses a line or area, and
the priority level when the virtual mobile robot enters or crosses a line or area.
(A minimal sketch grouping these parameters into a single structure follows this list.)
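As a minimal sketch, the operating parameters above could be collected into a single structure handed to the simulated virtual mobile robot. The field names and default values below are assumptions for illustration, not values from the source.

```python
from dataclasses import dataclass

@dataclass
class OperatingParameters:
    move_speed_mps: float = 0.7
    max_move_speed_mps: float = 1.8
    rotation_speed_dps: float = 45.0
    max_rotation_speed_dps: float = 120.0
    turn_radius_mm: float = 300.0
    acceleration_mps2: float = 0.5
    deceleration_mps2: float = 0.8
    clearance_mm: float = 250.0          # gap kept from objects in the environment
    resistance_level: float = 1.0        # 1.0 = resistance behavior off
    priority_level: float = 1.0          # preference for entering/crossing lines or areas
```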
The one or more features described above may include one or more of:
the field of view of a laser projected by the virtual mobile robot and/or another detected object capable of emitting a laser at the work site, and
radiation emitted from a radiation source.
The virtual mobile robot may be configured to avoid the one or more features while moving.
Claims (20)
- A device for simulating a mobile robot at a work site, comprising:
one or more sensors that map an environment of the work site; and
a processor,
wherein the processor is configured to execute instructions that operate the device to:
display, on a display, an image of the environment acquired by the one or more sensors,
perform mapping of the environment based on inputs of the one or more sensors,
detect one or more objects in the environment,
generate a graphic of a virtual mobile robot for display within the environment displayed on the display, and
receive user input for controlling movement of the virtual mobile robot within the displayed environment,
and wherein the virtual mobile robot is configured to avoid the one or more objects detected in the environment when the virtual mobile robot moves within the displayed environment.
- The device according to claim 1, wherein the one or more objects detected in the environment include one or more objects that are movable within the environment.
- The device according to claim 1 or 2, operable to receive user input for adding one or more waypoints within the displayed environment and to navigate the movement of the virtual mobile robot according to the one or more waypoints.
- The device according to claim 3, operable to set a task to be executed by the virtual mobile robot at any of the one or more waypoints and to display graphics for simulating the execution of the task.
- The device according to claim 4, wherein a map is generated during the mapping of the environment,
the device is operable to send the map and information on one or more detected objects in the map to a server for recording, and
the server is configured to receive inputs from one or more devices present in the environment, update the map and/or the information on the one or more detected objects in the map, and make the updated map and/or the updated information on the one or more objects in the map available to the device.
- The device according to claim 5, wherein the map is a three-dimensional map, and the device is operable to convert the three-dimensional map into a two-dimensional map and send the two-dimensional map to the server.
- The device according to claim 5 or 6, operable to send data of one or more movement routes determined by the virtual mobile robot to the server.
- The device according to any one of claims 1 to 7, operable to:
send data including position data to a server in order to process data for displaying a simulation of the movement of the virtual mobile robot, and
receive data streamed from the server in order to display the simulation of the virtual mobile robot on the display in real time via a 5G network,
wherein the server is configured to receive position data of the device and of one or more other devices operating in the environment and to control traffic in the environment based on the received position data,
the data streamed from the server includes traffic conditions of the device and the one or more other devices, and
the simulation of the movement of the virtual mobile robot takes the traffic conditions into account so that collisions between the device and the one or more other devices are suppressed.
- The device according to any one of claims 1 to 8, operable to:
process a captured image of the environment directed at the ground in the environment during an initialization process,
display a graphic indicator on the ground displayed in the display, and
display the graphic of the virtual mobile robot after the initialization process is completed.
- The device according to any one of claims 1 to 9, operable to receive user input for selecting one or more zones in the displayed environment that the virtual mobile robot is allowed to enter and/or one or more zones that the virtual mobile robot is not allowed to enter.
- The device according to any one of claims 1 to 10, operable to receive user input for selecting a payload having a specific function at the work site to be mounted on the virtual mobile robot, and to enable simulation of the virtual mobile robot with the mounted payload.
- The device according to claim 11, wherein the payload is a mobile manipulator supporting robotic motion with six or more axes, and the mobile manipulator is configurable to simulate execution of one or more production tasks.
- The device according to any one of claims 1 to 12, operable to estimate a work cycle time of the virtual mobile robot and/or utilization information of the virtual mobile robot based on the simulation of the movement of the virtual mobile robot.
- The device according to any one of claims 1 to 13, operable to receive user input for setting one or more operating parameters of the virtual mobile robot,
wherein the one or more operating parameters include one or more of:
a movement speed and a maximum movement speed of the virtual mobile robot,
a rotation speed and a maximum rotation speed of the virtual mobile robot,
a radius of gyration of the virtual mobile robot,
an acceleration/deceleration of the virtual mobile robot,
a clearance between the virtual mobile robot and an object in the environment,
a resistance level when the virtual mobile robot enters or crosses a line or area, and
a priority level when the virtual mobile robot enters or crosses a line or area.
- The device according to any one of claims 1 to 14, wherein a plurality of virtual mobile robots are generated to move within the displayed environment, and each virtual mobile robot regards the other virtual mobile robots as objects to be avoided.
- The device according to any one of claims 1 to 15, operable to display graphics for one or more features that are invisible to the human eye,
wherein the one or more features include one or more of:
a field of view of a laser projected by the virtual mobile robot and/or another detected object capable of emitting a laser at the work site, and
radiation emitted from a radiation source,
and the virtual mobile robot is configured to avoid the one or more features while moving.
- The device according to any one of claims 1 to 16, operable to collect information related to the mapping, navigation, and/or operation of the virtual mobile robot either by setting the virtual mobile robot to navigate autonomously within the environment, or through user inputs that move the virtual mobile robot within the environment.
- The device according to any one of claims 1 to 17, wherein the device is a handheld mobile device.
- A method of simulating a mobile robot at a work site, comprising:
displaying, on a display, an image of an environment of the work site acquired by one or more sensors,
performing mapping of the environment based on inputs of the one or more sensors,
detecting one or more objects in the environment,
generating a graphic of a virtual mobile robot for display within the environment displayed on the display, and
receiving user input for controlling movement of the virtual mobile robot within the displayed environment,
wherein the virtual mobile robot is configured to move within the displayed environment so as to avoid the one or more objects detected in the environment.
- A system for simulating a mobile robot at a work site, comprising the device according to any one of claims 1 to 18 and a cloud system including the server according to any one of claims 5 to 8.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21900411.6A EP4258077A1 (en) | 2020-12-01 | 2021-11-17 | Device and method for simulating mobile robot at work site |
JP2022566828A JP7452706B2 (ja) | 2020-12-01 | 2021-11-17 | 作業現場でモバイルロボットをシミュレーションする装置及び方法 |
US18/039,368 US20240025040A1 (en) | 2020-12-01 | 2021-11-17 | Apparatus and method for simulating a mobile robot at a work site |
CN202180080401.0A CN116547624A (zh) | 2020-12-01 | 2021-11-17 | 在作业现场模拟移动机器人的装置和方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
SG10202011988T | 2020-12-01 | ||
SG10202011988T | 2020-12-01 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022118656A1 true WO2022118656A1 (ja) | 2022-06-09 |
Family
ID=81853699
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/042214 WO2022118656A1 (ja) | 2020-12-01 | 2021-11-17 | 作業現場でモバイルロボットをシミュレーションする装置及び方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20240025040A1 (ja) |
EP (1) | EP4258077A1 (ja) |
JP (1) | JP7452706B2 (ja) |
CN (1) | CN116547624A (ja) |
WO (1) | WO2022118656A1 (ja) |
-
2021
- 2021-11-17 WO PCT/JP2021/042214 patent/WO2022118656A1/ja active Application Filing
- 2021-11-17 EP EP21900411.6A patent/EP4258077A1/en active Pending
- 2021-11-17 CN CN202180080401.0A patent/CN116547624A/zh active Pending
- 2021-11-17 JP JP2022566828A patent/JP7452706B2/ja active Active
- 2021-11-17 US US18/039,368 patent/US20240025040A1/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07328968A (ja) * | 1994-06-10 | 1995-12-19 | Gijutsu Kenkyu Kumiai Shinjiyouhou Shiyori Kaihatsu Kiko | ロボット装置 |
JPH11104984A (ja) * | 1997-10-06 | 1999-04-20 | Fujitsu Ltd | 実環境情報表示装置及び実環境情報表示処理を実行するプログラムを記録したコンピュータ読み取り可能な記録媒体 |
JP2007249632A (ja) * | 2006-03-16 | 2007-09-27 | Fujitsu Ltd | 障害物のある環境下で自律移動する移動ロボットおよび移動ロボットの制御方法。 |
JP2011100306A (ja) * | 2009-11-06 | 2011-05-19 | Hitachi Ltd | シミュレーションシステム |
WO2020073680A1 (en) * | 2018-10-10 | 2020-04-16 | Midea Group Co., Ltd. | Method and system for providing remote robotic control |
Also Published As
Publication number | Publication date |
---|---|
EP4258077A1 (en) | 2023-10-11 |
JP7452706B2 (ja) | 2024-03-19 |
US20240025040A1 (en) | 2024-01-25 |
CN116547624A (zh) | 2023-08-04 |
JPWO2022118656A1 (ja) | 2022-06-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Wang et al. | Interactive and immersive process-level digital twin for collaborative human–robot construction work | |
JP6986188B2 (ja) | 環境地図生成および位置整合のための方法ならびにシステム | |
US20200030979A1 (en) | Mixed Reality Assisted Spatial Programming of Robotic Systems | |
CN109074536B (zh) | 计划和/或控制和/或模拟施工机械的运行的方法及设备 | |
US10754422B1 (en) | Systems and methods for providing interaction with elements in a virtual architectural visualization | |
US9753453B2 (en) | Natural machine interface system | |
JP6895539B2 (ja) | 構築可能性分析に基づくプロジェクトの計画および適応 | |
WO2013108749A1 (ja) | 搬入経路計画システム | |
JPWO2017188292A1 (ja) | 移動体の管理システム、方法、およびコンピュータプログラム | |
Aivaliotis et al. | An augmented reality software suite enabling seamless human robot interaction | |
US20170091999A1 (en) | Method and system for determining a configuration of a virtual robot in a virtual environment | |
Eiris et al. | InDrone: a 2D-based drone flight behavior visualization platform for indoor building inspection | |
JP5414465B2 (ja) | シミュレーションシステム | |
US11880209B2 (en) | Electronic apparatus and controlling method thereof | |
Joseph | Learning Robotics using Python: Design, simulate, program, and prototype an autonomous mobile robot using ROS, OpenCV, PCL, and Python | |
WO2022118656A1 (ja) | 作業現場でモバイルロボットをシミュレーションする装置及び方法 | |
KR20210059460A (ko) | 물류 자동화를 위한 자율 주행 로봇의 원격 관제 시스템 | |
WO2022259600A1 (ja) | 情報処理装置、情報処理システム、および情報処理方法、並びにプログラム | |
Wang | Enabling Human-Robot Partnerships in Digitally-Driven Construction Work through Integration of Building Information Models, Interactive Virtual Reality, and Process-Level Digital Twins | |
Kiran et al. | Design and development of autonomous mobile robot for mapping and navigation system | |
CN111367194A (zh) | 一种视觉算法验证的方法及装置 | |
Weber | Motion planning for triple-axis spectrometers | |
US20220043455A1 (en) | Preparing robotic operating environments for execution of robotic control plans | |
US20210187746A1 (en) | Task planning accounting for occlusion of sensor observations | |
Smith | An investigation on a mobile robot in a ros enabled cloud robotics environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21900411 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2022566828 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202180080401.0 Country of ref document: CN |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2021900411 Country of ref document: EP Effective date: 20230703 |