CN113510718A - Intelligent meal selling robot based on machine vision and use method thereof - Google Patents

Intelligent meal selling robot based on machine vision and use method thereof

Info

Publication number
CN113510718A
Authority
CN
China
Prior art keywords: meal, mechanical arm, module, grabbing, food
Prior art date: 2021-05-11
Legal status: Pending
Application number
CN202110514678.1A
Other languages
Chinese (zh)
Inventor
朱以帅
甘良志
Current Assignee
Jiangsu Normal University
Original Assignee
Jiangsu Normal University
Priority date: 2021-05-11
Filing date: 2021-05-11
Publication date: 2021-10-19
Application filed by Jiangsu Normal University
Priority to CN202110514678.1A
Publication of CN113510718A

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J5/00 Manipulators mounted on wheels or on carriages
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by motion, path, trajectory planning
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors; perception control; multi-sensor controlled systems; sensor fusion
    • B25J9/1697 Vision controlled systems
    • B25J11/008 Manipulators for service tasks
    • B25J13/00 Controls for manipulators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/11 Region-based segmentation
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Fuzzy Systems (AREA)
  • Image Analysis (AREA)

Abstract

A use method of an intelligent meal selling robot based on machine vision comprises the following steps: the dining car moves slowly back and forth in a corridor of a student dormitory and stops automatically when it detects a person; students select the desired meals on the touch screen; the mechanical arm acquires information and performs image recognition on the food to obtain a color image and depth; the region of the target object is segmented and the target object identified; the pose of the identified target object is acquired; motion planning is performed according to the pose; the target food is grabbed according to the motion plan; the grabbed food is placed on a dinner plate, weighed, and the price calculated and displayed on an external display screen together with a two-dimensional payment code; after payment is completed, the valve on the dining car opens and the meal is dispensed. In the method, the mechanical arm has high recognition accuracy and position-estimation precision, operates intelligently according to image recognition and remote control, can cope with complex working environments, and is very convenient in use.

Description

Intelligent meal selling robot based on machine vision and use method thereof
Technical Field
The invention relates to the technical field of service robots, and in particular to machine-vision-based object grabbing and a method of using the same.
Background
The service robot is currently an important branch of the robotics field; with the development of society and the quickening pace of life and work, a huge service-robot market is being incubated. A clear distinction from conventional industrial robots is that service robots operate in unordered, unstructured environments. An industrial robot only needs its working pattern planned in advance to repeat specified actions, but the working environment of a service robot changes frequently, requiring stronger cognitive ability and execution capability, which places higher demands on the robot's intelligence and adaptability. How to provide a service robot that can solve the problem of unmanned meal selling in college dining rooms and dormitories is a technical problem to be solved urgently.
Disclosure of Invention
The invention aims to provide an intelligent meal selling robot based on machine vision that can solve the problem of unmanned meal selling in college dining rooms and dormitories.
In order to achieve the above object, the present invention provides the following solutions:
An intelligent meal selling robot based on machine vision specifically includes: an intelligent ordering module, a path planning module, a machine vision module, a coordinate conversion module, an infrared positioning module and a mechanical arm system. The machine vision module is used for acquiring the RGB information of objects within the grabbing range of the mechanical arm system and the depth information between the vision system and each object, and for sending the RGB and depth information of all objects to the target detection module. The target detection module is connected with the coordinate conversion module; it classifies the RGB information of each meal to determine its name, determines the three-dimensional coordinates of each meal in the camera coordinate system from its depth information, and sends the name of each meal together with its three-dimensional coordinates in the camera coordinate system to the coordinate conversion module. The coordinate conversion module converts the three-dimensional coordinates of each object in the camera coordinate system into three-dimensional coordinates in the mechanical arm coordinate system. The intelligent ordering module provides convenient ordering operation for the user and transmits the acquired food information to the computer. The path planning module is connected with the mechanical arm system; it performs path planning on the three-dimensional coordinates, in the mechanical arm coordinate system, of the object the user wants grabbed, obtains an object-grabbing path, and sends it to the mechanical arm grabbing module. The infrared positioning module detects the position of the food. The mechanical arm system grabs the food required by the user according to the object-grabbing path.
The technical scheme adopted by the invention is as follows: for determining object positions, the Zhang Zhengyou calibration method is adopted. To quickly determine the position of a meal, a normalized camera model is introduced first. In this model the focal length f of the camera is 1, and the imaging center is the origin of the image coordinate system (x, y).
According to the similar-triangle principle, for a point (u, v, w) in camera coordinates it is easy to obtain:
x=u/w
y=v/w
The normalized camera does not exist in practice; it differs from a real camera in two respects:
(1) the focal length is not 1, and since the final position in the image is measured in pixels, the model must account for the spacing of the photoreceptors, which may differ in the x and y directions; two scaling factors are therefore introduced:
φx, φy
referred to as the focal parameters. The original mapping relationship becomes:
x = φx·u/w
y = φy·v/w
(2) the origins of the pixel coordinate system and the image coordinate system do not coincide, which requires a shift in position; two offset parameters δx and δy are therefore added. A skew parameter γ is also introduced to make the projected position x depend on the real-world height v. The mapping relationship becomes:
x = (φx·u + γ·v)/w + δx
y = φy·v/w + δy
In addition, camera extrinsic factors must be considered, since the camera is not always located at the origin of the world coordinate system, especially when two or more cameras are used. For this purpose, a real-world point w must be transformed into the camera coordinate system by a coordinate transformation before being passed through the projection model, i.e.:
w′ = Ω·w + τ
where Ω is the camera's rotation matrix and τ its translation vector.
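For illustration, the mapping above can be written as a short Python sketch; the parameter values below are invented for the example and are not the calibrated values of this system:

    import numpy as np

    def project(w_world, Omega, tau, phi_x, phi_y, gamma, delta_x, delta_y):
        # World -> camera coordinates: w' = Omega w + tau
        u, v, w = Omega @ w_world + tau
        # Camera coordinates -> pixel coordinates, per the mapping above
        x = (phi_x * u + gamma * v) / w + delta_x
        y = (phi_y * v) / w + delta_y
        return x, y

    # Illustrative (made-up) parameters:
    Omega = np.eye(3)                      # camera aligned with the world frame
    tau = np.array([0.0, 0.0, 0.0])
    print(project(np.array([0.1, 0.2, 1.5]), Omega, tau,
                  phi_x=800.0, phi_y=800.0, gamma=0.0,
                  delta_x=320.0, delta_y=240.0))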
From the projection model of the monocular camera it is easy to see that points in the 3D world coordinate system are divided by the denominator w during projection to the image, which makes the projection nonlinear and inconvenient to study. The representations of the 2D image points and 3D world points are therefore changed to homogeneous coordinates so that the projection equation becomes linear.
The specific steps for solving the camera intrinsics with the Zhang Zhengyou calibration method are as follows: the calibration chessboard is a plane Π in the three-dimensional world, and the image it forms on the imaging plane is another plane π. Because the coordinates of the chessboard corner points are known, the corresponding image corner points can be found with a corner-detection algorithm; the homography matrix H can then be solved from the corresponding points of the two planes, and the camera intrinsics solved from it.
Set the plane of the chessboard as the plane z = 0 in the world coordinate system. Any corner point on the board can then be represented as (X, Y, 0) in world coordinates. Substituting into the mapping equation yields:
λ·(x, y, 1)^T = K·[r1 r2 t]·(X, Y, 1)^T
where λ is a scale factor, K is the intrinsic matrix, r1 and r2 are the first two columns of the rotation matrix, and t is the translation vector. The homography matrix can therefore be expressed as:
H = K·[r1 r2 t]
If the intrinsic solution obtained by least squares is to be refined further, maximum-likelihood estimation can be used to optimize the result. Finally, the accurate position of the meal is obtained from the camera's intrinsic and extrinsic parameters.
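For illustration only, a minimal Python/OpenCV sketch of estimating the homography from known board-to-image corner correspondences; the point values are invented for the example, and cv2.findHomography is the standard OpenCV routine, not code from the original filing:

    import cv2
    import numpy as np

    # Known corner coordinates on the z = 0 board plane (X, Y), and the
    # corresponding corners detected in the image (u, v); values illustrative.
    board_pts = np.array([[0, 0], [1, 0], [2, 0],
                          [0, 1], [1, 1], [2, 1]], np.float32)
    img_pts = np.array([[102, 98], [201, 99], [300, 101],
                        [101, 199], [202, 200], [301, 202]], np.float32)

    # Least-squares homography between the two planes; with several views of
    # the board, the intrinsics K can be recovered from the resulting H matrices.
    H, _ = cv2.findHomography(board_pts, img_pts, 0)
    print(H)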
The specific operation method is as follows: the current mainstream computer vision library is OpenCV (Open Source Computer Vision Library). Its rich vision function library and strong platform portability are used to solve the target position data accurately. The main workflow is as follows:
(1) find the chessboard;
(2) draw the chessboard corners;
(3) monocular camera calibration;
(4) compute the rectification mapping;
(5) remap (undistort the image).
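The code for each step appears only as images in the original publication. The following is a minimal Python/OpenCV sketch of the same five-step flow; the board size, image paths and the specific OpenCV calls are assumptions, not the original code:

    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)  # inner corners per row and column (assumed board geometry)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for path in glob.glob("calib/*.jpg"):                    # assumed image folder
        img = cv2.imread(path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern)    # step (1)
        if found:
            cv2.drawChessboardCorners(img, pattern, corners, found)  # step (2)
            obj_points.append(objp)
            img_points.append(corners)

    # step (3): monocular calibration -> intrinsic matrix K and distortion coefficients
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)

    # step (4): rectification (undistortion) maps
    map1, map2 = cv2.initUndistortRectifyMap(
        K, dist, None, K, gray.shape[::-1], cv2.CV_32FC1)

    # step (5): remap the image to remove lens distortion
    undistorted = cv2.remap(img, map1, map2, cv2.INTER_LINEAR)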
After the calibration, the meals are recognized and classified with a trained convolutional-neural-network target detection algorithm, YOLO, and the classification information is transmitted to the upper computer.
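For illustration, a minimal sketch of running a Darknet-format YOLO model with OpenCV's DNN module; the configuration and weight file names and the meal classes are placeholders, not the model trained in this work:

    import cv2
    import numpy as np

    net = cv2.dnn.readNetFromDarknet("yolo-meals.cfg", "yolo-meals.weights")  # placeholder files
    classes = ["rice", "noodles", "stir-fried vegetables"]  # placeholder meal classes

    img = cv2.imread("tray.jpg")
    blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    for out in outputs:          # each row: cx, cy, w, h, objectness, class scores
        for det in out:
            scores = det[5:]
            cls = int(np.argmax(scores))
            if scores[cls] > 0.5:        # confidence threshold (illustrative)
                print(classes[cls], float(scores[cls]))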
The RRT algorithm is a search method based on probabilistic sampling. The tree is constructed in a special incremental way that rapidly shortens the expected distance between a random state point and the tree. Its strength is that it can search high-dimensional spaces quickly and effectively: random sampling of the state space guides the search toward unexplored regions, so that a planned path from the start point to the goal point is found. By performing collision detection on the sampled points in the state space, it avoids modeling the space explicitly, and it can effectively solve path-planning problems with high-dimensional spaces and complex constraints.
The invention discloses functions of mechanical arm visual identification grabbing, intelligent vending of unmanned dining cars and the like, which comprise the following steps: the dining car slowly moves back and forth in a corridor of a student dormitory, when the dining car detects that someone can automatically stop, students can select required meals on the touch screen; the mechanical arm acquires information and performs image recognition on the food to acquire a color image and depth; dividing the region of the target object, and identifying the target object; acquiring the pose of the identified target object; performing motion planning according to the pose; finishing the grabbing of the target food according to the motion plan; placing the grabbed food on a dinner plate, weighing and calculating the price; the external display screen displays the price and the two-dimension code to wait for payment; and opening the valve on the dining car after payment is finished to send out the meal. The problem of the student can't buy meal because of the busy of classroom is gone out, the arm has higher discernment rate of accuracy and position capital estimation precision simultaneously, realizes the operation of getting meal intelligently according to image recognition and remote control, can deal with complicated operational environment simultaneously, and is very convenient in the use, has promoted the intelligence and the commonality that unmanned selling goods to a certain extent, but the wide application with in each college dormitory and the dining room.
Drawings
FIG. 1 is a block diagram of the intelligent meal selling robot based on machine vision according to the present invention;
FIG. 2 is a basic workflow diagram provided by the present invention;
FIG. 3 is a flow chart of the intelligent ordering process provided by the present invention;
FIG. 4 is a flow chart of the mechanical arm grabbing process provided by the present invention;
FIG. 5 is a basic structure diagram of the mechanical arm provided by the present invention;
FIG. 6 is a flow chart of the RRT-algorithm mechanical arm path planning provided by the present invention;
FIG. 7 is a simulation path of the RRT algorithm provided by the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide an intelligent meal selling robot based on machine vision that can solve the problem of unmanned meal selling in college dining rooms and dormitories. In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below. As shown in fig. 1, the system specifically comprises an intelligent ordering module, a path planning module, a machine vision module, a coordinate conversion module, an infrared positioning module and a mechanical arm system.
The intelligent ordering module mainly comprises an external display screen and a computer. When a user needs to order, the user selects from the food information presented on the external display screen, and the computer then transmits the obtained information to the ROS system.
The target detection module, the path planning module and the coordinate conversion module are all integrated in the ROS system. The machine vision module mainly comprises a depth camera and a support; the depth camera consists of an RGB camera located in the center and infrared cameras evenly distributed around it. The central lens of the depth camera is an ordinary RGB camera used to collect color images of the surrounding environment and obtain the RGB information of objects in it, which is then used to classify those objects. The target detection module recognizes and classifies the food with a trained convolutional-neural-network target detection algorithm, YOLO.
The target positioning module determines the three-dimensional coordinates of each object in the camera coordinate system from its depth information; specifically, it gives the three-dimensional coordinates of each object relative to the camera coordinate system. The intrinsic and extrinsic camera parameters are calibrated with the Zhang Zhengyou calibration method, and the three-dimensional coordinates in the camera coordinate system are converted by the coordinate conversion module into three-dimensional coordinates in the mechanical arm coordinate system for grabbing.
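For illustration, a minimal sketch of this back-projection and frame change, assuming pinhole intrinsics (fx, fy, cx, cy) and a calibrated 4x4 camera-to-arm transform obtained from hand-eye calibration; all values below are invented for the example:

    import numpy as np

    def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
        # Back-project a pixel plus its depth reading into camera coordinates.
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return np.array([x, y, depth])

    def camera_to_arm(p_cam, T_cam2arm):
        # Apply the calibrated 4x4 homogeneous camera-to-arm transform.
        p = np.append(p_cam, 1.0)
        return (T_cam2arm @ p)[:3]

    # Illustrative transform (not the real calibration result):
    T = np.eye(4)
    T[:3, 3] = [0.10, 0.00, 0.25]
    p_arm = camera_to_arm(pixel_to_camera(320, 240, 0.8, 800, 800, 320, 240), T)
    print(p_arm)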
The mechanical arm system comprises a mechanical arm base, a mechanical arm and an end gripper, all located inside the dining car, and can complete the grabbing of different foods. The mechanical arm is mounted on the mechanical arm base inside the dining car, and the end gripper is mounted at the end of the mechanical arm. A programmable path-planning controller is arranged in the mechanical arm base, and the path-planning algorithm can be continuously optimized through ROS control, so that the grabbing path produced by the planner is safer and more convenient. The end gripper has 3 degrees of freedom and can grab the specified food. The mechanical arm itself has six degrees of freedom and, as shown in fig. 5, can theoretically reach any position inside the dining car, enabling the end gripper to grab different foods.
The path planning submodule mainly adopts an improved RRT algorithm; it can plan a path over the three-dimensional coordinates in the mechanical arm coordinate system according to the different requirements of the user and obtain the most convenient and safe object-grabbing path. The submodule plans safe and convenient grabbing path information with the RRT path-planning algorithm and then transmits it to the mechanical arm system to carry out the grabbing of the specific target object. The simulation path of the RRT algorithm is shown in FIG. 7.
The basic algorithm pseudo-code is as follows:
Figure BDA0003060412780000081
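As an illustration, the following is a minimal two-dimensional Python sketch of the basic RRT loop described above; the step size, sampling bounds and collision test are assumptions made for the example:

    import math
    import random

    def rrt(start, goal, is_free, step=0.5, goal_tol=0.5,
            max_iter=5000, bounds=(0.0, 10.0)):
        # Basic RRT: grow a tree from the start point by repeatedly extending
        # the nearest node one step toward a random sample of the state space.
        nodes = [start]
        parent = {start: None}
        for _ in range(max_iter):
            sample = (random.uniform(*bounds), random.uniform(*bounds))
            near = min(nodes, key=lambda n: math.dist(n, sample))
            d = math.dist(near, sample)
            if d == 0.0:
                continue
            if d <= step:
                new = sample
            else:
                new = (near[0] + step * (sample[0] - near[0]) / d,
                       near[1] + step * (sample[1] - near[1]) / d)
            if not is_free(new):          # collision detection on the sampled state
                continue
            nodes.append(new)
            parent[new] = near
            if math.dist(new, goal) < goal_tol:   # goal region reached: backtrack
                path, n = [], new
                while n is not None:
                    path.append(n)
                    n = parent[n]
                return path[::-1]
        return None                       # no path within the iteration budget

    path = rrt((0.0, 0.0), (9.0, 9.0), is_free=lambda p: True)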
The ROS system (Robot Operating System) in the invention integrates the target detection algorithm and the mechanical arm grabbing algorithm, and provides a publish/subscribe communication framework for simply and quickly constructing distributed computing systems. ROS is a distributed process framework in which processes are packaged in program packages and function packages that are easy to share and release. ROS also supports a federated system similar to a code repository, in which project collaboration and release can be realized; this design allows development decisions on a project to be made completely independently, from the file system up to the user interface.
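For illustration, a minimal rospy sketch of the publish/subscribe pattern used to pass an order from the ordering side to the arm side; the topic name and message content are invented for the example:

    #!/usr/bin/env python
    import rospy
    from std_msgs.msg import String

    def on_order(msg):
        # In this system the arm node would parse the order and start grasping.
        rospy.loginfo("order received: %s", msg.data)

    rospy.init_node("order_bridge")
    rospy.Subscriber("meal_order", String, on_order)            # arm side: listen
    pub = rospy.Publisher("meal_order", String, queue_size=10)  # ordering side
    rospy.sleep(1.0)                          # let the connection establish
    pub.publish(String(data="braised pork rice"))  # illustrative order payload
    rospy.spin()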

Claims (6)

1. A use method of an intelligent meal selling robot based on machine vision is characterized by comprising the following steps:
the meal selling robot, driven by a motor, moves back and forth continuously in a corridor, planning its path with visual obstacle recognition; when a person is detected, it stops for 5 seconds to wait for a possible order;
a guest orders a meal, and the server displays the remaining meals for selection on the external display screen;
after ordering is finished, the client sends an instruction to the mechanical arm;
the mechanical arm accurately selects and grabs the different meal items through image recognition;
after the meal is taken, it is weighed and the price calculated, and the meal is dispensed after payment.
2. The method of claim 1, wherein: the user orders through the external display, and the resulting data are transferred to the mechanical arm, which specifically comprises:
the user orders, and the client sends the command to be executed to the mechanical arm through the ROS platform, thereby controlling the mechanical arm.
3. The method of claim 1, wherein: the RGB information of the object and the depth information between the vision system and the object are acquired with a depth camera, which specifically comprises:
the target detection module recognizes and classifies the food with a trained convolutional-neural-network target detection algorithm, YOLO.
4. The method of claim 1, wherein: the target pose calculation specifically comprises:
for visual grabbing, first obtaining the position of the target in the image captured by the camera, and then converting the position in the image into a position known to the mechanical arm;
converting the position in the pixel coordinate system into the position in the world coordinate system according to the following formula:
Zc·(u, v, 1)^T = K·[R t]·(Xw, Yw, Zw, 1)^T
where (u, v) is the position in the pixel coordinate system, (Xw, Yw, Zw) is the corresponding position in the world coordinate system, K is the camera intrinsic matrix, and [R t] is the camera extrinsic matrix.
5. The method of claim 1, wherein: the mechanical arm grabbing path is planned using the RRT algorithm.
6. An intelligent meal selling robot based on machine vision, comprising:
an intelligent ordering module, a path planning module, a machine vision module, a coordinate conversion module, an infrared positioning module and a mechanical arm system; the machine vision module is used for acquiring the RGB information of objects within the grabbing range of the mechanical arm system and the depth information between the vision system and each object, and for sending the RGB and depth information of all objects to the target detection module;
the target detection module is connected with the coordinate conversion module and is used for classifying the RGB information of each meal so as to determine the name of each meal and determine the three-dimensional coordinate of each meal under the camera coordinate according to the depth information of each meal; simultaneously sending the name of each meal and the three-dimensional coordinates under the camera coordinate system to a coordinate conversion module;
the coordinate conversion module is used for converting the three-dimensional coordinates under the camera coordinates of each object into the three-dimensional coordinates under the manipulator coordinate system;
the intelligent ordering module is used for providing convenient ordering operation for the user and transmitting the acquired food information to the computer;
the path planning module is connected with the mechanical arm system and used for carrying out path planning according to three-dimensional coordinates under a mechanical arm coordinate system for grabbing an object by a user, acquiring an object grabbing path and sending the object grabbing path to the mechanical arm grabbing module;
the infrared positioning module detects the position of the food;
and the mechanical arm system is used for grabbing the food required by the user according to the object grabbing path.
CN202110514678.1A 2021-05-11 2021-05-11 Intelligent meal selling robot based on machine vision and use method thereof Pending CN113510718A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110514678.1A CN113510718A (en) 2021-05-11 2021-05-11 Intelligent meal selling robot based on machine vision and use method thereof


Publications (1)

Publication Number Publication Date
CN113510718A 2021-10-19

Family

ID=78064369

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110514678.1A Pending CN113510718A (en) 2021-05-11 2021-05-11 Intelligent meal selling robot based on machine vision and use method thereof

Country Status (1)

Country Link
CN (1) CN113510718A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113977609A (en) * 2021-11-29 2022-01-28 杭州电子科技大学 Automatic dish-serving system based on double-arm mobile robot and control method thereof
CN113977609B (en) * 2021-11-29 2022-12-23 杭州电子科技大学 Automatic dish serving system based on double-arm mobile robot and control method thereof


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination