CN113110513A - ROS-based household arrangement mobile robot - Google Patents

ROS-based household arrangement mobile robot

Info

Publication number
CN113110513A
CN113110513A (application CN202110548387.4A)
Authority
CN
China
Prior art keywords
robot
algorithm
ros
data
mechanical arm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110548387.4A
Other languages
Chinese (zh)
Inventor
徐军
杜丰收
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
2021-05-19
Publication date
2021-07-13
Application filed by Harbin University of Science and Technology filed Critical Harbin University of Science and Technology
Priority to CN202110548387.4A
Publication of CN113110513A
Legal status: Pending

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a ROS-based household arrangement mobile robot, and relates to the technical field of robots; the method comprises the following steps: step one: localization and mapping algorithm; step two: path planning method; step three: ROS robot modeling and simulation: simulation in Rviz and Gazebo to realize map building, autonomous navigation and grasping of the target object by the mechanical arm; step four: robot construction; step five: visual recognition; step six: mechanical arm setup: ROS MoveIt! is used as the software package for mechanical arm motion planning. The invention realizes monocular vision shooting of an object to obtain matched image pairs, performs image preprocessing and stereo matching to achieve accurate localization of the target object, and combines deep learning to recognize the target garbage; map construction, autonomous navigation and garbage-grasping function tests are carried out in an indoor environment with the robot.

Description

ROS-based household arrangement mobile robot
Technical Field
The invention belongs to the technical field of robots, and particularly relates to a ROS-based household arrangement mobile robot.
Background
In recent years, with the continuous development of modern science and technology, service robots such as intelligent food delivery robots, customer service robots, medical robots, companion robots and sweeping robots have gradually started to enter the market. The service robot is a young member of the large family of robots, and so far there is no strict definition of it. According to purpose, robots can be divided into cleaning robots, educational robots, medical robots, household robots, service robots and entertainment robots, and their range of application is very wide. At present, educational robots hold a market share of about 16%, customer service robots about 7%, and medical and cleaning service robots about 4%.
As a service robot, the sweeping robot has freed people from heavy household cleaning work and released their hands, allowing them to devote their energy to more important things and improving their quality of life and work efficiency.
However, because the sweeping robot has a low chassis and a small suction opening, large garbage such as beverage bottles, waste paper rolls, banana peels and plastic bags cannot be cleaned up by it and can only be removed manually. Zhongguancun Online (ZOL) has tested dozens of sweeping robots, and the largest garbage the robots could handle was items about the size of dog food or small building-block pieces. Current sweeping robots can only clean up small garbage such as dust and hair, and cannot handle larger garbage such as wads of waste paper. In addition, objects encountered during cleaning, such as small toys, are processed directly: once a sweeping robot encounters them it will try to sweep them up or drag them along, which can cause damage to both the robot and personal belongings.
To address the shortcoming that the type and size of garbage existing sweeping robots can clean up are limited, the aim is to use a mobile robot to recognize larger garbage and items such as toy building blocks and toy cars, grasp these objects, and place them into a garbage bin or toy box by category, so as to improve the quality and efficiency of indoor cleaning.
Disclosure of Invention
To solve the problem that the type and size of garbage that the existing sweeping robot can clean up are limited, the invention aims to provide a ROS-based household arrangement mobile robot.
The invention discloses a ROS-based household arrangement mobile robot, which comprises the following steps:
Step one: localization and mapping algorithm:
SLAM comprises feature extraction, data association, state estimation, state update and feature update; 2D laser SLAM is adopted, with IMU data, odometry data and 2D lidar data as inputs;
Cartographer algorithm: a real-time grid map with a resolution of 5 cm is generated from data measured by sensors such as a laser range finder; Cartographer adopts a SLAM framework based on graph optimization and is divided into a front end and a back end: the front end is mainly responsible for scan-to-submap matching and loop closure detection, the processed lidar data is matched against the submap, and local loop closure detection is carried out when a submap has been generated and no new data frames are being inserted; after a submap is created, if a match that best fits the current pose estimate is found, it is added as a loop closure constraint; the back end is mainly responsible for optimizing the pose estimates and uses branch-and-bound together with precomputed grids to realize global loop closure detection;
Step two: path planning method:
the A* algorithm is used to search for a feasible collision-free path; A* is a heuristic search algorithm, i.e. the search in the state space evaluates each explored position, picks the best one, and continues the search from that position towards the goal; in the A* algorithm, the distance between connected points is used as the edge weight, and the distance to the goal point is used as the heuristic function.
Step three: ROS robot modeling and simulation:
simulation in Rviz and Gazebo to realize map building, autonomous navigation and grasping of the target object by the mechanical arm;
Step four: robot construction:
the control system is the core that collects and processes the attitude data of the wheeled robot and sends and receives control commands, and a Jetson Nano is adopted as the development board;
Step five: visual recognition:
(5.1) build a vision platform that meets the requirements by comparing vision system schemes, and study the principle of monocular vision;
(5.2) determine the image preprocessing;
(5.3) study deep-learning-based vision techniques;
Step six: mechanical arm setup: ROS MoveIt! is used as the software package for mechanical arm motion planning.
Compared with the prior art, the invention has the following beneficial effects:
first, monocular vision shooting of an object is realized to obtain matched image pairs, image preprocessing and stereo matching are performed to achieve accurate localization of the target object, and deep learning is combined to recognize the target garbage;
second, map construction, autonomous navigation and garbage-grasping function tests are carried out in an indoor environment with the robot.
Drawings
For ease of illustration, the invention is described in detail by the following detailed description and the accompanying drawings.
FIG. 1 is a schematic structural diagram of the present invention.
FIG. 2 is a block diagram of a mobile robot motion recognition system according to the present invention;
FIG. 3 is a flow chart of image preprocessing according to the present invention;
FIG. 4 is a diagram of the overall framework of ROS MoveIt! according to the present invention.
Detailed Description
In order that the objects, technical solutions and advantages of the invention become more apparent, the invention is described below by way of example with reference to the accompanying drawings. It is to be understood that such description is merely illustrative and is not intended to limit the scope of the present invention. The structures, proportions and sizes shown in the drawings are only used to match the content disclosed in the specification so that those skilled in the art can understand and read it; they are not intended to limit the conditions under which the invention can be implemented and therefore carry no essential technical meaning. Any structural modification, change of proportional relationship or adjustment of size that does not affect the effects and objectives achievable by the invention still falls within the scope covered by the technical content disclosed herein. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present invention.
It should be noted that, in order to avoid obscuring the present invention with unnecessary details, only the structures and/or processing steps closely related to the scheme according to the present invention are shown in the drawings, and other details not so relevant to the present invention are omitted.
The specific embodiment adopts the following technical scheme, which comprises the following steps:
Step one: localization and mapping algorithm:
SLAM is the abbreviation of Simultaneous Localization and Mapping, originally proposed by Hugh Durrant-Whyte and John J. Leonard. SLAM is mainly used to solve the problems of localization, navigation and map construction when a mobile robot operates in an unknown environment.
SLAM typically includes several components: feature extraction, data association, state estimation, state update and feature update.
Localization: estimate the pose of the robot given the map.
SLAM: estimate the pose of the robot and the environment map simultaneously.
Mapping: estimate the environment map given the pose of the robot.
In this design, 2D laser SLAM is used, with IMU data, odometry data and 2D lidar data as inputs. The candidate 2D laser SLAM algorithms are compared below.
A、Hector SLAM:
Hector SLAM requires a laser scanner with a high update rate and low measurement noise; no odometry is required, so it can even be used on ground vehicles in uneven areas. The laser point set is aligned against the map already obtained: the representation of the laser points on the map and the occupancy probabilities of the grid are estimated, and the scan matching is solved with the Gauss-Newton method, which finds the rigid-body transform from the laser point set to the existing map. To avoid converging to a local minimum rather than the global optimum (similar to a multi-modal distribution, where a local gradient minimum is not the global optimum), the map is kept in a multi-resolution form. For the state estimation in navigation, inertial measurements can be added and fused with EKF filtering.
B、Gmapping:
Gmapping is currently the most widely used 2D laser SLAM method; it is based on the RBPF (Rao-Blackwellized particle filter) approach, so the particle filter method (describing the estimate statistically with a set of weighted samples) must be understood first.
Particle filter methods generally require a large number of particles to obtain good results, which inevitably increases the computational complexity. The particles gradually update their weights and converge according to the observations, and the resampling step inevitably brings the particle depletion problem: particles with large weights survive while particles with small weights disappear (a correct particle may carry a small weight at an intermediate stage and be discarded).
Adaptive resampling reduces the particle depletion problem. In addition, instead of relying only on the robot motion (odometry) when computing the proposal distribution, the current observation is also taken into account, which reduces the uncertainty of the robot pose in the particle filter step (the idea of FastSLAM 2.0; the number of particles can be reduced accordingly). A sketch of the resampling step is given below.
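As a concrete illustration of the adaptive resampling idea, the following is a minimal Python sketch of the low-variance (systematic) resampling step and the effective-sample-size criterion used by RBPF-based SLAM; the particle representation, the example poses and the n/2 threshold are illustrative assumptions, not Gmapping's actual implementation.

```python
import numpy as np

def effective_sample_size(weights):
    # N_eff = 1 / sum(w_i^2); resampling is triggered only when this drops
    # below a threshold (commonly n/2), which limits particle depletion.
    w = np.asarray(weights, dtype=float)
    return 1.0 / np.sum(w ** 2)

def low_variance_resample(particles, weights):
    # Systematic (low-variance) resampling: a single random offset and n
    # evenly spaced pointers walk the cumulative weight distribution.
    n = len(particles)
    positions = (np.arange(n) + np.random.uniform()) / n
    cumulative = np.cumsum(weights)
    resampled, i = [], 0
    for p in positions:
        while cumulative[i] < p:
            i += 1
        resampled.append(particles[i])
    return resampled, np.full(n, 1.0 / n)

# Usage inside a particle filter update (poses and weights are illustrative):
particles = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.05), (0.0, 0.1, -0.02)]
weights = [0.7, 0.2, 0.1]
if effective_sample_size(weights) < len(particles) / 2:
    particles, weights = low_variance_resample(particles, weights)
```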
C、KartoSLAM:
The core ideas of graph optimization are sparsification of the matrix and least squares. KartoSLAM is a graph-optimization-based method that uses a highly optimized, non-iterative Cholesky decomposition to decouple the sparse system as its solver. The graph-optimization approach represents the map with a graph: each node represents a pose on the robot trajectory together with its set of sensor measurements, the directed connections between nodes represent the motion between consecutive robot poses, and every time a new node is added the map is recomputed and updated according to the constraints between the nodes.
The ROS version of KartoSLAM uses Sparse Pose Adjustment (SPA) for scan matching and loop closure detection. The more landmarks there are, the greater the memory requirement; however, compared with other methods it builds better maps in large environments. In some cases KartoSLAM is more efficient, because its graph contains only the robot poses and the map is obtained after the poses have been found.
D、CoreSLAM:
CoreSLAM is a SLAM algorithm designed to be simple and easy to understand while minimizing the loss of performance. The algorithm is reduced to two processes, distance calculation and map update. In the first step, for each scan input, the distance is calculated based on a simple particle filter algorithm: the particle filter matcher matches the laser scan against the map, and each filter particle represents a possible robot pose with a corresponding probability weight that depends on the previous iteration. The best hypothesis distribution is chosen, i.e. low-weight particles disappear and new particles are generated. In the update step, the scanned lines are added to the map; when an obstacle appears, a set of adjusted points is drawn around the obstacle point instead of a single isolated point.
E、LagoSLAM:
The basis of graph-optimization SLAM is minimizing a nonlinear, non-convex cost function. At each iteration, the graph configuration is updated by solving a local convex approximation of the original problem, and this process is iterated until a local minimum of the cost function is reached. LagoSLAM instead performs the graph optimization with a linear approximation and requires no initial guess.
F. Cartographer algorithm:
The Cartographer algorithm can generate a real-time grid map with a resolution of 5 cm from data measured by sensors such as a laser range finder. Cartographer adopts a SLAM framework based on graph optimization and is divided into a front end and a back end. The front end is mainly responsible for scan-to-submap matching and loop closure detection: the processed lidar data is matched against the submap, and local loop closure detection is carried out when a submap has been generated and no new data frames are being inserted. After a submap is created, if a match that best fits the current pose estimate is found, it is added as a loop closure constraint. The back end is mainly responsible for optimizing the pose estimates and uses branch-and-bound together with precomputed grids to realize global loop closure detection.
In summary, the Cartographer algorithm is selected; a simplified illustration of scan-to-submap matching is sketched below.
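For intuition, the following Python sketch scores a 2D scan against an occupancy grid at candidate poses and picks the best one by brute force around the odometry prior. It only illustrates the idea of scan-to-submap matching; Cartographer itself refines the pose with a Ceres-based optimizer and prunes the global search with branch-and-bound over precomputed multi-resolution grids. The function names, the 5 cm resolution default and the search window are assumptions for this sketch.

```python
import numpy as np

def scan_to_grid_score(scan_xy, pose, grid, resolution=0.05, origin=(0.0, 0.0)):
    # scan_xy: (N, 2) laser endpoints in the robot frame (metres)
    # pose:    candidate (x, y, theta) in the submap frame
    # grid:    2D array of occupancy probabilities in [0, 1]
    x, y, th = pose
    rot = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    world = scan_xy @ rot.T + np.array([x, y])
    cells = np.floor((world - np.array(origin)) / resolution).astype(int)
    ok = ((cells[:, 0] >= 0) & (cells[:, 0] < grid.shape[1]) &
          (cells[:, 1] >= 0) & (cells[:, 1] < grid.shape[0]))
    cells = cells[ok]
    # Sum of occupancy probabilities under the projected endpoints: higher
    # means the scan lines up better with obstacles already in the submap.
    return grid[cells[:, 1], cells[:, 0]].sum()

def match_scan(scan_xy, prior_pose, grid, lin_win=0.1, lin_step=0.02,
               ang_win=0.1, ang_step=0.02):
    # Exhaustive search in a small window around the odometry prior.
    best_pose, best_score = prior_pose, -np.inf
    for dx in np.arange(-lin_win, lin_win + lin_step, lin_step):
        for dy in np.arange(-lin_win, lin_win + lin_step, lin_step):
            for dth in np.arange(-ang_win, ang_win + ang_step, ang_step):
                cand = (prior_pose[0] + dx, prior_pose[1] + dy, prior_pose[2] + dth)
                score = scan_to_grid_score(scan_xy, cand, grid)
                if score > best_score:
                    best_pose, best_score = cand, score
    return best_pose
```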
Step two: the path planning method comprises the following steps:
the probabilistic roadmapping method (PRM algorithm for short) is one of the earliest methods based on random sampling. The method randomly and uniformly places nodes on a robot moving space, then connects all or adjacent nodes to form a network graph of undirected paths, and then obtains a feasible path by using a search algorithm. The method has the advantages that environmental modeling is not needed, and the method is a good solution to the problem of path planning of high-dimensional space and multiple constraints.
The PRM algorithm has the basic idea that a moving area of a mobile robot is constructed into a network diagram of undirected paths by converting a continuous space into a discrete space (N sampling points are scattered randomly in a space servo). And finally, searching a feasible optimal path by using a graph search algorithm (such as an breadth first algorithm, an A-algorithm, a Dijkstra algorithm and the like). The graph search algorithm uses the a-x algorithm, starting from the starting unit, each neighboring unit is ranked by the lowest total cost, defined by the trip plus the heuristic cost, the trip cost being the distance from the starting unit to the next unit, the heuristic cost being the distance from any unit to the target unit. The least costly node is then expanded and explored until it reaches the target node. The probability of finding the feasible path by the algorithm is related to the number of the sampling points, along with the increase of the sampling points, the probability of finding the feasible path is increased, and the PRM algorithm is complete. But for most problems, many samples are not needed, and paths can be found by using fewer samples.
a. A learning stage: randomly sampling N points (self-defined number) in the map, then connecting each sampling point, removing a connecting line in contact with an obstacle, and constructing a undirected path network map.
b. And (3) an inquiry stage: after the undirected path network graph is constructed in the learning stage, in the query stage, a starting point and an end point of a path are required to be set, and then a feasible path is searched by using a graph search algorithm.
And searching a feasible collision-free path by using an A-algorithm, wherein the A-algorithm is a heuristic search algorithm, namely, searching in a state space evaluates each searched position to obtain the best position, and searching from the position to the target. This may omit a large number of featureless search paths, which may be useful for efficiency. In the a-algorithm, the distance between the connection points is used as the weight of the edge points, and the distance to the target points is used as the heuristic function.
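The following Python sketch combines the two PRM stages with an A* search over the resulting roadmap, using Euclidean distance both as the edge weight and as the heuristic, as described above. The map bounds, sample count, connection radius and the collision checkers passed in as is_free / segment_free are illustrative assumptions supplied by the caller.

```python
import heapq
import math
import random

def build_prm(is_free, segment_free, n_samples=200, connect_radius=1.5,
              bounds=(0.0, 10.0)):
    # Learning stage: sample collision-free points, then connect neighbours
    # whose straight-line link is collision-free (an undirected roadmap).
    lo, hi = bounds
    nodes = []
    while len(nodes) < n_samples:
        p = (random.uniform(lo, hi), random.uniform(lo, hi))
        if is_free(p):
            nodes.append(p)
    edges = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if math.dist(nodes[i], nodes[j]) <= connect_radius and \
                    segment_free(nodes[i], nodes[j]):
                edges[i].append(j)
                edges[j].append(i)
    return nodes, edges

def a_star(nodes, edges, start, goal):
    # Query stage: A* over the roadmap. g is the travelled distance, the
    # heuristic h is the straight-line distance to the goal node.
    h = lambda i: math.dist(nodes[i], nodes[goal])
    open_set = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_set:
        _, g, current, path = heapq.heappop(open_set)
        if current == goal:
            return path
        for nb in edges[current]:
            ng = g + math.dist(nodes[current], nodes[nb])
            if ng < best_g.get(nb, float("inf")):
                best_g[nb] = ng
                heapq.heappush(open_set, (ng + h(nb), ng, nb, path + [nb]))
    return None  # no collision-free path found on this roadmap
```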
Step three: modeling and simulating an ROS robot:
simulation is carried out in Rviz and Gazebo to realize drawing establishment, automatic navigation and mechanical arm grabbing of the target object
(3.1) urdf robot modeling. The description form of the whole organization is in the xml format, and a plurality of configuration files in the ROS are in the format, so that the label attributes can be used for conveniently describing each piece of related information, the organization is convenient, and the appearance is more intuitive. Problems with urdf modeling: the model is long and the repeated content is excessive; the modification of the parameters is too troublesome, and the secondary development is not convenient; there is no function of parameter calculation.
And (3.2) optimizing a robot urdf model. The evolutionary version of the urdf model, namely the xacro model file, simplifies the model code and provides a programmable interface.
And (3.3) building a Gazebo physical simulation environment. And configuring a robot model and creating a simulation environment.
And (3.4) compiling and importing programs such as drawing building and navigation into a simulation environment to complete simulation.
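As an example of exercising the navigation stack during this simulation test, the sketch below sends a single autonomous-navigation goal to move_base through its action interface; the goal coordinates, the "map" frame name and the node name are assumptions tied to the simulated environment rather than part of the invention.

```python
#!/usr/bin/env python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def send_nav_goal(x, y):
    # Connect to the move_base action server started by the navigation stack.
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"          # assumed global frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0         # identity orientation

    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()                          # e.g. SUCCEEDED / ABORTED

if __name__ == "__main__":
    rospy.init_node("nav_goal_demo")
    send_nav_goal(1.5, 0.5)                            # illustrative target
```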
Step four: the robot comprises the following components and characteristics:
the control system is the core for collecting and processing the attitude data of the wheeled robot and sending and receiving control commands, and directly determines the overall performance and the running state of the robot.
The development motherboard is intended to use Jetson Nano. Jetson Nano is chosen because Jetson Nano is more suitable for a robot to be completed after comparing advantages and disadvantages of raspberry pie and Jetson Nano. The raspberry pi is a small computer and Internet of things development mainboard. The internet of things gateway device is not only a low-power-consumption internet of things device, but also a good prototype design tool, and even can be used for building internet of things gateway devices. The Jetson Nano motherboard, published by engida, as a development kit, provides all the inputs and connections required for the internet of things solution when designing a prototype, and provides all the computing power (and also integrates a GPU) required for the internet of things solution when designing a prototype. Both have an ARM processor and a series of peripheral connections. The greatest difference between the two is that Jetson Nano of england contains a higher performance and more powerful GPU (graphics processor), while raspberry pi has a low power consumption VideoCore multimedia processor. Jetson Nano by engida is expensive, but has a powerful Maxwell GPU.
The overall frame diagram of the overall hardware system of the robot to be designed is shown in FIG. 1.
Based on the ROS distributed framework, the laser radar is used for collecting environmental information of a cleaning area, the SLAM function is achieved, and path planning and traversal of the cleaning area are carried out through an optimal path algorithm. In the traversing process of the robot, the algorithm is used for carrying out target detection and target classification on the image acquired by the camera, the coordinate and the angle information of the target are acquired as the input information of the sorting control module, and the motion control module executes a garbage grabbing task. The main characteristics are:
(1) cleaning large garbage (most sweeping robots in the market are cleaning dust or small-volume garbage);
(2) automatically traversing navigation and automatically identifying and grabbing garbage;
(3) garbage and non-garbage can be accurately identified through deep learning training;
(4) the method is suitable for smoothly detecting and classifying the targets of the embedded equipment by improving the algorithm.
A block diagram of the mobile robot motion recognition system is shown in FIG. 2.
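A minimal rospy sketch of the data flow just described is given below: the node subscribes to the camera, runs the detector on each frame, and publishes the target pose for the sorting and motion-control modules. The topic names, the "map" frame and the detect() placeholder are assumptions for illustration, not the exact interfaces of the invention.

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Image
from geometry_msgs.msg import PoseStamped
from cv_bridge import CvBridge

def detect(frame):
    # Placeholder for the Faster R-CNN detector described in step five.
    # Returns None when no garbage is found, else {"xy_in_map": (x, y)}.
    return None

class GarbageSpotter:
    def __init__(self):
        self.bridge = CvBridge()
        self.target_pub = rospy.Publisher("/garbage_target", PoseStamped, queue_size=1)
        rospy.Subscriber("/camera/color/image_raw", Image, self.on_image, queue_size=1)

    def on_image(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        detection = detect(frame)
        if detection is None:
            return  # nothing found: the robot keeps traversing and scanning
        target = PoseStamped()
        target.header.frame_id = "map"
        target.header.stamp = rospy.Time.now()
        target.pose.position.x, target.pose.position.y = detection["xy_in_map"]
        target.pose.orientation.w = 1.0
        self.target_pub.publish(target)  # input to the sorting control module

if __name__ == "__main__":
    rospy.init_node("garbage_spotter")
    GarbageSpotter()
    rospy.spin()
```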
Step five: visual identification:
(5.1) building a visual platform according with the requirement by comparing visual system schemes, and learning the principle of monocular vision;
(5.2) determining image preprocessing, wherein the flow is shown in figure 3.
And (5.3) learning a visual technology of deep learning.
Popular detection algorithms can be divided into two types. One type is the region-proposal-based R-CNN algorithms (R-CNN, Fast R-CNN, etc.); these are two-stage algorithms that first generate target candidate boxes, i.e. target positions, and then classify and regress the candidate boxes. The other type is one-stage algorithms such as YOLO and SSD, which use a single convolutional neural network (CNN) to directly predict the classes and positions of different targets. The first category is more accurate but slower; the second category is faster but less accurate.
The Faster R-CNN algorithm is adopted for its higher accuracy. First, colour images of several target objects in different poses are acquired with a Kinect2 camera, and the colour image of each target object in the dataset is labelled with the LabelImg tool; the labelling information and the colour images are then used as the samples for training the Faster R-CNN model, finally yielding an accurate target detection model based on Faster R-CNN. The Faster R-CNN-based target detection system mainly consists of a deep fully convolutional network that generates candidate regions and a Fast R-CNN detector that uses the candidate boxes. The Fast R-CNN detection module extracts features from the regions of interest proposed by the Region Proposal Network (RPN) module to complete the search for the target object.
Faster R-CNN replaces selective search: the regions to be detected are generated directly by the Region Proposal Network (RPN), which reduces the time to generate RoI regions from about 2 s to 10 ms. The detection pipeline is as follows, and a sketch using a pretrained Faster R-CNN follows the steps below.
a. First, extract feature maps for the whole image using the shared convolutional layers;
b. feed the feature maps into the RPN, which generates the candidate boxes (specifying the RoI positions) and performs a first correction of the RoI bounding boxes;
c. the RoI Pooling layer selects the features corresponding to each RoI on the feature maps according to the output of the RPN and fixes their dimension;
d. classify the boxes with fully connected layers (FC layers) and perform a second correction of the target bounding boxes.
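The following Python sketch shows how such a two-stage detector can be assembled from torchvision's pretrained Faster R-CNN and applied to a camera frame. The class count and labels, the score threshold, the function names and the weights="DEFAULT" argument (recent torchvision versions) are assumptions; the patent's own model is trained on the LabelImg-annotated Kinect2 images described above.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 4  # background + e.g. bottle, paper, toy (illustrative labels)

def build_detector(num_classes=NUM_CLASSES):
    # Two-stage detector: the RPN proposes candidate boxes, then the Fast
    # R-CNN head classifies and regresses each box (steps a-d above).
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # Replace the COCO head with one sized for the garbage-detection classes.
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

@torch.no_grad()
def detect_objects(model, image_bgr, score_threshold=0.7):
    # image_bgr: HxWx3 uint8 camera frame (numpy array). Returns the boxes,
    # labels and scores above the threshold.
    img = torch.from_numpy(image_bgr[:, :, ::-1].copy()).permute(2, 0, 1).float() / 255.0
    model.eval()
    out = model([img])[0]
    keep = out["scores"] > score_threshold
    return out["boxes"][keep], out["labels"][keep], out["scores"][keep]
```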
Step six: mechanical arm:
the ROS Moveit is a software package specially used for mechanical arm motion planning, integrates the latest results in the field of motion control, and provides a software platform for developing advanced robot applications. ROS Moveit! overall framework is shown in FIG. 4.
The working principle of this specific embodiment is as follows: first, the garbage sorting robot is placed in the working environment to be cleaned, and the robot is controlled with a handle or keyboard to build an indoor 2D map. The robot then navigates autonomously indoors, collects images through the camera, and processes the images with the algorithm to judge whether garbage is present; if not, it continues navigating and scanning. If garbage is found, a signal is sent so that the mobile robot navigates to the vicinity of the garbage. Having reached the garbage, the robot adjusts its position relative to the garbage according to the recognized garbage position, then controls the mechanical arm to grasp it, and continues the navigation scan after grasping.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
Furthermore, it should be understood that although the present description refers to embodiments, not every embodiment may contain only a single embodiment, and such description is for clarity only, and those skilled in the art should integrate the description, and the embodiments may be combined as appropriate to form other embodiments understood by those skilled in the art.

Claims (1)

1. A ROS-based household arrangement mobile robot, characterized in that it comprises the following steps:
Step one: localization and mapping algorithm:
SLAM comprises feature extraction, data association, state estimation, state update and feature update; 2D laser SLAM is adopted, with IMU data, odometry data and 2D lidar data as inputs;
Cartographer algorithm: a real-time grid map with a resolution of 5 cm is generated from data measured by sensors such as a laser range finder; Cartographer adopts a SLAM framework based on graph optimization and is divided into a front end and a back end: the front end is mainly responsible for scan-to-submap matching and loop closure detection, the processed lidar data is matched against the submap, and local loop closure detection is carried out when a submap has been generated and no new data frames are being inserted; after a submap is created, if a match that best fits the current pose estimate is found, it is added as a loop closure constraint; the back end is mainly responsible for optimizing the pose estimates and uses branch-and-bound together with precomputed grids to realize global loop closure detection;
Step two: path planning method:
the A* algorithm is used to search for a feasible collision-free path; A* is a heuristic search algorithm, i.e. the search in the state space evaluates each explored position, picks the best one, and continues the search from that position towards the goal; in the A* algorithm, the distance between connected points is used as the edge weight, and the distance to the goal point is used as the heuristic function;
Step three: ROS robot modeling and simulation:
simulation in Rviz and Gazebo to realize map building, autonomous navigation and grasping of the target object by the mechanical arm;
Step four: robot construction:
the control system is the core that collects and processes the attitude data of the wheeled robot and sends and receives control commands, and a Jetson Nano is adopted as the development board;
Step five: visual recognition:
(5.1) build a vision platform that meets the requirements by comparing vision system schemes, and study the principle of monocular vision;
(5.2) determine the image preprocessing;
(5.3) study deep-learning-based vision techniques;
Step six: mechanical arm setup: ROS MoveIt! is used as the software package for mechanical arm motion planning.
CN202110548387.4A 2021-05-19 2021-05-19 ROS-based household arrangement mobile robot Pending CN113110513A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110548387.4A CN113110513A (en) 2021-05-19 2021-05-19 ROS-based household arrangement mobile robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110548387.4A CN113110513A (en) 2021-05-19 2021-05-19 ROS-based household arrangement mobile robot

Publications (1)

Publication Number Publication Date
CN113110513A true CN113110513A (en) 2021-07-13

Family

ID=76722787

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110548387.4A Pending CN113110513A (en) 2021-05-19 2021-05-19 ROS-based household arrangement mobile robot

Country Status (1)

Country Link
CN (1) CN113110513A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106826822A (en) * 2017-01-25 2017-06-13 南京阿凡达机器人科技有限公司 A kind of vision positioning and mechanical arm crawl implementation method based on ROS systems
US20200047340A1 (en) * 2018-08-13 2020-02-13 Beijing Jingdong Shangke Information Technology Co., Ltd. System and method for autonomous navigation using visual sparse map
CN109341694A (en) * 2018-11-12 2019-02-15 哈尔滨理工大学 A kind of autonomous positioning air navigation aid of mobile sniffing robot
CN111055281A (en) * 2019-12-19 2020-04-24 杭州电子科技大学 ROS-based autonomous mobile grabbing system and method
CN112025729A (en) * 2020-08-31 2020-12-04 杭州电子科技大学 Multifunctional intelligent medical service robot system based on ROS

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
He Song (何松) et al., "Semantic Map Construction Based on Laser SLAM and Deep Learning", 《计算机技术与发展》 (Computer Technology and Development) *
Du Xuedan (杜学丹) et al., "A Robotic Arm Grasping Method Based on Deep Learning", 《机器人》 (Robot) *
Duan Rongjie (段荣杰), "Target Grasping Technology of a Mobile Manipulator in Indoor Environments", 《中国优秀硕士学位论文全文数据库 信息科技辑》 (China Master's Theses Full-text Database, Information Science and Technology) *
Guan Chenxi (管晨曦), "Design and Research of an Indoor Inspection Robot System Based on ROS", 《机电信息》 (Mechanical and Electrical Information) *
Jia Hao (贾浩), "SLAM and Navigation Robot Design Based on the Cartographer Algorithm", 《中国优秀博硕士学位论文全文数据库(硕士) 信息科技辑》 (China Doctoral and Master's Theses Full-text Database (Master's), Information Science and Technology) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113601504A (en) * 2021-08-04 2021-11-05 之江实验室 Robot limb action control method and device, electronic device and storage medium
CN114018268A (en) * 2021-11-05 2022-02-08 上海景吾智能科技有限公司 Indoor mobile robot navigation method
CN114089753A (en) * 2021-11-11 2022-02-25 江苏科技大学 Night astronomical assistant observation method based on wheeled robot
CN114355910A (en) * 2021-12-23 2022-04-15 西安建筑科技大学 Indoor robot autonomous map building navigation system and method based on Jetson Nano
CN114474054A (en) * 2022-01-14 2022-05-13 广东纯米电器科技有限公司 Humanoid robot navigation method
CN114577199A (en) * 2022-02-17 2022-06-03 广州大学 Garbage classification robot two-dimensional grid map construction system based on Gmapping algorithm
CN115299245A (en) * 2022-09-13 2022-11-08 南昌工程学院 Control method and control system of intelligent fruit picking robot
CN115299245B (en) * 2022-09-13 2023-07-14 南昌工程学院 Control method and control system of intelligent fruit picking robot
CN117428792A (en) * 2023-12-21 2024-01-23 商飞智能技术有限公司 Operating system and method for robot

Similar Documents

Publication Publication Date Title
CN113110513A (en) ROS-based household arrangement mobile robot
Bormann et al. Room segmentation: Survey, implementation, and analysis
US10372968B2 (en) Object-focused active three-dimensional reconstruction
US11747813B1 (en) Method for developing navigation plan in a robotic floor-cleaning device
Wurm et al. Hierarchies of octrees for efficient 3d mapping
KR102629036B1 (en) Robot and the controlling method thereof
CN111368759B (en) Monocular vision-based mobile robot semantic map construction system
US20200334887A1 (en) Mobile robots to generate occupancy maps
CN114089752B (en) Autonomous exploration method for robot, and computer-readable storage medium
CN111709988A (en) Method and device for determining characteristic information of object, electronic equipment and storage medium
CN116448118B (en) Working path optimization method and device of sweeping robot
CN111739066B (en) Visual positioning method, system and storage medium based on Gaussian process
CN114091515A (en) Obstacle detection method, obstacle detection device, electronic apparatus, and storage medium
CN107728612A (en) Identify that different crowd carries out method, storage device and the mobile terminal of advertisement pushing
CN116109047A (en) Intelligent scheduling method based on three-dimensional intelligent detection
Finean et al. Motion planning in dynamic environments using context-aware human trajectory prediction
CN117948976B (en) Unmanned platform navigation method based on graph sampling and aggregation
WO2019079598A1 (en) Probabilistic object models for robust, repeatable pick-and-place
Lee et al. Autonomous view planning methods for 3D scanning
CN116520302A (en) Positioning method applied to automatic driving system and method for constructing three-dimensional map
Beevers Mapping with limited sensing
Nguyen et al. Language-Conditioned Observation Models for Visual Object Search
Liu et al. Active and Interactive Mapping with Dynamic Gaussian Process Implicit Surfaces for Mobile Manipulators
Zhang et al. Path Planning Model of Intelligent Robot based on Computer Vision Recognition Algorithm
Zoto Navigation Algorithms for Unmanned Ground Vehicles in Precision Agriculture Applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20210713)