CN112180937A - Subway carriage disinfection robot and automatic navigation method thereof - Google Patents


Info

Publication number
CN112180937A
CN112180937A
Authority
CN
China
Prior art keywords
laser radar, robot, pose, radar scanning, sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011098810.7A
Other languages
Chinese (zh)
Inventor
史聪灵
车洪磊
李建
胡鹄
任飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Academy of Safety Science and Technology CASST
Original Assignee
China Academy of Safety Science and Technology CASST
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by China Academy of Safety Science and Technology CASST filed Critical China Academy of Safety Science and Technology CASST
Priority to CN202011098810.7A (critical patent CN112180937A)
Priority to CN202110365642.1A (patent CN113126621A)
Publication of CN112180937A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0257: Control of position or course in two dimensions specially adapted to land vehicles using a radar


Abstract

The invention relates to a subway carriage disinfection robot and an automatic navigation method thereof, and belongs to the technical field of robots. The flow channel of the disinfection robot's atomizing plate communicates with a micro air compressor through an air inlet; air outlets facing in opposite directions are provided on the side wall of the vertical pipe of the flow channel, and an air hole is further provided directly below the atomizing plate. Pressurized air entering the flow channel is ejected and forms a swirling flow toward the air hole, where the atomized particles are broken up a second time; that is, a pneumatic ultrasonic atomizer is adopted, so that the mist volume after atomization is larger and the droplet particle size is finer. The automatic navigation method combines a laser radar with a visual odometer or visual navigation and adopts a closed-loop SLAM mapping navigation strategy, achieving automatic, accurate navigation of the robot inside a subway carriage; this overcomes the corridor navigation problem of relying on laser radar alone for positioning and also avoids the strong sensitivity of visual odometry or visual navigation to lighting.

Description

Subway carriage disinfection robot and automatic navigation method thereof
Technical Field
The invention relates to a subway carriage disinfection robot and an automatic navigation method thereof, and belongs to the technical field of robots.
Background
During the COVID-19 epidemic, parts frequently touched by passengers, such as subway station security-check equipment, self-service ticketing machines, gates, stair handrails, escalator handrails, public washroom door handles, door curtains, and the four walls and interior and exterior buttons of vertical elevator cars, as well as public areas of subway stations, other operating equipment and facilities, and washrooms, need to be disinfected regularly. In addition, the interior of each carriage needs spray disinfection before a train leaves the depot or when it returns to a terminal station. However, most existing approaches rely on manual disinfection, which depends heavily on the skill, technique and subjectivity of personnel, leaves hidden spaces that manual disinfection can hardly cover, and cannot achieve full disinfection coverage. Replacing workers with automatic, intelligent disinfection robots to regularly disinfect subway carriages and operating equipment and facilities is therefore the future direction of technical change and upgrading in this field.
The structures of all carriages of a subway train are almost completely identical; even a person inside a carriage can only tell carriages apart by their signage, not by their internal structure. Such symmetric, repetitive structures form a typical corridor environment, and corridor navigation is a difficult problem for mobile robots. With only a laser radar or another single navigation device, the robot easily loses its coordinates and cannot accomplish automatic navigation and autonomous disinfection tasks.
Disclosure of Invention
In order to solve the above problems, an object of the present invention is to provide a subway carriage disinfection robot and an automatic navigation method thereof. The automatic navigation method combines a laser radar with a visual odometer or visual navigation and adopts a closed-loop SLAM mapping navigation strategy, achieving automatic, accurate navigation of the robot inside a subway carriage.
The purpose of the invention is realized by the following technical scheme:
a subway carriage disinfection robot comprises a robot body and an atomization system, wherein the atomization system is arranged in a cavity in the robot body and is sequentially provided with a fog outlet, a fog outlet hose and an atomization box from top to bottom, and the bottom of the atomization box is provided with a plurality of atomization plates;
a piezoelectric wafer is arranged below the piezoelectric substrate of each atomizing plate, and liquid is atomized on the flow channel of the atomizing plate; the flow channel communicates with a micro air compressor through an air inlet, air outlets facing in opposite directions are provided on the side wall of the vertical pipe of the flow channel, and an air hole with a diameter of 0.10-0.85 mm is further provided directly below the atomizing plate; air ejected from the pressurized air entering the flow channel forms a swirling flow toward the air hole, where the atomized particles are broken up a second time.
Further, a water storage tank is arranged in the atomization tank, and a float valve is arranged in the water storage tank; the surface of the atomization plate is provided with a liquid level sensor.
Furthermore, the disinfection robot also comprises a driving mechanism, a main control module, a control module and a power supply module, wherein:
the driving mechanism comprises two driving wheels and two driven wheels, the driving wheels are driven by a servo hub motor, and the driven wheels are universal wheels;
the main control module comprises an industrial personal computer, a laser radar sensor and a wireless module, wherein the laser radar sensor is arranged at the top of the robot body;
the control module comprises a servo controller and a control panel;
the power module comprises an electric control panel button and a lithium battery.
An automatic navigation method of a subway carriage disinfection robot comprises the following steps:
step 1, generating a subgraph by a laser radar scanning frame continuously scanned within a period of time, wherein the subgraph adopts a map expression model of a probability grid, and when a new laser radar scanning frame is inserted into the probability grid, the grid state is updated;
step 2, before the laser radar scanning frame is inserted into the subgraph, optimizing the pose of the laser radar scanning frame and the current subgraph through a Ceres Solver;
step 3, optimizing the poses of all the laser radar scanning frames and the subgraphs by a sparse pose adjustment method, caching the poses of the laser radar scanning frames when the sub-graphs are inserted into a memory for loop detection, and performing loop detection on all the laser radar scanning frames and the sub-graphs when the sub-graphs are not changed any more, so that the accumulated errors of the sub-graphs are eliminated, and an environment map is obtained;
step 4, mounting a binocular camera at the top of the disinfection robot platform and capturing environmental information every k seconds, the image captured by the left camera at time n being the left image I_nl and the image captured by the right camera at time n being the right image I_nr;
step 5, detecting and describing feature points in the left image I_nl and the right image I_nr respectively, and matching the feature points of I_nl and I_nr to obtain the image f_nlr;
step 6, matching the feature points of the consecutive frame images f_(n-1)lr and f_nlr;
and step 7, reconstructing the three-dimensional coordinates of the matched feature points of the consecutive frames by triangulation and performing motion estimation to obtain the global pose of the robot.
Further, in step 1, the laser radar scanning frame is subjected to region segmentation by an adaptive threshold segmentation method, and the 3D point cloud image is processed into a 2D plan view.
Further, in step 3, a branch-and-bound scanning matching algorithm is used for accelerating the loop detection and the solving process of the relative pose, a search window is determined, and a loop is constructed by adopting a searching method.
An automatic navigation method of a subway carriage disinfection robot comprises the following steps:
step 1, generating a subgraph by a laser radar scanning frame continuously scanned within a period of time, wherein the subgraph adopts a map expression model of a probability grid, and when a new laser radar scanning frame is inserted into the probability grid, the grid state is updated;
step 2, before the laser radar scanning frame is inserted into the subgraph, optimizing the pose of the laser radar scanning frame and the current subgraph through a Ceres Solver;
step 3, optimizing the poses of all the laser radar scanning frames and the subgraphs by a sparse pose adjustment method, caching the poses of the laser radar scanning frames when the sub-graphs are inserted into a memory for loop detection, and performing loop detection on all the laser radar scanning frames and the sub-graphs when the sub-graphs are not changed any more, so that the accumulated errors of the sub-graphs are eliminated, and an environment map is obtained;
step 4, in the camera-based observation model, expressing the observation Z_i(t) of environmental feature i acquired by the camera at time t in polar coordinates:

Z_i(t) = (ρ_i(t), φ_i(t))^T

where ρ_i(t) and φ_i(t) respectively represent the distance relation and the angle relation of environmental feature i to the robot;
step 5, expressing the camera-based observation model as:
Z(k)=h(X(k))+V(k)
where X(k) is the pose of the robot at time k, and Z(k), h(·) and V(k) are respectively the observation information, observation function and observation noise of the camera at time k;
step 6, with the pose of the robot at time k being (x_t(k), y_t(k), θ_t(k))^T and the camera observing N features, each with observed position (x_i, y_i), the following expression is obtained:

Z(k) = [ρ_1(k), φ_1(k), ..., ρ_N(k), φ_N(k)]^T

ρ_i(k) = sqrt((x_i - x_t(k))² + (y_i - y_t(k))²)
φ_i(k) = arctan((y_i - y_t(k)) / (x_i - x_t(k))) - θ_t(k)

where (x_i, y_i) are the coordinates of the i-th landmark feature detected by the sensor at time k; ρ_i(k) and φ_i(k) respectively express the distance relation and the angle relation of the i-th landmark to the robot at time k; and x_t(k), y_t(k), θ_t(k) represent the pose of the mobile robot at time k;
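The range-bearing observation of a single landmark can be sketched numerically; the following is a minimal illustration assuming the standard model (ρ is the distance, φ the bearing relative to the robot's heading), with all function and variable names being my own:

```python
import math

def observe(pose, landmark):
    """Range-bearing observation of one landmark from robot pose (x, y, theta)."""
    x, y, theta = pose
    lx, ly = landmark
    rho = math.hypot(lx - x, ly - y)          # distance to the landmark
    phi = math.atan2(ly - y, lx - x) - theta  # bearing relative to heading
    # normalize the bearing to (-pi, pi]
    phi = (phi + math.pi) % (2 * math.pi) - math.pi
    return rho, phi

# robot at the origin facing +x, landmark at (1, 1)
rho, phi = observe((0.0, 0.0, 0.0), (1.0, 1.0))
```

The normalization step keeps the angle difference well defined when the raw subtraction leaves the principal interval.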
and 7, recording the relative position of the camera and the environmental characteristics.
Further, in step 1, the laser radar scanning frame is subjected to region segmentation by an adaptive threshold segmentation method, and the 3D point cloud image is processed into a 2D plan view.
Further, in step 3, a branch-and-bound scanning matching algorithm is used for accelerating the loop detection and the solving process of the relative pose, a search window is determined, and a loop is constructed by adopting a searching method.
The invention has the beneficial effects that:
the disinfection robot adopts pneumatic ultrasonic atomizer for atomizing back fog volume grow, and the droplet particle size is thinner.
Combining the laser radar with a visual odometer or visual navigation, and building the map with a closed-loop SLAM navigation strategy, achieves automatic, accurate navigation of the robot inside a subway carriage; this overcomes the corridor navigation problem of positioning by laser radar alone and also avoids the strong sensitivity of visual odometry or visual navigation to lighting.
When the laser radar constructs the environment map, the branch-and-bound method evaluates candidate values efficiently; with a depth-first branch-and-bound scan matching algorithm, the robot can search for target points while running, with fast feedback. Combining the laser radar with the visual odometer makes the pose estimation result more accurate and better adapted to the environment, and the driving range of the robot is obtained with a scheme based on feature point matching.
In addition, the pose of the mobile robot at the moving moment can be observed through a robot vision system.
Drawings
FIG. 1 is a schematic view of an appearance structure of a subway carriage disinfection robot;
FIG. 2 is a schematic diagram of the internal structure of the subway carriage disinfection robot;
FIG. 3 is a schematic view of the structure of the atomization box;
FIG. 4 is a schematic diagram of the atomization system;
FIG. 5 is a schematic diagram of the atomizer;
FIG. 6 is a schematic view of an air outlet structure of an air hole of a flow channel of an atomization plate;
FIG. 7 is a schematic diagram of an atomization plate structure;
FIG. 8 is a schematic view of the overall atomizer assembly;
fig. 9 is a schematic diagram of updating the probability grid states in embodiments 2 and 3;
1-a hand-held handle, 2-a fog outlet, 3-a fog outlet hose, 4-an atomization box, 5-a control module, 6-a fan, 7-a driving wheel, 8-a driven wheel, 9-a lithium battery, 10-a water outlet, 11-a laser radar sensor, 12-a main control module, 13-a water storage box, 14-a fog outlet pipe, 15-a fan air inlet, 16-a ball float valve inlet, 17-a water inlet, 18-a ventilation port, 19-an air outlet, 20-an atomization plate, 21-a water inlet, 22-a micro air compressor, 23-an air hole, 24-a piezoelectric wafer, 25-an air receiving source, 26-an air outlet, 27-an air outlet direction, 28-an air inlet, 29-an atomization sheet and 30-a liquid level sensor.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
A subway carriage disinfection robot comprises a robot body and an atomization system. As shown in figures 2 and 4, the atomization system is arranged in a cavity inside the robot body and comprises, from top to bottom, a mist outlet 2, a mist-outlet hose 3 and an atomization box 4. As shown in fig. 8, the bottom of the atomization box 4 is provided with a plurality of atomizing plates 20.
The piezoelectric wafer 24 is arranged below the piezoelectric substrate of the atomizing plate; when energized, the piezoelectric wafer vibrates and generates multiple waveforms, and liquid is atomized on the flow channel of the atomizing plate under the action of motion inertia. Meanwhile, as shown in fig. 5, an air hole 23 with a diameter of 0.1 mm is also provided below the atomizing plate. The air-outlet structure of the air hole 23 is shown in fig. 6: the flow channel communicates with the micro air compressor 22 through the air inlet 28, pressurized air enters the flow channel through the air inlet 28, and two air outlets facing in opposite directions are provided on the side wall of the vertical pipe of the flow channel. The ejected air forms a swirling flow toward the air hole 23, where the atomized particles are broken up a second time, i.e. a secondary atomization, thus forming a pneumatic ultrasonic atomizer that further improves the liquid atomization effect while greatly reducing power consumption.
The disinfection robot also comprises a driving mechanism, a main control module 12, a control module 13 and a power supply module, wherein:
the driving mechanism comprises two driving wheels 7 and two driven wheels 8, the driving wheels 7 are driven by a servo hub motor, and the driven wheels 8 are universal wheels.
The main control module 12 comprises an industrial personal computer, a laser radar sensor 11 and a wireless module, wherein the laser radar sensor is arranged at the top of the robot body.
The control module 13 includes a servo controller and a control board.
The power module comprises an electronic control panel button and a lithium battery 9.
For the positioning and navigation of the robot, a camera or a binocular camera is also arranged on the disinfection robot.
The atomizing plate structure is shown in fig. 7; the power supply is 48 V. A liquid level sensor is provided on the surface of the atomizing plate 20: when the water level is low, the liquid level sensor transmits a switching signal to the main control module, and the main control module issues the start and stop commands of the atomizer.
As shown in fig. 3, a schematic view of the atomization box, the top of the atomization box is provided with: a mist-outlet pipe port 14 for connecting the mist-outlet hose 3; a fan air inlet 15 for supplying air to the atomizer; a float valve inlet 16 for installing the float valve in the water storage tank 13; a water inlet 17 for filling the water storage tank 13; and a vent 18 for ensuring gas circulation inside the atomization box. In addition, a water outlet 10 is provided at the bottom of the atomization box for draining the water storage tank 13 and the atomizer.
A water storage tank 13 is provided inside the atomization box 4; water is stored through the water inlet 17, and the tank communicates with the atomizer through the water inlet 21. A float valve inside the water storage tank 13 controls the liquid level; the water level is not disturbed by water pressure, and the valve opens and closes tightly without leaking.
Fig. 1 is a schematic view of the appearance of the disinfection robot. Handles 1 are provided on the top and sides of the robot, which is mainly used for subway carriage disinfection, making it convenient to carry and move: once placed in a subway carriage, it is started to perform the disinfection operation.
A plurality of mist outlets can be provided via the mist-outlet device, arranged at the top, left, right and rear of the robot to disinfect the carriage. Since the mist particles are about 5 microns, the mist can spread through the whole carriage and disinfect carriage handles, seats and hygiene dead corners.
Example 2
An automatic navigation method of a subway carriage disinfection robot comprises the following steps:
step 1, generating a subgraph by a laser radar scanning frame continuously scanned within a period of time, wherein the subgraph adopts a map expression model of a probability grid, and when a new laser radar scanning frame is inserted into the probability grid, the grid state is updated, specifically as follows:
the pose of the robot can be set as xi (xi)xyθ) To represent ξxAnd xiyIndicating the amount of translation, ξ, in the x and y directionsθRepresenting the amount of rotation in a two-dimensional plane. Recording the data measured by the lidar sensor
Figure RE-GDA0002768008070000071
The initial laser point is
Figure RE-GDA0002768008070000072
The pose transformation of the mapping of the scanning data frame of the laser radar to the subgraph is recorded as TξCan be mapped to the subgraph coordinate system by formula (1):
Figure RE-GDA0002768008070000073
in the formula, RiRotation angle, t, representing the pose of the lidar observationiAnd (4) representing the translation coordinate of the observation pose of the laser radar, and p representing the probability of the scanning point to the sub-graph.
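As a concrete illustration of formula (1), the following is a minimal numpy sketch (function and variable names are my own) of mapping scan points into the subgraph frame:

```python
import numpy as np

def scan_to_submap(points, xi):
    """Apply T_xi to 2D scan points: rotate by xi_theta, then translate by (xi_x, xi_y)."""
    xi_x, xi_y, xi_theta = xi
    c, s = np.cos(xi_theta), np.sin(xi_theta)
    R = np.array([[c, -s], [s, c]])   # rotation R_xi
    t = np.array([xi_x, xi_y])        # translation t_xi
    return points @ R.T + t           # T_xi p = R_xi p + t_xi for each row p

# a point 1 m ahead of a robot at (2, 3) rotated 90 degrees lands at (2, 4)
pts = np.array([[1.0, 0.0]])
mapped = scan_to_submap(pts, (2.0, 3.0, np.pi / 2))
```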
A subgraph is generated from the lidar scan frames continuously acquired over a period of time; the subgraph adopts a probability-grid map model. When a new lidar scan frame is inserted into the probability grid, the grid states are computed; each grid cell has a hit state and a miss state. For every hit, the nearest grid cell is inserted into the hit set, and all relevant cells on the ray connecting the scan center and the scan point are added to the miss set. Each previously unobserved grid cell is assigned an initial probability value, and already-observed cells are updated according to formula (2). This is shown schematically in fig. 9, where cells marked with crosses and shading indicate hits and cells with shading only indicate misses.
odds(p) = p / (1 - p)    (2)

M_new(x) = clamp(odds^(-1)(odds(M_old(x)) · odds(p_hit)))

where odds denotes the occurrence ratio, M denotes the updated value of the grid cell, and the clamp function limits a randomly varying value to a given range: it returns the value if it lies between the two numbers min and max, min if the value is less than min, and max if the value is greater than max.
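A minimal sketch of the odds update in formula (2); the clamp bounds below are illustrative assumptions, not values given in the text:

```python
def odds(p):
    """Occurrence ratio of a probability."""
    return p / (1.0 - p)

def odds_inv(o):
    """Inverse of odds: recover the probability from the ratio."""
    return o / (1.0 + o)

def clamp(v, lo, hi):
    """Limit v to the range [lo, hi]."""
    return max(lo, min(hi, v))

def update_cell(m_old, p_hit, lo=0.12, hi=0.97):
    """M_new = clamp(odds_inv(odds(M_old) * odds(p_hit))); lo/hi bounds assumed."""
    return clamp(odds_inv(odds(m_old) * odds(p_hit)), lo, hi)

m_new = update_cell(0.5, 0.55)  # first hit on a cell with prior 0.5
```

With a prior of 0.5 the odds factor is 1, so the first update simply moves the cell to p_hit; repeated hits push it up until the clamp bound takes effect.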
Line-segment features are extracted and fitted in the lidar scan frame, and the lidar scan points are then translated onto the fitted line segment. Field tests show that this method effectively reduces the error of the lidar scan points and improves the clarity and sharpness of the mapped edges.
Step 2, before a lidar scan frame is inserted into the subgraph, the pose of the scan frame relative to the current subgraph is optimized with the Ceres Solver; the problem can be converted into a nonlinear least squares problem:

argmin_ξ Σ_{k=1..K} (1 - M_smooth(T_ξ h_k))²    (3)

where ξ is the scan pose, T_ξ transforms the scan-point coordinates h_k into subgraph coordinates according to that pose, and M_smooth: R² → R smooths the probability values in the local subgraph; this function uses bicubic interpolation.
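A tiny grid-search sketch of this scan-matching cost (standing in for the Ceres solver, and with bilinear rather than bicubic smoothing for brevity; all names are mine):

```python
import numpy as np

def m_smooth(grid, p):
    """Bilinearly interpolated probability at continuous grid coordinate p = (x, y)."""
    x, y = p
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    g = grid[y0:y0 + 2, x0:x0 + 2]
    return (g[0, 0] * (1 - fx) * (1 - fy) + g[0, 1] * fx * (1 - fy)
            + g[1, 0] * (1 - fx) * fy + g[1, 1] * fx * fy)

def scan_cost(grid, points, xi):
    """Sum over k of (1 - M_smooth(T_xi h_k))^2, the least-squares cost above."""
    c, s = np.cos(xi[2]), np.sin(xi[2])
    R = np.array([[c, -s], [s, c]])
    t = np.asarray(xi[:2])
    return sum((1.0 - m_smooth(grid, R @ h + t)) ** 2 for h in points)

# toy submap: an occupied column at x = 2; the scan is a vertical "wall" at x = 0
grid = np.zeros((6, 6)); grid[:, 2] = 1.0
points = np.array([[0.0, 1.0], [0.0, 2.0], [0.0, 3.0]])
# search translations along x: the shift dx = 2 aligns the wall, giving zero cost
costs = {dx: scan_cost(grid, points, (float(dx), 0.0, 0.0)) for dx in (0, 1, 2, 3)}
best = min(costs, key=costs.get)
```

A real solver descends this cost with analytic derivatives instead of enumerating candidates, but the objective is the same.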
Step 3: because each lidar scan frame is matched only against the current subgraph, the environment map is composed of a series of subgraphs and accumulated error exists. The poses of all lidar scan frames and subgraphs are optimized by the Sparse Pose Adjustment (SPA) method: the pose of each lidar scan frame at the time it is inserted into a subgraph is cached in memory for loop detection; once a subgraph no longer changes, loop detection is performed on all lidar scan frames and subgraphs, eliminating the accumulated error of the subgraphs and yielding the environment map.
Specifically, the optimization problem constructed with the sparse pose adjustment method is expressed mathematically as:

argmin_{Ξ^m, Ξ^s} (1/2) Σ_{ij} ρ(E²(ξ_i^m, ξ_j^s; Σ_ij, ξ_ij))    (4)

where Ξ^m = {ξ_i^m}, i = 1, ..., m and Ξ^s = {ξ_j^s}, j = 1, ..., s respectively denote the set of subgraph poses (m being the number of subgraphs) and the set of feature-point poses (s being the number of feature points) under the given constraints, and ρ is a loss function. The relative pose ξ_ij describes where feature point j was matched within subgraph i and, together with the associated covariance matrix Σ_ij, forms an optimization constraint. The cost of this constraint, expressed with the residual E, can be computed by formula (5):

E²(ξ_i^m, ξ_j^s; Σ_ij, ξ_ij) = e(ξ_i^m, ξ_j^s; ξ_ij)^T Σ_ij^(-1) e(ξ_i^m, ξ_j^s; ξ_ij)    (5)

e(ξ_i^m, ξ_j^s; ξ_ij) = ξ_ij - ( R_{ξ_i^m}^(-1) (t_{ξ_i^m} - t_{ξ_j^s}) ; ξ_{i,θ}^m - ξ_{j,θ}^s )

where ξ_i^m and ξ_j^s are the subgraph pose and the feature-point pose under the same radar frame, R_{ξ_i^m} and t_{ξ_i^m} are the rotation and translation of the subgraph pose, and Σ_ij^(-1) is the inverse of the constraint covariance matrix.
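A numpy sketch of the residual e and the cost E² of formula (5) (variable names are mine; an identity covariance is assumed for the toy check):

```python
import numpy as np

def residual(xi_sub, xi_scan, xi_ij):
    """e = xi_ij - (R_sub^{-1} (t_sub - t_scan); theta_sub - theta_scan)."""
    tx, ty, th = xi_sub
    c, s = np.cos(th), np.sin(th)
    R_inv = np.array([[c, s], [-s, c]])  # inverse (transpose) of the 2D rotation
    dt = R_inv @ (np.array([tx, ty]) - np.array(xi_scan[:2]))
    dth = th - xi_scan[2]
    return np.array(xi_ij) - np.array([dt[0], dt[1], dth])

def cost(xi_sub, xi_scan, xi_ij, sigma_inv):
    """E^2 = e^T Sigma^{-1} e."""
    e = residual(xi_sub, xi_scan, xi_ij)
    return float(e @ sigma_inv @ e)

# if the constraint exactly matches the relative pose, the residual is zero
e = residual((1.0, 2.0, 0.0), (0.0, 0.0, 0.0), (1.0, 2.0, 0.0))
```

The global optimization sums such costs over all constraints and adjusts all subgraph and scan poses jointly.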
In addition, in this embodiment, a branch-and-bound scan matching algorithm is used to accelerate loop detection and the solution of the relative pose: a search window is determined and the loop is constructed by a search method.

ξ* = argmax_{ξ∈W} Σ_{k=1..K} M_nearest(T_ξ h_k)    (6)

where ξ is the scan pose, W is the search space, M_nearest is the value of M at the grid point nearest to the argument, T_ξ maps into subgraph coordinates, and h_k is the pose of the k-th feature point. W is obtained by substituting the feature points into the following formulas:

d_max = max_{k=1,...,K} ||h_k||    (7)

δ_θ = arccos(1 - r² / (2 d_max²))

where d_max represents the distance from the farthest feature point to the origin and r represents the resolution; the resulting angular step δ_θ is substituted into formula (8):

w_x = ⌈W_x / r⌉,  w_y = ⌈W_y / r⌉,  w_θ = ⌈W_θ / δ_θ⌉    (8)

where W_x and W_y are the values of the linear (radar-level) search range and W_θ is the value of the scan angle range. Taking W_x = W_y = 7 m and W_θ = 30°, the linear and angular window sizes are calculated from the above formula, and the resulting values are substituted into formula (9):

W̄ = {-w_x, ..., w_x} × {-w_y, ..., w_y} × {-w_θ, ..., w_θ}    (9)

W = {ξ_0 + (r j_x, r j_y, δ_θ j_θ) : (j_x, j_y, j_θ) ∈ W̄}

where ξ_0 represents the feature-point pose at the center of the initial search box and j_x, j_y, j_θ range over the linear and angular steps of the search box. Formula (9) gives the size of the search space; the obtained W is substituted into formula (6) to solve the constraint relation and then perform loop optimization.
The global optimization mainly uses graph optimization: the subgraph poses established after local optimization are added into the global pose optimization, which is accelerated with the branch-and-bound method to detect loops and solve relative poses. Compared with the traditional approach of detecting loops in advance, solving the relative pose in this way gives a more uniform structure and turns loop construction into a search process. Moreover, once the tree structure over the discrete candidate solution space is built, searching for the solution in the tree is very fast, and the bounds of the nodes chosen while building the tree are completed by precomputation over the subgraphs. Because of this intermediate structure the loop-closing process can be completed in real time, and the loops continually adjust the subgraphs to eliminate accumulated error.
With the branch-and-bound method, values can be calculated efficiently; by adopting a depth-first branch-and-bound scan matching algorithm, the robot can search for target points while running, with fast feedback and high accuracy.
Step 4, a binocular camera is mounted at the top of the disinfection robot platform and captures environmental information every k seconds; the image captured by the left camera at time n is the left image I_nl, and the image captured by the right camera at time n is the right image I_nr.
Step 5, feature points are detected and described in the left image I_nl and the right image I_nr respectively, and the feature points of I_nl and I_nr are matched to obtain the image f_nlr.
Step 6, the feature points of the consecutive frame images f_(n-1)lr and f_nlr are matched.
Step 7, the three-dimensional coordinates of the matched feature points of the consecutive frames are reconstructed by triangulation and motion estimation is performed to obtain the global pose of the robot.
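Steps 4 to 7 can be illustrated with a self-contained sketch of the two geometric pieces: linear (DLT) triangulation of matched stereo points and rigid motion estimation between the two reconstructed point sets via an SVD (Kabsch) alignment. The camera matrices and points below are invented for the toy check, and the code stands in for the feature detection and matching stages:

```python
import numpy as np

def triangulate(P_l, P_r, pt_l, pt_r):
    """Linear (DLT) triangulation of one stereo correspondence."""
    A = np.vstack([pt_l[0] * P_l[2] - P_l[0],
                   pt_l[1] * P_l[2] - P_l[1],
                   pt_r[0] * P_r[2] - P_r[0],
                   pt_r[1] * P_r[2] - P_r[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def estimate_motion(Q_prev, Q_curr):
    """Rigid transform (R, t) with Q_curr ~ R @ Q_prev + t, via SVD (Kabsch)."""
    mu_p, mu_c = Q_prev.mean(axis=0), Q_curr.mean(axis=0)
    H = (Q_prev - mu_p).T @ (Q_curr - mu_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_c - R @ mu_p
    return R, t

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# toy stereo rig: identity left camera, right camera shifted 0.1 m along x
P_l = np.hstack([np.eye(3), np.zeros((3, 1))])
P_r = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
pts3d = np.array([[0.0, 0.0, 2.0], [0.5, 0.2, 3.0], [-0.4, 0.1, 2.5], [0.2, -0.3, 4.0]])

rec = np.array([triangulate(P_l, P_r, project(P_l, X), project(P_r, X)) for X in pts3d])
# simulate pure forward motion of 0.1 m between frames and recover it
R, t = estimate_motion(rec, rec + np.array([0.0, 0.0, -0.1]))
```

In a real pipeline the correspondences come from matched feature descriptors rather than known 3D points, and the motion estimate is refined robustly (e.g. inside RANSAC).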
The coordinate system of the binocular camera is generally established with the left camera's coordinate system as reference. The rigid-body transformation

T_{n,n-1} = [ r_{n,n-1}  t_{n,n-1} ; 0  1 ]

represents the pose change of the binocular camera between the two adjacent frames n-1 and n, i.e. the inter-frame pose change of the mobile robot, where r_{n,n-1} is the attitude change, a 3 × 3 matrix, and t_{n,n-1} is the position change, a 3 × 1 vector. The pose corresponding to the image I_n captured by the left and right cameras is denoted C_n, and C_n = T_{n,n-1} C_{n-1}. When the subscript for two adjacent moments is reduced to the later moment alone (r_{n,n-1} becomes r_n), this expresses the most important role of the visual odometer: taking the image I_{n-1} at time n-1 and the image I_n at time n simultaneously and computing the pose change T_n between them.
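The chaining rule C_n = T_{n,n-1} C_{n-1} can be sketched with homogeneous 4 × 4 transforms. The toy motion below (four identical quarter-turn steps, which together close a loop) is an assumed example, not data from the text:

```python
import numpy as np

def make_T(r, t):
    """Assemble the 4x4 rigid-body transform from a 3x3 rotation r and translation t."""
    T = np.eye(4)
    T[:3, :3] = r
    T[:3, 3] = t
    return T

def rot_z(a):
    """Rotation by angle a about the z axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# accumulate the global pose: C_n = T_{n,n-1} @ C_{n-1}
C = np.eye(4)                                  # C_0: start at the origin
for _ in range(4):                             # four identical inter-frame motions
    T = make_T(rot_z(np.pi / 2), np.array([1.0, 0.0, 0.0]))
    C = T @ C
# four quarter turns with unit forward steps trace a square and return to the start
```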
The complete positioning information of the mobile robot in three-dimensional space has six degrees of freedom: the three coordinates x, y, z and the three Euler angles ψ, θ, φ. Without the assistance of other positioning and navigation methods, monocular vision can only recover the three Euler angles ψ, θ, φ plus position information known only up to a scale factor, i.e. five degrees of freedom. To obtain positioning information of the robot in all six degrees of freedom, a visual odometer model based on a binocular camera is usually selected. Among methods for matching consistency information between two adjacent frames, dense matching methods such as optical flow lose matching accuracy when the illumination changes and are computationally expensive; therefore, to make the pose estimation of the visual odometer more accurate and better adapted to the environment, this embodiment adopts a scheme based on feature point matching.
Preferably, in step 1, the lidar scan frame is region-segmented by an adaptive threshold segmentation method, and the 3D point cloud is processed into a 2D plane map.
When a large number of points fluctuate within a lidar scan frame, the probability calculation and updating of grid cells during map construction are affected, making the map unclear or even wrong. The lidar measures distance once per degree, obtaining 360 scan points per revolution; consequently, on a wall close to the lidar the scan points are dense, while on a wall far from the lidar they are sparse.
If a fixed threshold were used for region segmentation, a single wall far from the lidar would be cut into many pieces, which obviously does not meet the requirement of region segmentation. For this reason, an adaptive threshold segmentation method should be adopted: if the threshold for region segmentation is set to g when the distance between a scan point and the lidar is d, it is set to 2g when that distance is 2d, which reduces the influence of distance on the region segmentation.
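The adaptive threshold rule can be sketched as follows; a minimal Python illustration in which the threshold coefficient `g_per_d` (threshold per unit distance) is an assumed tuning value, not specified by the text:

```python
import numpy as np

def segment_scan(ranges, g_per_d=0.05):
    """Split a 360-point lidar scan into regions with a distance-proportional threshold.

    ranges  : array of K range measurements, one per degree of rotation.
    g_per_d : threshold per unit distance (g = g_per_d * d), an assumed tuning value.
    Returns a list of (start, end) index pairs, end exclusive.
    """
    angles = np.deg2rad(np.arange(len(ranges)))
    pts = np.stack([ranges * np.cos(angles), ranges * np.sin(angles)], axis=1)
    regions, start = [], 0
    for i in range(1, len(pts)):
        d = min(ranges[i - 1], ranges[i])      # distance of the nearer point
        g = g_per_d * d                        # adaptive threshold: grows with distance
        if np.linalg.norm(pts[i] - pts[i - 1]) > g:
            regions.append((start, i))         # gap larger than threshold: cut here
            start = i
    regions.append((start, len(pts)))
    return regions

# toy scan: a near wall (1 m) for half the sweep, a far wall (5 m) for the rest
demo = np.concatenate([np.full(180, 1.0), np.full(180, 5.0)])
demo_regions = segment_scan(demo)
```

Because the threshold scales with distance, the sparse points on the far wall stay in one region instead of being cut into many pieces.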
After the laser radar scanning frame is subjected to region segmentation, one laser radar scanning frame is segmented into a plurality of regions, all points in each region can be considered to be continuous, and then feature extraction is carried out on all points in the region.
First, the points are assumed to form a polyline, which may have one or more corner points. The points are fitted to a straight line by least squares, and the distances from all points to the fitted line are checked to find the farthest point. If that distance exceeds a threshold, the point is taken to be a corner point and the line is folded into two lines at it; otherwise all the points are considered to form a single straight line. If the region contains several corner points, the fitting is repeated: each fit finds one corner point and divides the data points into two parts, each part is then fitted again to find further corner points, and this is repeated until no corner points remain in any region.
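The iterative fit-and-fold procedure above is the split step of a split-and-merge line extractor; a minimal sketch, with the distance threshold an assumed value:

```python
import numpy as np

def fit_line(pts):
    """Least-squares line a*x + b*y + c = 0 through pts, with (a, b) unit-norm."""
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)   # vt[0] = principal direction
    direction = vt[0]
    normal = np.array([-direction[1], direction[0]])
    return normal[0], normal[1], -normal @ centroid

def split_at_corners(pts, dist_threshold=0.05):
    """Recursively fold the fitted line at corner points (split step of split-and-merge)."""
    a, b, c = fit_line(pts)
    dists = np.abs(pts @ np.array([a, b]) + c)
    k = int(np.argmax(dists))                  # farthest point from the fitted line
    if dists[k] <= dist_threshold or k == 0 or k == len(pts) - 1:
        return [pts]                           # no corner: one straight segment
    # corner found: fold into two lines at point k and recurse on each half
    return (split_at_corners(pts[:k + 1], dist_threshold)
            + split_at_corners(pts[k:], dist_threshold))

# toy L-shaped region: a horizontal run then a vertical run, corner at (1, 0)
corner_pts = np.array([[i / 10, 0.0] for i in range(11)]
                      + [[1.0, j / 10] for j in range(1, 11)])
segments = split_at_corners(corner_pts)
```

On the L-shaped example the farthest point from the single fitted line is the corner, so the region is folded into exactly two straight segments.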
Example 3
An automatic navigation method of a subway carriage disinfection robot comprises the following steps:
step 1, generating a subgraph by a laser radar scanning frame continuously scanned within a period of time, wherein the subgraph adopts a map expression model of a probability grid, and when a new laser radar scanning frame is inserted into the probability grid, the grid state is updated, specifically as follows:
The pose of the robot is denoted ξ = (ξ_x, ξ_y, ξ_θ), where ξ_x and ξ_y are the translations along the x and y directions and ξ_θ is the rotation in the two-dimensional plane. The data measured by the lidar sensor are recorded as H = {h_1, h_2, ..., h_K}, h_k ∈ R², with the initial laser point at the origin. The pose transformation that maps a lidar scan frame into the subgraph is denoted T_ξ; a scan point p is mapped into the subgraph coordinate system by formula (1):

T_ξ p = R_ξ p + t_ξ,  R_ξ = [ cos ξ_θ  -sin ξ_θ ; sin ξ_θ  cos ξ_θ ],  t_ξ = (ξ_x, ξ_y)ᵀ   (1)

where R_ξ is the rotation of the lidar observation pose, t_ξ is the translation of the lidar observation pose, and p is the scan point being mapped into the subgraph.
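Formula (1) maps each scan point into the subgraph frame with a rotation and a translation; a minimal numpy sketch (the pose and points are assumed example values):

```python
import numpy as np

def transform_scan(xi, points):
    """Map lidar scan points into the subgraph frame: T_xi p = R_xi p + t_xi.

    xi     : robot pose (xi_x, xi_y, xi_theta).
    points : Kx2 array of scan points h_k in the lidar frame.
    """
    x, y, theta = xi
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return points @ R.T + np.array([x, y])     # rotate each point, then translate

# a quarter-turn pose at (2, 3): the point ahead of the sensor lands above the pose
pts = np.array([[1.0, 0.0], [0.0, 1.0]])
mapped = transform_scan((2.0, 3.0, np.pi / 2), pts)
```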
A subgraph is generated from the lidar scan frames continuously scanned over a period of time; the subgraph adopts a probability-grid map representation, and when a new lidar scan frame is inserted into the probability grid, the state of each grid cell is computed as either hit or miss. For a hit, the adjacent grid cells are inserted into the hit set; all cells crossed by the ray from the scan center to the scan point are added to the miss set. Each previously unobserved cell is assigned an initial probability value, and each already observed cell is updated according to formula (2); fig. 9 illustrates this, where cells with both crosses and shading indicate hits and cells with shading only indicate misses.

odds(p) = p / (1 - p)
M_new(x) = clamp(odds⁻¹(odds(M_old(x)) · odds(p_hit)))   (2)

where odds denotes the occurrence ratio, M denotes the value of the updated grid cell, and the clamp function limits a randomly varying value to a given interval: it returns min if the value is below min, max if it is above max, and the value itself otherwise.
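The odds-based update of formula (2) can be exercised directly; the sensor-model probabilities and clamp bounds below are assumed tuning values, not figures from the text:

```python
P_HIT, P_MISS = 0.55, 0.49      # assumed sensor-model probabilities
P_MIN, P_MAX = 0.12, 0.98       # assumed clamp bounds

def odds(p):
    """Occurrence ratio of a probability."""
    return p / (1.0 - p)

def odds_inv(o):
    """Inverse of odds: recover the probability."""
    return o / (1.0 + o)

def clamp(p, lo=P_MIN, hi=P_MAX):
    """Limit the value to the interval [lo, hi]."""
    return max(lo, min(hi, p))

def update(m_old, p_obs):
    """Formula (2): M_new(x) = clamp(odds_inv(odds(M_old(x)) * odds(p_obs)))."""
    return clamp(odds_inv(odds(m_old) * odds(p_obs)))

m = 0.5                          # unobserved cell starts at even odds
for _ in range(3):               # three consecutive hits drive the cell toward "occupied"
    m = update(m, P_HIT)
```

A miss update (`update(m, P_MISS)`) pushes the cell the other way, and the clamp keeps repeated observations from saturating the probability irrecoverably.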
Linear-segment features are extracted and fitted within the lidar scan frame, and the lidar scan points are then translated onto the fitted line segment. Field tests show that this method effectively reduces the error of the lidar scan points and improves the clarity and sharpness of the map edges.
Step 2, before a lidar scan frame is inserted into the subgraph, the pose of the scan frame relative to the current subgraph is optimized with the Ceres Solver; the problem can be converted into a nonlinear least-squares problem:

argmin_ξ Σ_{k=1}^{K} (1 - M_smooth(T_ξ h_k))²   (3)

where ξ is the scan pose, T_ξ converts the coordinates of the feature points h_k into subgraph coordinates according to the scan pose, and the function M_smooth: R² → R smooths the probability values in the local subgraph, using bicubic interpolation.
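The least-squares objective above can be evaluated for candidate poses over a toy probability grid. This sketch substitutes bilinear interpolation for the bicubic M_smooth of the text, purely to keep the example short; grid and scan are assumed toy data:

```python
import numpy as np

def m_interp(grid, x, y):
    """Bilinear interpolation of the probability grid (a simplification of the
    bicubic M_smooth used in the text); assumes (x, y) lies strictly inside."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    g = grid[y0:y0 + 2, x0:x0 + 2]
    return (g[0, 0] * (1 - fx) * (1 - fy) + g[0, 1] * fx * (1 - fy)
            + g[1, 0] * (1 - fx) * fy + g[1, 1] * fx * fy)

def match_cost(grid, xi, scan):
    """Objective of the scan matcher: sum_k (1 - M(T_xi h_k))^2."""
    x, y, th = xi
    c, s = np.cos(th), np.sin(th)
    cost = 0.0
    for hx, hy in scan:
        px = c * hx - s * hy + x           # T_xi h_k: rotate then translate
        py = s * hx + c * hy + y
        cost += (1.0 - m_interp(grid, px, py)) ** 2
    return cost

grid = np.zeros((5, 5))
grid[2, :] = 1.0                            # toy subgraph: one occupied row
scan = [(0.0, 0.0), (1.0, 0.0)]             # two points along the sensor's x axis
aligned_cost = match_cost(grid, (1.0, 2.0, 0.0), scan)   # lands on the occupied row
shifted_cost = match_cost(grid, (1.0, 1.0, 0.0), scan)   # lands on free cells
```

The pose that places the scan on occupied cells scores a lower cost, which is exactly what the Ceres solve of formula (3) minimizes over a continuous pose.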
Step 3, because each lidar scan frame is matched only against the current subgraph and the environment map is composed of a series of subgraphs, accumulated error exists. The poses of all lidar scan frames and subgraphs are optimized by the Sparse Pose Adjustment (SPA) method: the pose of each lidar scan frame at the moment it is inserted into a subgraph is cached in memory for loop detection; when a subgraph no longer changes, loop detection is performed on all lidar scan frames and the subgraph, eliminating the accumulated error of the subgraphs and yielding the environment map.
Specifically, the mathematical expression of the optimization problem constructed with the sparse pose adjustment method is:

argmin_{Ξᵐ, Ξˢ} (1/2) Σ_{ij} ρ( E²(ξ_iᵐ, ξ_jˢ; Σ_ij, ξ_ij) )   (4)

where Ξᵐ = {ξ_iᵐ}_{i=1,...,m} and Ξˢ = {ξ_jˢ}_{j=1,...,s} are, under the loop-closure constraints, the set of subgraph poses (m being the number of subgraphs) and the set of feature-point poses (s being the number of feature points), and ρ is a loss function. The relative pose ξ_ij describes where feature point j was matched within subgraph i and, together with the associated covariance matrix Σ_ij, forms an optimization constraint. The cost of this constraint, expressed with the residual e, can be computed by formula (5):

E²(ξ_iᵐ, ξ_jˢ; Σ_ij, ξ_ij) = e(ξ_iᵐ, ξ_jˢ; ξ_ij)ᵀ Σ_ij⁻¹ e(ξ_iᵐ, ξ_jˢ; ξ_ij)   (5)

e(ξ_iᵐ, ξ_jˢ; ξ_ij) = ξ_ij - ( R_{ξ_iᵐ}⁻¹ (t_{ξ_iᵐ} - t_{ξ_jˢ}) ; ξ_{i,θ}ᵐ - ξ_{j,θ}ˢ )

where ξ_iᵐ and ξ_jˢ are the subgraph pose and the feature-point pose expressed with respect to the same radar frame, R_{ξ_iᵐ}⁻¹ is the inverse of the rotation of the subgraph pose, t_{ξ_iᵐ} and t_{ξ_jˢ} are the corresponding translations, and Σ_ij⁻¹ is the inverse of the covariance matrix.
In addition, in this embodiment a branch-and-bound scan matching algorithm is used to accelerate loop detection and the solution of the relative poses, determine the search window, and construct loop closures by a search method.
ξ* = argmax_{ξ ∈ W} Σ_{k=1}^{K} M_nearest(T_ξ h_k)   (6)

where ξ is the scan pose, W is the search space, M_nearest is the value of M at the grid cell nearest the transformed point, T_ξ maps into subgraph coordinates, and h_k is the feature-point pose. W is obtained as follows; the feature points are substituted into formula (7):

d_max = max_{k=1,...,K} ||h_k||
δ_θ = arccos(1 - r² / (2 d_max²))   (7)

where d_max is the distance from the farthest feature point to the origin and r is the resolution. The obtained angular step δ_θ is substituted into formula (8):

w_x = ⌈W_x / r⌉,  w_y = ⌈W_y / r⌉,  w_θ = ⌈W_θ / δ_θ⌉   (8)

where W_x and W_y are the linear search ranges of the radar and W_θ is the scan-angle range. Taking W_x = W_y = 7 m and W_θ = 30°, the linear and angular window sizes are calculated from the above formula, and the resulting values are substituted into formula (9):

W = { ξ_0 + (r j_x, r j_y, δ_θ j_θ) : (j_x, j_y, j_θ) ∈ Z³, -w_x ≤ j_x ≤ w_x, -w_y ≤ j_y ≤ w_y, -w_θ ≤ j_θ ≤ w_θ }   (9)

where ξ_0 is the feature-point pose at the center of the initial search box and j_x, j_y, j_θ range over the linear and angular offsets within the search box. Formula (9) gives the size of the search space; the obtained W is substituted into formula (6) to solve the constraint relation, and loop optimization is then carried out.
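The search-window discretization above can be computed directly. The map resolution r and d_max below are assumed example values; W_x = W_y = 7 m and W_θ = 30° follow the text:

```python
import math

def search_window(W_x, W_y, W_theta, r, d_max):
    """Discretize the branch-and-bound search window.

    W_x, W_y : half-extents of the linear search range (metres)
    W_theta  : half-extent of the angular search range (radians)
    r        : map resolution (metres per grid cell)
    d_max    : distance from the farthest scan point to the origin
    """
    # angular step chosen so the farthest point moves at most one cell per step
    delta_theta = math.acos(1.0 - r ** 2 / (2.0 * d_max ** 2))
    w_x = math.ceil(W_x / r)
    w_y = math.ceil(W_y / r)
    w_theta = math.ceil(W_theta / delta_theta)
    return w_x, w_y, w_theta, delta_theta

# W_x = W_y = 7 m and W_theta = 30 deg as in the text; r and d_max assumed
w_x, w_y, w_theta, step = search_window(7.0, 7.0, math.radians(30.0),
                                        r=0.05, d_max=10.0)
```

The candidate set then contains (2·w_x + 1)(2·w_y + 1)(2·w_θ + 1) discrete poses around ξ_0, which is the space the branch-and-bound tree prunes.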
The global optimization mainly uses graph optimization: the subgraph poses established after local optimization are added into the global pose optimization, which is accelerated with a branch-and-bound method to detect loop closures and solve the relative poses. Compared with the traditional approach of detecting the loop in advance, the relative-pose solution has a more uniform structure, and establishing a loop closure is converted into a search process. Moreover, once the tree structure over the discrete candidate solution space has been built, searching for the solution within the tree is very fast, and the selection of node bounds during tree construction is completed by pre-computation over the subgraphs. The introduction of this intermediate structure allows the loop-closure process to run in real time, and the subgraphs are continuously adjusted by loop closures to eliminate accumulated error.
These values can be computed efficiently with the branch-and-bound method; by adopting a branch-and-bound scan matching algorithm with depth-first search, the robot can search for the target point during operation with fast feedback and high accuracy.
Step 4, in the camera-based observation model, the observation Z_i(t) of environmental feature i acquired by the camera at time t is expressed in polar coordinates:

Z_i(t) = (ρ_i(t), φ_i(t))ᵀ

where ρ_i(t) and φ_i(t) respectively represent the distance relation and the angle relation between environmental feature i and the robot;
Step 5, the camera-based observation model is expressed as:

Z(k) = h(X(k)) + V(k)

where X(k) is the pose of the robot at time k, and Z(k), h(·) and V(k) are respectively the observation information, the observation function and the observation noise of the camera at time k;
Step 6, the pose of the robot at time k is (x_t(k), y_t(k), θ_t(k))ᵀ. At this time the camera observes N features, the observed value of each feature being (x_i, y_i), and the following expression is obtained:

z_i(k) = ( ρ_i(k) ; φ_i(k) ) = ( √((x_i - x_t(k))² + (y_i - y_t(k))²) ; arctan((y_i - y_t(k)) / (x_i - x_t(k))) - θ_t(k) )

where x_i, y_i are the coordinates of the i-th landmark feature detected by the sensor at time k; ρ_i(k) and φ_i(k) respectively represent the distance relation and the angle relation between the i-th landmark and the robot at time k; and x_t(k), y_t(k), θ_t(k) respectively represent the pose of the mobile robot at time k;
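The range-bearing observation of a single landmark can be sketched as follows (the pose and landmark are assumed example values):

```python
import math

def observe(pose, landmark):
    """Range-bearing observation of one landmark from robot pose (x_t, y_t, theta_t)."""
    x_t, y_t, theta_t = pose
    x_i, y_i = landmark
    rho = math.hypot(x_i - x_t, y_i - y_t)              # distance relation
    phi = math.atan2(y_i - y_t, x_i - x_t) - theta_t    # angle relation
    phi = (phi + math.pi) % (2 * math.pi) - math.pi     # normalize to [-pi, pi)
    return rho, phi

# robot at (1, 1) facing +y; landmark 2 m straight ahead gives bearing zero
rho, phi = observe((1.0, 1.0, math.pi / 2), (1.0, 3.0))
```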
and 7, recording the relative position of the camera and the environmental characteristics.
The camera, a very common and widely applied sensing component, is an important tool for the mobile robot to acquire external information and the key hardware support for positioning and navigation based on landmark or feature observations in visual navigation. By acquiring and recognizing external environmental information, the camera understands and records the relative position between itself and the environmental features.
As in embodiment 2, in step 1 the lidar scan frame is region-segmented by an adaptive threshold segmentation method, and the 3D point cloud is processed into a 2D plane map.
In the invention, the single-board computer communicates with the laser range finder and the microcontroller through an RS-232 serial interface. The disinfection robot uses a client-server mobile-robot control architecture: the single-board computer is the client and the microcontroller is the server. The microcontroller runs the robot's server operating software, which is stored in its FLASH ROM. The microcontroller manages all low-level details of robot control and operation, including motion, heading and ranging; it also fires the sonar sensors and collects data from them, the wheel encoders and the gyroscope. The server (microcontroller) communicates with the control client (single-board computer) using a dedicated client-server packet protocol with two basic command packets: one from the client to the server, and another from the server to the client (the server information packet, SIP).
The operating system on the microcontroller has a structured command format for receiving and responding to instructions from the computer to control and operate the robot. The number of commands that can be sent to the microcontroller in a particular time period depends on the baud rate of the serial interface and the synchronization link. The AROS on the microcontroller accepts several different computer motion commands, but there are two mutually exclusive types: independent wheel speed mode or platform translation/rotation mode. In the independent wheel speed mode, the microcontroller of the robot attempts to maintain accurate wheel speeds, while in the pan/rotate mode, the microcontroller maintains platform speed and heading. All motion command parameters sent by the computer to the microcontroller are in millimeters or degrees, and the AROS uses two independent parameters to convert these units into encoder-dependent motion values, one for translation and the other for rotation.
The user also has the flexibility to enable or disable any particular sonar, or to change the cycle time, using special client commands on the single board computer. The onboard gyroscopes are used to compensate for changes in robot heading that are not detected by the wheel encoders, such as wheel slip, gearbox backlash, wheel imbalance or surface conditions. The microcontroller collects 10-bit gyro rate and 8-bit temperature data every 25 milliseconds and sends the data to the single board computer in SIP upon request. The adjustment of the direction and position of the robot is completed on the onboard computer.
In specific application, the client software is started and connects over Wi-Fi; in the communication settings area it connects through sockets to the IP addresses and port numbers of the lidar, the fan and the mobile platform controller.
After setting the operation parameters, starting the mobile platform, and selecting a corresponding epidemic prevention disinfection mode through the acquired images: automatic disinfection and remote control disinfection. Under the condition of automatic disinfection, the system calls an automatic disinfection mode function and executes the function according to the parameter value set by the original plan; in the remote control mode, the client controls the running track of the mobile platform by calling a laser radar, a fan, an atomizer, chassis motion and the like, and controls the atomization distance and the liquid medicine flow by calling an atomizer power processing program.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. The subway carriage disinfection robot is characterized by comprising a robot body and an atomization system, wherein the atomization system is arranged in a cavity in the robot body and is sequentially provided with a mist outlet (2), a mist outlet hose (3) and an atomization box (4) from top to bottom, and the bottom of the atomization box (4) is provided with a plurality of atomization plates (20);
a piezoelectric wafer (24) is arranged below the piezoelectric substrate of the atomization plate, and liquid is atomized on a flow channel of the atomization plate; the flow channel further communicates with a micro air compressor (22) through an air inlet; mutually opposed air outlets are provided on the side wall of the vertical pipe of the flow channel; an air hole (23) with a diameter of 0.1-0.85 mm is further provided directly below the atomization plate, so that the gas ejected into the flow channel by the compressed air forms a swirl and the atomized particles are secondarily crushed at the air hole (23).
2. A robot as claimed in claim 1, wherein a water storage tank (13) is provided in the atomizing tank (3), and a ball float valve is provided in the water storage tank (13); the surface of the atomization plate is provided with a liquid level sensor.
3. The subway carriage disinfection robot as claimed in claim 1, wherein the disinfection robot further comprises a driving mechanism, a main control module (12), a control module (13) and a power supply module; wherein,
the driving mechanism comprises two driving wheels (7) and two driven wheels (8), the driving wheels (7) are driven by a servo hub motor, and the driven wheels (8) are universal wheels;
the main control module (12) comprises an industrial personal computer, a laser radar sensor (11) and a wireless module, wherein the laser radar sensor is arranged at the top of the robot body;
the control module (13) comprises a servo controller and a control board;
the power module comprises an electric control panel button and a lithium battery (9).
4. An automatic navigation method of a subway carriage disinfection robot is characterized by comprising the following steps:
step 1, generating a subgraph by a laser radar scanning frame continuously scanned within a period of time, wherein the subgraph adopts a map expression model of a probability grid, and when a new laser radar scanning frame is inserted into the probability grid, the grid state is updated;
step 2, before the laser radar scanning frame is inserted into the subgraph, optimizing the pose of the laser radar scanning frame and the current subgraph through a Ceres Solver;
and 3, optimizing the poses of all the laser radar scanning frames and sub-images by a sparse pose adjustment method: the pose of the laser radar scanning frame when the sub-graph is inserted is cached in a memory for loop detection; when the sub-map is not changed any more, all the laser radar scanning frames and the sub-map are subjected to loop detection, so that the accumulated error of the sub-map is eliminated, and the environment map is obtained;
step 4, mounting a binocular camera at the top end of the disinfection robot platform and capturing environmental information once every k seconds, the image captured by the left camera at time n being the left image I_nl and the image captured by the right camera at time n being the right image I_nr;
step 5, detecting and describing feature points in the left image I_nl and the right image I_nr respectively, and matching the feature points of the left image I_nl and the right image I_nr to obtain an image f_nlr;
step 6, matching the feature points of the two adjacent frame images f_(n-1)lr and f_nlr;
and step 7, reconstructing the three-dimensional coordinates of the matched feature points of the two adjacent frame images by triangulation, and performing motion estimation to obtain the global pose of the robot.
5. The automatic navigation method for the disinfection robot of the subway car as claimed in claim 4, wherein in step 1, the laser radar scanning frame is segmented by adaptive threshold segmentation method, and the 3D point cloud picture is processed into 2D plane picture.
6. The automatic navigation method for the disinfection robot for the subway carriage as claimed in claim 4, wherein in step 3, a branch-and-bound scan matching algorithm is used to accelerate the loop detection and the solving process of the relative pose, determine the search window, and construct the loop by using a search method.
7. An automatic navigation method of a subway carriage disinfection robot is characterized by comprising the following steps:
step 1, generating a subgraph by a laser radar scanning frame continuously scanned within a period of time, wherein the subgraph adopts a map expression model of a probability grid, and when a new laser radar scanning frame is inserted into the probability grid, the grid state is updated;
step 2, before the laser radar scanning frame is inserted into the subgraph, optimizing the pose of the laser radar scanning frame and the current subgraph through a Ceres Solver;
and 3, optimizing the poses of all the laser radar scanning frames and sub-images by a sparse pose adjustment method: the pose of the laser radar scanning frame when the sub-graph is inserted is cached in a memory for loop detection; when the sub-map is not changed any more, all the laser radar scanning frames and the sub-map are subjected to loop detection, so that the accumulated error of the sub-map is eliminated, and the environment map is obtained;
step 4, in the camera-based observation model, expressing the observation Z_i(t) of environmental feature i acquired by the camera at time t in polar coordinates:

Z_i(t) = (ρ_i(t), φ_i(t))ᵀ

wherein ρ_i(t) and φ_i(t) respectively represent the distance relation and the angle relation between environmental feature i and the robot;
step 5, expressing the camera-based observation model as:

Z(k) = h(X(k)) + V(k)

wherein X(k) is the pose of the robot at time k, and Z(k), h(·) and V(k) are respectively the observation information, the observation function and the observation noise of the camera at time k;
step 6, the pose of the robot at time k being (x_t(k), y_t(k), θ_t(k))ᵀ and the camera observing N features at this time, the observed value of each feature being (x_i, y_i), obtaining the following expression:

z_i(k) = ( ρ_i(k) ; φ_i(k) ) = ( √((x_i - x_t(k))² + (y_i - y_t(k))²) ; arctan((y_i - y_t(k)) / (x_i - x_t(k))) - θ_t(k) )

wherein x_i, y_i are the coordinates of the i-th landmark feature detected by the sensor at time k; ρ_i(k) and φ_i(k) respectively represent the distance relation and the angle relation between the i-th landmark and the robot at time k; and x_t(k), y_t(k), θ_t(k) respectively represent the pose of the mobile robot at time k;
and 7, recording the relative position of the camera and the environmental characteristics.
8. The automatic navigation method for the disinfection robot for the subway car as claimed in claim 7, wherein in step 1, the lidar scanning frame is segmented by adaptive threshold segmentation method, and the 3D point cloud image is processed into 2D plane image.
9. The automatic navigation method for the disinfection robot for the subway carriage as claimed in claim 7, wherein in step 3, a branch-and-bound scan matching algorithm is used to accelerate the loop detection and the solution process of the relative pose, determine the search window, and construct the loop by using a search method.
CN202011098810.7A 2020-10-14 2020-10-14 Subway carriage disinfection robot and automatic navigation method thereof Pending CN112180937A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011098810.7A CN112180937A (en) 2020-10-14 2020-10-14 Subway carriage disinfection robot and automatic navigation method thereof
CN202110365642.1A CN113126621A (en) 2020-10-14 2020-10-14 Automatic navigation method of subway carriage disinfection robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011098810.7A CN112180937A (en) 2020-10-14 2020-10-14 Subway carriage disinfection robot and automatic navigation method thereof

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202110365642.1A Division CN113126621A (en) 2020-10-14 2020-10-14 Automatic navigation method of subway carriage disinfection robot

Publications (1)

Publication Number Publication Date
CN112180937A true CN112180937A (en) 2021-01-05

Family

ID=73950097

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110365642.1A Pending CN113126621A (en) 2020-10-14 2020-10-14 Automatic navigation method of subway carriage disinfection robot
CN202011098810.7A Pending CN112180937A (en) 2020-10-14 2020-10-14 Subway carriage disinfection robot and automatic navigation method thereof

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202110365642.1A Pending CN113126621A (en) 2020-10-14 2020-10-14 Automatic navigation method of subway carriage disinfection robot

Country Status (1)

Country Link
CN (2) CN113126621A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112692837A (en) * 2021-01-11 2021-04-23 成都海瑞斯轨道交通设备有限公司 Welding robot system and welding method for overhauling wagon body of railway wagon
CN112900315A (en) * 2021-03-30 2021-06-04 周鹏 Rail transit fare collection queue isolating device
CN113129379A (en) * 2021-06-17 2021-07-16 同方威视技术股份有限公司 Global relocation method and device for automatic mobile equipment
CN113368285A (en) * 2021-06-16 2021-09-10 厦门大学 Be applied to passenger train's wisdom carriage disinfection machine people
CN114594768A (en) * 2022-03-03 2022-06-07 安徽大学 Mobile robot navigation decision-making method based on visual feature map reconstruction
CN114674308A (en) * 2022-05-26 2022-06-28 之江实验室 Vision-assisted laser gallery positioning method and device based on safety exit indicator

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6459955B1 (en) * 1999-11-18 2002-10-01 The Procter & Gamble Company Home cleaning robot
CN107321548A (en) * 2017-05-05 2017-11-07 福建省雾精灵环境科技有限公司 Water smoke fountain system and forming method thereof
CN109343548A (en) * 2018-09-30 2019-02-15 中国安全生产科学研究院 The cruising inspection system of subway tunnel crusing robot
CN209490292U (en) * 2018-11-29 2019-10-15 济南恒天广洁环境科技有限公司 A kind of movable type atomization disinfection machine
CN111514360A (en) * 2020-04-29 2020-08-11 北京势焰天强科技有限公司 Mobile disinfection and purification robot
CN111546359A (en) * 2020-06-06 2020-08-18 深圳全智能机器人科技有限公司 Intelligent disinfection robot
CN111569108A (en) * 2020-06-02 2020-08-25 电子科技大学中山学院 Automatic spraying disinfection and sterilization robot
CN111632174A (en) * 2020-05-22 2020-09-08 平湖丞士机器人有限公司 Crawler-type disinfection robot
CN111686960A (en) * 2020-05-13 2020-09-22 安徽伽马莱恩机器人有限公司 Atomizing device of disinfection robot
CN111744040A (en) * 2020-07-21 2020-10-09 四川旭信科技有限公司 Intelligent disinfection robot with automatic liquid feeding

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3078935A1 (en) * 2015-04-10 2016-10-12 The European Atomic Energy Community (EURATOM), represented by the European Commission Method and device for real-time mapping and localization
US10444761B2 (en) * 2017-06-14 2019-10-15 Trifo, Inc. Monocular modes for autonomous platform guidance systems with auxiliary sensors
CN107764270A (en) * 2017-10-19 2018-03-06 武汉工控仪器仪表有限公司 A kind of laser scan type indoor map generation and updating device and method
CN111076733B (en) * 2019-12-10 2022-06-14 亿嘉和科技股份有限公司 Robot indoor map building method and system based on vision and laser slam
CN111540005B (en) * 2020-04-21 2022-10-18 南京理工大学 Loop detection method based on two-dimensional grid map
CN111459166B (en) * 2020-04-22 2024-03-29 北京工业大学 Scene map construction method containing trapped person position information in post-disaster rescue environment
CN111531534B (en) * 2020-04-29 2022-03-08 北京眸视科技有限公司 Robot and control method thereof

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6459955B1 (en) * 1999-11-18 2002-10-01 The Procter & Gamble Company Home cleaning robot
CN107321548A (en) * 2017-05-05 2017-11-07 福建省雾精灵环境科技有限公司 Water smoke fountain system and forming method thereof
CN109343548A (en) * 2018-09-30 2019-02-15 中国安全生产科学研究院 The cruising inspection system of subway tunnel crusing robot
CN209490292U (en) * 2018-11-29 2019-10-15 济南恒天广洁环境科技有限公司 Mobile atomization disinfection machine
CN111514360A (en) * 2020-04-29 2020-08-11 北京势焰天强科技有限公司 Mobile disinfection and purification robot
CN111686960A (en) * 2020-05-13 2020-09-22 安徽伽马莱恩机器人有限公司 Atomizing device of disinfection robot
CN111632174A (en) * 2020-05-22 2020-09-08 平湖丞士机器人有限公司 Crawler-type disinfection robot
CN111569108A (en) * 2020-06-02 2020-08-25 电子科技大学中山学院 Automatic spraying disinfection and sterilization robot
CN111546359A (en) * 2020-06-06 2020-08-18 深圳全智能机器人科技有限公司 Intelligent disinfection robot
CN111744040A (en) * 2020-07-21 2020-10-09 四川旭信科技有限公司 Intelligent disinfection robot with automatic liquid feeding

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jin Guomiao: "Complete Book of Chemical Equipment Design: Drying Equipment Design", 31 December 1986, Shanghai Scientific and Technical Publishers *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112692837A (en) * 2021-01-11 2021-04-23 成都海瑞斯轨道交通设备有限公司 Welding robot system and welding method for overhauling wagon body of railway wagon
CN112900315A (en) * 2021-03-30 2021-06-04 周鹏 Rail transit fare collection queue isolating device
CN113368285A (en) * 2021-06-16 2021-09-10 厦门大学 Be applied to passenger train's wisdom carriage disinfection machine people
CN113129379A (en) * 2021-06-17 2021-07-16 同方威视技术股份有限公司 Global relocation method and device for automatic mobile equipment
CN114594768A (en) * 2022-03-03 2022-06-07 安徽大学 Mobile robot navigation decision-making method based on visual feature map reconstruction
CN114674308A (en) * 2022-05-26 2022-06-28 之江实验室 Vision-assisted laser gallery positioning method and device based on safety exit indicator

Also Published As

Publication number Publication date
CN113126621A (en) 2021-07-16

Similar Documents

Publication Publication Date Title
CN112180937A (en) Subway carriage disinfection robot and automatic navigation method thereof
CN110023867B (en) System and method for robotic mapping
JP6896077B2 (en) Vehicle automatic parking system and method
JP3994950B2 (en) Environment recognition apparatus and method, path planning apparatus and method, and robot apparatus
CN108256430B (en) Obstacle information acquisition method and device and robot
CN108885459A (en) Navigation method, navigation system, movement control system, and mobile robot
CN112347840A (en) Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
WO2019232803A1 (en) Mobile control method, mobile robot and computer storage medium
CN108827306A (en) UAV SLAM navigation method and system based on multi-sensor fusion
CN108283021A (en) Locating a robot in an environment using detected edges of a camera image from a camera of the robot and detected edges derived from a three-dimensional model of the environment
Kuramachi et al. G-ICP SLAM: An odometry-free 3D mapping system with robust 6DoF pose estimation
CN103926933A (en) Indoor simultaneous localization and environment modeling method for unmanned aerial vehicles
CN110082781A (en) Fire source localization method and system based on SLAM technology and image recognition
Wulf et al. Benchmarking urban six‐degree‐of‐freedom simultaneous localization and mapping
WO2022016754A1 (en) Multi-machine cooperative vehicle washing system and method based on unmanned vehicle washing device
CN108780319A (en) Software updating method, system, mobile robot and server
CN115932882A (en) System for providing 3D detection of an environment through an autonomous robotic vehicle
Karam et al. Integrating a low-cost mems imu into a laser-based slam for indoor mobile mapping
CN118020038A (en) Two-wheeled self-balancing robot
Li et al. An overview of the simultaneous localization and mapping on mobile robot
Borkowski et al. Towards semantic navigation in mobile robotics
Jensen et al. Laser range imaging using mobile robots: From pose estimation to 3D-models
Her et al. Localization of mobile robot using laser range finder and IR landmark
Zhao et al. 3D indoor map building with monte carlo localization in 2D map
Hu et al. [Retracted] Real‐Time Evaluation Algorithm of Human Body Movement in Football Training Robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination