CN114895680A - Mobile robot vision positioning system and method - Google Patents

Mobile robot vision positioning system and method

Info

Publication number
CN114895680A
CN114895680A (application CN202210549367.3A)
Authority
CN
China
Prior art keywords
mobile robot
display
mobile
positioning system
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210549367.3A
Other languages
Chinese (zh)
Inventor
梁帅
朱松毅
黄梅涛
张执南
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN202210549367.3A priority Critical patent/CN114895680A/en
Publication of CN114895680A publication Critical patent/CN114895680A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a mobile robot vision positioning system comprising: a mobile robot whose body carries a display; an image acquisition device whose field of view covers the movement area of the mobile robot and which can capture the image shown on the robot's display; a positioning calculation device that obtains the displayed image from the image acquisition device and identifies the mobile robot's ID from that image, thereby obtaining the robot's position within the movement area; and a movement control device that plans the robot's movement path based on the position data obtained from the positioning calculation device and sends data and control commands to the mobile robot.

Description

Mobile robot vision positioning system and method
Technical Field
The invention belongs to the technical field of robots, and particularly relates to a mobile robot vision positioning system and a mobile robot vision positioning method.
Background
When indoor mobile robots are controlled individually or as a cluster, their positions and postures must be sensed accurately and in real time. As the market for such indoor mobile robots has grown, users have placed greater demands on the positioning accuracy and the appearance of robot products.
Disclosure of Invention
An embodiment of the invention provides a display-based mobile robot vision positioning system comprising a plurality of mobile robots, each of whose bodies carries a display;
at least one image acquisition device, whose field of view covers the movement area of the mobile robots and which can capture the images shown on their displays;
and a system computer that obtains the displayed images from the image acquisition device, identifies each mobile robot's ID, obtains each robot's position and posture within the movement area, plans each robot's movement path from its pose data, and sends data and control commands to each robot.
By mounting an electronic display on the mobile robot body and showing a two-dimensional pattern on it, an indoor camera module can identify the position and deflection angle of the display screen within the camera's field of view, from which the position and posture of the robot body are calculated. This allows the robot's motion to be controlled more effectively and supports positioning several robots simultaneously.
One advantage of this embodiment is that positioning of the mobile robot body is achieved by fixing a light-emitting display on the body and showing various two-dimensional patterns on it. The approach can be applied to swarm-controlled robots, further reducing hardware cost, and is particularly suitable for small mobile robots, educational robots, and desktop-scale swarm-control robots.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 is a diagram of a display-based mobile robot vision positioning system according to an embodiment of the present invention.
Fig. 2 is a bottom view of the structure of a robot body according to an embodiment of the present invention.
Fig. 3 is a schematic view of a robot body for mounting an LED lamp panel according to an embodiment of the present invention.
Fig. 4 is a schematic view of a structure of an LED lamp panel according to an embodiment of the invention.
Fig. 5 is a schematic diagram of a grid designed for an LED lamp panel on a mobile robot body according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of an LED lamp panel displaying a reference mark pattern according to an embodiment of the invention.
Fig. 7 is a schematic diagram of a post-processing method for an acquired display image of an LED lamp panel according to an embodiment of the present invention.
FIG. 8 is a flowchart illustrating a method for visual positioning of a mobile robot based on a display according to an embodiment of the present invention.
Fig. 9 is a schematic diagram of a robot body equipped with an LCD display according to an embodiment of the present invention.
Fig. 10 is a schematic diagram of a robot mounted with an LCD display showing a fiducial mark pattern according to an embodiment of the present invention.
FIG. 11 is a schematic diagram of a cylindrical mobile robot showing ReacTIVision fiducial marks in accordance with one embodiment of the present invention.
Fig. 12 is a schematic diagram of a cylindrical mobile robot showing a Bullseye fiducial mark according to an embodiment of the present invention.
Fig. 13 is a schematic view of a mobile robot vision positioning method based on a display according to an embodiment of the invention.
1-mobile robot body, 11-driving wheel, 12-universal wheel, 13-lamp panel grid,
2-first display, 21-LED lamp panel, 22-LED lamp bead,
3-camera,
4-computer,
5-table top or other flat surface,
6-cylindrical robot body,
7-second display,
8-circular display.
Detailed Description
A common existing solution for visual positioning of indoor mobile robots is to mount a camera on the robot body and place tags such as two-dimensional codes or ArUco markers on the ceiling, the ground, obstacles, and scene objects in the surrounding environment. An image processing module recognizes the known world-coordinate information of the scene objects or tags, from which the robot's pose in the world coordinate system is calculated, achieving self-positioning. When several mobile robots must be controlled simultaneously, a camera has to be mounted on every robot, which raises hardware cost. Moreover, under this scheme each robot first senses its own coordinates through its camera and must then send them to a control center, which schedules the motion of all the robots in a unified way; the communication flow is complex and introduces relatively large control delay.
According to one or more embodiments, as shown in fig. 1, a display-based mobile robot vision positioning system comprises a mobile robot body 1, a display 2 on top of the body, a camera 3 mounted above the whole system, a computer 4 for processing image information and sending control information, and the table top or other flat surface 5 on which the robot moves. The mobile robot body 1 is capable of locomotion and wireless communication.
In the present disclosure, as shown in fig. 3, the display 2 is mounted horizontally on top of the mobile robot body 1 and can show a fiducial marker image as needed. "Fiducial marker" is a term from computer vision. Such markers are easy for a camera to identify; a single marker provides enough point correspondences for the camera to compute the marker's pose; and the internal coding of some fiducial markers gives them robust error detection and correction while allowing the marker's ID to be determined.
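As a concrete illustration (not part of the claimed invention), the following minimal Python sketch shows how such markers are typically detected; it assumes the OpenCV >= 4.7 contrib aruco API, and the dictionary choice and input file name are illustrative assumptions only:

```python
# Minimal fiducial-marker detection sketch (assumes opencv-contrib-python >= 4.7;
# the DICT_4X4_50 dictionary and the input file name are illustrative assumptions).
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("overhead_view.png")          # one frame from the overhead camera
corners, ids, rejected = detector.detectMarkers(frame)
if ids is not None:
    for marker_id, quad in zip(ids.flatten(), corners):
        # quad holds the four corner pixels; enough correspondences for a pose solve
        print(f"robot ID {marker_id}: corners {quad.reshape(4, 2)}")
```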
As shown in fig. 2, the bottom of the mobile robot body 1 in the present disclosure has a three-wheel configuration: two driving wheels 11 are arranged symmetrically, a free universal wheel 12 provides support, and the chassis measures 4 cm × 4 cm. The steering and rotation speed of the two driving wheels 11 can be controlled independently, so the mobile robot body 1 can move straight forward and backward, turn, or rotate in place. The mobile robot body 1 also needs wireless communication capability, such as infrared, Bluetooth, WiFi, or ZigBee, or radio, microwave, free-space optical, acoustic, or electromagnetic-induction communication, so that it can receive information sent from the computer 4.
In accordance with one or more embodiments, fig. 4 shows the display 2. It comprises an LED lamp panel 21, a printed circuit board whose front face carries 64 LED lamp beads 22 arranged in 8 rows. Its back face carries a circuit connector that can plug into the robot mainboard for power supply and signal transmission.
As shown in fig. 5, a grid 13 is designed for the LED lamp panel 21 on the mobile robot body 1. After the panel 21 is mounted on the body 1, the grid 13 placed between the lamp beads keeps the light emitted by neighbouring beads from interfering with each other.
According to one or more embodiments, fig. 6 shows the LED lamp panel displaying an ArUco (4 × 4) fiducial marker pattern. Each LED lamp bead 22 can emit light independently, with the intensities of its red, green, and blue channels adjusted separately. When a fiducial marker image is displayed, the beads 22 in the white regions of the marker are lit and those in the black regions are kept dark. The pattern shown by the lamp panel 21 can be configured from a host computer over a data cable or a wireless link; the panel can also be controlled by graphical programming software, or configured directly through sensors built into the mobile robot (e.g., buttons, a gyroscope, a touch screen).
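For illustration, here is a sketch of turning an ArUco pattern into an on/off map for the 8 × 8 bead grid; it is non-authoritative: the marker ID, the OpenCV >= 4.7 generateImageMarker call, and the lit quiet zone are assumptions, not details fixed by this disclosure:

```python
# Sketch: render a 4x4 ArUco marker as an 8x8 LED on/off map.
# Assumes opencv-contrib-python >= 4.7; marker ID 7 is illustrative.
import cv2
import numpy as np

DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
# 6 modules per side: 4x4 data bits plus the mandatory 1-module black border.
bits = cv2.aruco.generateImageMarker(DICT, 7, 6)
# Pad to 8x8 with a lit (white) quiet zone so the black border stays visible.
panel = np.pad(bits, 1, constant_values=255)
for row in panel:
    print("".join("#" if px else "." for px in row))   # '#': bead lit, '.': bead dark
```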
Because the light-emitting element of each lamp bead 22 sits at the center of its cell, each cell is bright at its center and dim at its edges; the lamp panel image captured by the camera 3, shown as the first image in fig. 7, therefore cannot be directly recognized as an ArUco fiducial marker by an algorithm. In hardware, a light-diffusing film can be added above the LED lamp panel 21 so that the captured image of the panel is even and soft. In software, the computer can apply a dilation operation to the captured image, connecting the separate bead dots into a complete ArUco fiducial marker.
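A sketch of that software-side dilation step (illustrative only; the Otsu threshold and the 9 × 9 kernel are assumptions tuned to the on-image bead spacing) might look like this:

```python
# Sketch: merge the separate bright bead dots into solid marker modules.
import cv2
import numpy as np

raw = cv2.imread("led_panel_view.png", cv2.IMREAD_GRAYSCALE)   # illustrative file
_, binary = cv2.threshold(raw, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# Kernel size should roughly match the pixel spacing between beads (assumed ~9 px).
kernel = np.ones((9, 9), np.uint8)
merged = cv2.dilate(binary, kernel, iterations=1)
# `merged` can now be fed to the ArUco detector from the earlier sketch.
```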
In accordance with one or more embodiments, fig. 8 shows the steps of the display-based mobile robot visual positioning method of the present disclosure. The method comprises the following steps (a code sketch of the pose pipeline in S03-S05 follows the list):
S01, select the camera type and installation height according to the constraints of the application site, the number and motion range of the mobile robot bodies 1, and the required positioning accuracy, and adjust the camera focal length to ensure clear imaging;
S02, display the same type of fiducial marker on the display at the top of each mobile robot body 1, ensuring that each body 1 displays a marker with a different ID;
S03, since the robot height, the size of the displayed fiducial marker, and other dimensions are known, calculate the camera's height above the table top and the actual size of its field of view from the size of the lamp panel image a robot body displays in the field of view; the camera pixel coordinate system can then be put in one-to-one correspondence with the robot world coordinate system, yielding a transformation matrix;
S04, acquire image data of the moving robot bodies 1 in real time with the camera and, after suitable image processing, calculate each robot's 6D pose in the camera coordinate system in real time; the 6D pose comprises three axis coordinates and three rotation angles, i.e. the X, Y, Z coordinates and the pitch, yaw, and roll angles;
S05, calculate each robot body 1's actual 6D pose in the world coordinate system using the transformation matrix between the world and pixel coordinate systems obtained in S03;
S06, the computer 4 runs cluster control algorithms such as area coverage, role allocation, velocity obstacle avoidance, and dynamic formation selection, calculates each mobile robot body 1's desired pose at the next moment, generates the corresponding motion control instruction, and sends it to each body 1 by wireless communication;
S07, each mobile robot body 1 executes the corresponding motion after receiving the control command matching its own fiducial marker ID.
In accordance with one or more embodiments, as shown in fig. 13, the display-based mobile robot visual positioning method comprises the following steps:
s11: the robot is initialized. The camera 3 acquires images of the mobile robot bodies 1 on the desktop 5, and the computer 4 calculates the ID and the initial pose of each mobile robot body 1 and the total number of the mobile robot bodies 1. In the present embodiment, there are 25 mobile robot bodies 1, and the reference signs of ArUco are displayed in white light at the initial time.
S12: the pattern is initialized. The desired color art pattern is input to the computer 4, in this embodiment three areas, a red circular area, a green five-pointed star area and a blue L-shaped area.
S13: and planning the area coverage. The computer 4 will plan the coverage of each area based on triangulation algorithms or Voronoi algorithms etc. In the present embodiment, 25 end target positions are generated correspondingly.
S14: and (4) allocating roles. According to the Hungarian algorithm or other allocation algorithms, a target terminal point is allocated to each robot body 1, so that the sum of the distances from the initial positions to the target positions of all the robot bodies 1 is the minimum. At this time, the computer 4 may control the ArUco labels displayed by the robot bodies 1 to be colors corresponding to the respective end point target positions. For the sake of convenience of distinction, the mobile robot shown by the triangle in the S14 shows red, the mobile robot shown by the quadrangle shows blue, and the mobile robot shown by the pentagon shows green.
S15: and moving and avoiding. The pose of each mobile robot body 1 is captured in real time through the camera 3, the expected position of each mobile robot at the next moment is calculated according to the target position of each terminal point and by combining a dynamic avoidance algorithm, a motion control command is further generated and sent to each mobile robot body 1, and each mobile robot body 1 executes corresponding action.
S16: and ending the movement and displaying the pattern. When the mobile robot body 1 moves to its destination, the display can be turned off to reduce the amount of computation and communication traffic of the computer 4. When all the mobile robot bodies 1 move to the designated end point target, the movement is finished, and at the moment, the display 2 of all the mobile robots can be controlled to integrally show the color of the end point target position and can show dynamic effects such as a breathing lamp and a horse race lamp.
In this embodiment's scenario, the mobile robots dynamically form colored artistic patterns, and the display-based visual positioning system presents a more beautiful, clear, and eye-catching visual effect during formation changes. A mobile robot that reaches its end point early can stop displaying its fiducial tag, which reduces the computation of the visual recognition system, lowers the data volume of the wireless communication system, and improves system efficiency.
According to one or more embodiments, a cluster-algorithm visualization method is implemented in the display-based mobile robot visual positioning system. In clustered-robot algorithm research, colors are often used in the simulation stage to distinguish robot IDs and types and to display robot states. Because the robots of the disclosed system carry displays, such simulation effects can be presented directly on the hardware.
If the movement tracks of 10 independent robots are to be observed while a velocity-avoidance algorithm runs, the displays of the 10 robots can be set to show ArUco tags in different colors, making them easy to distinguish by eye and in recorded footage.
If two groups of robots are expected to interleave while a formation control algorithm runs, the two groups can be set to display blue and red respectively, better visualizing the motion produced by the algorithm.
When running a bird-flocking (boids) algorithm, for example, each mobile robot reacts only to the mobile robots within a small neighborhood around it. The ArUco tags displayed by each robot can be set to different RGB colors, and each robot's color can slowly drift toward the colors displayed by the robots within its neighborhood. The effect of the flocking algorithm is thus displayed visually through the robots' colors.
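A minimal sketch of that neighbor-driven color drift (the 0.9/0.1 blend weights and the neighborhood radius are illustrative assumptions) could be:

```python
# Sketch: each robot's displayed RGB drifts toward its neighbours' mean colour.
import numpy as np

def blend_colours(positions, colours, radius):
    """positions: (N,2); colours: (N,3) floats in [0,1]; radius: neighbourhood size."""
    new = colours.copy()
    for i, p in enumerate(positions):
        near = np.linalg.norm(positions - p, axis=1) < radius
        near[i] = False                       # a robot is not its own neighbour
        if near.any():
            new[i] = 0.9 * colours[i] + 0.1 * colours[near].mean(axis=0)
    return new
```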
In accordance with one or more embodiments, a human-machine interaction method is implemented in the display-based mobile robot visual positioning system. When people participate in a multi-robot system, the display-based system enables better human-computer interaction.
For example, when a person lifts or picks up a mobile robot body 1, the apparent size of its displayed ArUco fiducial marker in the camera field of view increases, so the system can determine that the body has been lifted and give the person timely feedback by changing the display's color, adjusting its brightness, and so on.
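A sketch of that lift check (the 1.3x growth threshold is an illustrative tuning constant, not a value fixed by this disclosure):

```python
# Sketch: flag a robot as picked up when its marker's pixel area grows sharply.
import cv2
import numpy as np

def is_lifted(quad, baseline_area, threshold=1.3):
    """baseline_area: marker pixel area measured while the robot sat on the table."""
    area = cv2.contourArea(quad.reshape(4, 2).astype(np.float32))
    return area > threshold * baseline_area
```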
According to one or more embodiments, the display-based mobile robot visual positioning method and system of the present disclosure do not limit the kind of display. The display can be a full-color LED matrix, a monochrome or multicolor e-ink screen, an LCD, an OLED, a touch-screen display, and so on, and its brightness can be adjusted. Fig. 9 shows another robot body 1 of the present disclosure with an LCD display 7 mounted on it.
Fig. 10 shows a robot with an LCD display 7 showing a fiducial marker pattern. Compared with a full-color LED lamp panel, an LCD has higher resolution, can display ArUco markers with more coded information (and therefore a larger ID space), and can also display other types of fiducial marker images.
According to one or more embodiments, the display-based mobile robot visual positioning method and system of the present disclosure do not limit the category of the mobile robot body 1, the shape of the display screen, or the type of fiducial marker. Figs. 11 and 12 show another cylindrical mobile robot body 6 of the present disclosure whose display is a circular LCD screen, displaying the reacTIVision and Bullseye fiducial markers respectively.
Thus, in the scheme of the present disclosure, a display is mounted on top of the mobile robot and shows a fiducial marker; after images are collected by a camera, each mobile robot's ID and pose information are calculated in real time, achieving high-precision positioning and motion control. The scheme is particularly suitable for swarm control of robots.
The robot may use the two-wheel differential chassis of the embodiments, but the relative position of the driving wheels 11 and the universal wheels 12 is not limited. The robot may also use other omnidirectional chassis such as conventional wheeled, ball-wheeled, air-suspension, all-wheel-steering, Mecanum-wheeled, or ball-track omnidirectional motion mechanisms, or a tracked or legged moving mechanism instead of a wheeled one.
The robot in the present disclosure is not limited to one moving in a 2D plane; it may also move in 3D space, such as an unmanned aerial vehicle, a boat, or a submersible vehicle. The display may be a monochrome LED matrix, a full-color LED matrix, a monochrome or multicolor e-ink screen, an LCD, an OLED, or a touch-screen display, with adjustable brightness; its shape may be square, round, or any other shape matched to the fiducial marker. The fiducial marker may use images of different marker families, such as AprilTag, ArUco, reacTIVision, or Bullseye, all of which allow efficient and accurate pose recognition and ID acquisition. The camera lens is preferably distortion-free. As for camera count, a single camera suffices for positioning; several cameras with overlapping fields of view can fuse data for more accurate positioning, while several cameras with essentially no overlap provide wider coverage.
The positioning method of the present disclosure identifies the robot's position and orientation and thereby controls its movement. It can identify and control a single robot or many robots simultaneously for cluster-control research, and it extends to positioning mobile robots in 3D space: from the size and orientation of the fiducial marker shown on the robot display as seen in the camera field of view, the 6D pose of the moving body can be solved.
The beneficial effects of the present disclosure thus include:
mounting a light-emitting display on the mobile robot body and showing various fiducial markers on it positions the body efficiently; the display brightness can be adjusted to different ambient light, improving recognition accuracy;
because the dimensions of the robot and the display are known, the camera can be fixed at any height; its height above the table top and the actual size of its field of view can be calculated from the size of the lamp panel image a robot body displays in the field of view, and the camera pixel coordinate system can then be put in one-to-one correspondence with the robot world coordinate system, making configuration convenient (see the height-calculation sketch after this list);
with the display, different types of fiducial markers can be switched as needed; when the number of clustered robots is large, this is much faster than replacing paper labels one by one and avoids the positioning error caused by inaccurately pasted paper fiducial images;
the luminous display can also serve as part of the robot body, showing other patterns or acting as decoration according to user needs, presenting a more attractive and clear visual effect;
the calculation amount of a visual identification system is reduced, the data transmission amount of a wireless communication system is reduced, and the system efficiency is improved;
the approach offers clear advantages in functionality, algorithm visualization, human-machine interaction, and related aspects.
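The height-calculation benefit above follows from the pinhole camera model; a minimal sketch (the focal length in pixels and the 4 cm marker side are illustrative assumptions):

```python
# Sketch: camera height above the table from the apparent marker size (pinhole model).
def camera_height_m(f_px: float, marker_side_m: float, marker_side_px: float) -> float:
    """Distance from camera to a fronto-parallel marker: Z = f * S / s."""
    return f_px * marker_side_m / marker_side_px

print(camera_height_m(f_px=900.0, marker_side_m=0.04, marker_side_px=30.0))  # -> 1.2 m
```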
It should be understood that, in the embodiments of the present invention, the term "and/or" merely describes an association between objects and indicates that three relations may exist. For example, "A and/or B" may mean: A alone, A and B together, or B alone. In addition, the character "/" herein generally indicates an "or" relation between the preceding and following objects.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. A mobile robot vision positioning system, characterized by comprising:
at least one mobile robot, wherein a display is arranged on the body of the mobile robot;
at least one image acquisition device, whose field of view covers the moving area of the mobile robot and which is capable of acquiring the image displayed by the display of the mobile robot;
and a positioning calculation device for obtaining the image displayed by the display from the image acquisition device and identifying the ID of the mobile robot from the image, so as to obtain the position of the mobile robot in the moving area.
2. The mobile robot vision positioning system of claim 1, further comprising a movement control device that plans a movement path of the mobile robot based on mobile robot position data obtained from the positioning computing device and sends data and control instructions to the mobile robot.
3. The mobile robotic visual positioning system of claim 1, wherein the display is one of an LCD, OLED, LED, or ink screen.
4. The mobile robot vision positioning system of claim 1, wherein the display is disposed on top of the mobile robot with the display facing upward.
5. The mobile robot visual positioning system of claim 1, wherein the display is rectangular or circular in shape.
6. The mobile robot vision positioning system of claim 1, wherein the mobile robot employs one or a combination of: a conventional wheeled, ball-wheeled, air-suspension, all-wheel-steering, Mecanum-wheeled, or ball-track omnidirectional mechanism, a tracked mechanism, or a legged mechanism.
7. The mobile robot vision positioning system of claim 1, wherein the image displayed by the display is an ArUco fiducial marker, an AprilTag fiducial marker, a reacTIVision fiducial marker, or a Bullseye fiducial marker.
8. The mobile robotic vision positioning system of claim 1, wherein the image acquisition device is a camera or a video camera.
9. A mobile robot vision positioning method, characterized in that, using the mobile robot vision positioning system according to claim 1, the following steps are performed:
displaying a fiducial mark on a display of a mobile robot that uniquely corresponds to the mobile robot ID;
acquiring image data of the moving robots in real time, and calculating each mobile robot's 6D pose in a preset coordinate system in real time;
sending the desired motion instruction, or the mobile robot's own coordinates and target coordinates, to each mobile robot by wireless communication;
each mobile robot obtains the data corresponding to its own ID, extracts the motion instruction or its own and target coordinates, and then carries out the corresponding movement control.
CN202210549367.3A 2022-05-20 2022-05-20 Mobile robot vision positioning system and method Pending CN114895680A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210549367.3A CN114895680A (en) 2022-05-20 2022-05-20 Mobile robot vision positioning system and method


Publications (1)

Publication Number Publication Date
CN114895680A (en) 2022-08-12

Family

ID=82723564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210549367.3A Pending CN114895680A (en) 2022-05-20 2022-05-20 Mobile robot vision positioning system and method

Country Status (1)

Country Link
CN (1) CN114895680A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116079704A (en) * 2022-10-08 2023-05-09 西北工业大学 Cluster robot local positioning and communication method based on fisheye machine vision
CN116079704B (en) * 2022-10-08 2024-04-30 西北工业大学 Cluster robot local positioning and communication method based on fisheye machine vision


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination