CN113954064A - Robot navigation control method, device and system, robot and storage medium - Google Patents


Publication number
CN113954064A
CN113954064A (Application CN202111136743.8A; granted as CN113954064B)
Authority
CN
China
Prior art keywords
chassis
preset
real
robot
preset path
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111136743.8A
Other languages
Chinese (zh)
Other versions
CN113954064B (en)
Inventor
谢军
吴小清
郑帅印
王品隆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Bozhilin Robot Co Ltd
Original Assignee
Guangdong Bozhilin Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Bozhilin Robot Co Ltd
Priority to CN202111136743.8A
Publication of CN113954064A
Application granted
Publication of CN113954064B
Active legal status
Anticipated expiration legal status

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/02: Programme-controlled manipulators characterised by movement of the arms, e.g. cartesian coordinate type
    • B25J9/04: by rotating at least one arm, excluding the head movement itself, e.g. cylindrical coordinate type or polar coordinate type
    • B25J9/10: Programme-controlled manipulators characterised by positioning means for manipulator elements
    • B25J9/16: Programme controls
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664: characterised by motion, path, trajectory planning
    • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697: Vision controlled systems

Abstract

The application relates to a robot navigation control method, device, system, robot and storage medium. The method comprises the following steps: while the robot chassis moves along a preset path, acquiring a real-time image of a reference object captured by a vision module attached to the chassis, wherein the reference object comprises lines and characters, the lines extend along the preset-path direction of the chassis, and the characters are arranged along the extending direction of the lines; recognizing the real-time image to obtain target line information and target character information; controlling the chassis to move in a first direction perpendicular to the preset-path direction according to the target line information, so that the pose of the chassis meets a preset pose condition; and controlling the chassis to move in the preset-path direction according to the target character information, so that the chassis is positioned at each work station. By adopting this method, the influence of changes in the surrounding environment can be reduced and the navigation and positioning accuracy improved.

Description

Robot navigation control method, device and system, robot and storage medium
Technical Field
The present application relates to the field of robot navigation technologies, and in particular, to a method, an apparatus, a system, a robot and a storage medium for controlling robot navigation.
Background
Currently, the commonly used navigation and positioning approaches are laser SLAM (simultaneous localization and mapping) navigation, visual guidance navigation, magnetic navigation and GPS (Global Positioning System) navigation. Laser SLAM and ordinary visual navigation are generally used in complex, changing environments; they are costly, require wide-angle measurement (occlusion over part of the measurement angle degrades positioning accuracy), and are strongly affected by the surroundings, where too many occluding obstacles can cause problems such as map mismatch. Magnetic navigation is highly susceptible to electromagnetic interference, cannot tolerate nearby electromagnetically sensitive objects such as metal, and offers limited measurement accuracy. GPS navigation is mainly used outdoors, and its positioning accuracy is also limited.
Disclosure of Invention
In view of the above, it is desirable to provide a robot navigation control method, apparatus, system, robot, and storage medium, which are less affected by changes in the surrounding environment and have high positioning accuracy.
A robot navigation control method, the method comprising:
acquiring a real-time image of a reference object shot by a vision module attached to a chassis during the movement of the robot chassis according to a preset path, wherein the reference object comprises lines and characters, the lines extend along the preset path direction of the chassis, and the characters are arranged along the extending direction of the lines;
identifying the real-time image to obtain target line information and target character information;
controlling the chassis to move in a first direction perpendicular to the direction of the preset path according to the target line information, so that the pose of the chassis meets a preset pose condition;
and controlling the chassis to move in the direction of the preset path according to the target character information, so that the chassis is positioned to each work station.
A robot navigation control device, the device comprising:
the robot comprises an acquisition module, a display module and a control module, wherein the acquisition module is used for acquiring a real-time image of a reference object shot by a vision module attached to a chassis in the process that a robot chassis moves according to a preset path, the reference object comprises lines and characters, the lines extend along the direction of the preset path of the chassis, and the characters are arranged along the extending direction of the lines;
the identification module is used for identifying the real-time image to obtain target line information and target character information;
the first control module is used for controlling the chassis to move in a first direction perpendicular to the direction of the preset path according to the target line information, so that the pose of the chassis meets a preset pose condition;
and the second control module is used for controlling the chassis to move in the direction of the preset path according to the target character information, so that the chassis is positioned to each work station.
A robot comprising a memory and a processor, the memory storing a computer program which when executed by the processor performs the steps of:
acquiring a real-time image of a reference object shot by a vision module attached to a chassis during the movement of the robot chassis according to a preset path, wherein the reference object comprises lines and characters, the lines extend along the preset path direction of the chassis, and the characters are arranged along the extending direction of the lines;
identifying the real-time image to obtain target line information and target character information;
controlling the chassis to move in a first direction perpendicular to the direction of the preset path according to the target line information, so that the pose of the chassis meets a preset pose condition;
and controlling the chassis to move in the direction of the preset path according to the target character information, so that the chassis is positioned to each work station.
A robot navigation control system comprises a reference object and a robot, wherein the robot comprises a chassis and a vision module; the vision module is attached to the chassis and moves with the chassis according to a preset path; the reference object comprises lines and characters in the visual field range of the visual module, the lines extend along the direction of a preset path of the chassis, and the characters are arranged along the extending direction of the lines;
the robot further comprises a memory storing a computer program and a processor implementing the following steps when executing the computer program:
acquiring a real-time image of a reference object shot by a vision module attached to the chassis during the chassis moving according to a preset path, wherein the reference object comprises lines and characters, the lines extend along the preset path direction of the chassis, and the characters are arranged along the extending direction of the lines;
identifying the real-time image to obtain target line information and target character information;
controlling the chassis to move in a first direction perpendicular to the direction of the preset path according to the target line information, so that the pose of the chassis meets a preset pose condition;
and controlling the chassis to move in the direction of the preset path according to the target character information, so that the chassis is positioned to each work station.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a real-time image of a reference object shot by a vision module attached to a chassis during the movement of the robot chassis according to a preset path, wherein the reference object comprises lines and characters, the lines extend along the preset path direction of the chassis, and the characters are arranged along the extending direction of the lines;
identifying the real-time image to obtain target line information and target character information;
controlling the chassis to move in a first direction perpendicular to the direction of the preset path according to the target line information, so that the pose of the chassis meets a preset pose condition;
and controlling the chassis to move in the direction of the preset path according to the target character information, so that the chassis is positioned to each work station.
According to the robot navigation control method, device, system, robot and storage medium, while the robot chassis moves along the preset path, a real-time image of the reference object captured by the vision module attached to the chassis is acquired; the real-time image is recognized to obtain target line information and target character information; the chassis is controlled to move in a first direction perpendicular to the preset-path direction according to the target line information, so that the pose of the chassis meets the preset pose condition; and the chassis is controlled to move in the preset-path direction according to the target character information, so that the chassis is positioned at each work station. The reference object comprises lines and characters; the lines extend along the preset-path direction of the chassis, and the characters are arranged along the extending direction of the lines. The reference object is little affected by changes in the surrounding environment: the real-time position of the chassis can be obtained merely by recognizing the information in the image of the reference object, after which navigation correction and positioning are carried out, unaffected by changes in other surrounding information. In addition, a camera with a small field-of-view angle can capture the reference object from a fixed position, so multiple sensors and multi-angle measurement are unnecessary, which reduces cost while improving navigation and positioning accuracy.
Drawings
FIG. 1 is a diagram of an exemplary environment in which a method for controlling navigation of a robot may be implemented;
FIG. 2 is a schematic diagram of a vision module in one embodiment;
FIG. 3 is a schematic view of a reference object in one embodiment;
FIG. 4 is a schematic flow chart diagram illustrating a method for controlling robot navigation in one embodiment;
FIG. 5 is a schematic view of the chassis in an initial position according to one embodiment;
FIG. 6 is a schematic illustration of a chassis excursion in one embodiment;
FIG. 7 is a schematic diagram illustrating an embodiment of correcting the pose of the chassis;
FIG. 8 is a schematic illustration of a work station location of the chassis in one embodiment;
FIG. 9 is a schematic flow chart of a method for controlling brick laying by a robot according to an embodiment;
fig. 10 is a block diagram showing the structure of the robot navigation control device according to the embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The robot navigation control method provided by the application can be applied to the application environment shown in fig. 1. The application environment is specifically a masonry scene and comprises a masonry wall 1 and a robot navigation control system, wherein the robot navigation control system comprises a robot and a reference object 8. The robot comprises a vision module 2 and a chassis 3; the vision module 2 is attached to the chassis 3 and moves with the chassis 3 according to a preset path. The reference object 8 is arranged between the robot and the masonry wall 1; its extending direction is parallel to the masonry wall 1, which can also be understood as being consistent with the direction of the preset path of the chassis 3, and the reference object 8 may be separated from the masonry wall 1 by a certain distance, such as 200-700 mm. The reference object 8 is arranged in the visual field of the vision module 2 and comprises lines and characters; the lines extend along the direction of the preset path of the chassis 3, and the characters are arranged along the extending direction of the lines.
In addition, as shown in fig. 1, the robot further comprises a lifting mechanism 4, a brick laying jaw 5, a robot arm 6 and a brick carrying platform 7 for carrying out brick laying work. The chassis 3 may specifically comprise a frame, steering wheels, controllable road wheels, a battery and a controller (not shown in the figure).
In one embodiment, as shown in fig. 2, the vision module 2 includes a camera 21, and may further include a light source 22, an image acquisition card 23 and an industrial personal computer 24. The reference object 8 is fixed on the ground, and the lens of the camera 21 faces the ground at a certain distance from it, such as 200 mm to 500 mm, so as to ensure that the reference object 8 is within the visual field of the camera 21, which may be 200 mm × 200 mm to 400 mm × 400 mm. The light emitted by the light source 22 is directed onto the reference object 8. The image acquisition card 23 is connected to the camera 21 and is used for acquiring the reference object images captured by the camera 21. The industrial personal computer 24 is connected with the image acquisition card and is used for processing the images accordingly.
In one embodiment, as shown in fig. 3, the reference object 8 is a scale, the lines are the border lines of the scale, and the characters are the numbers on the scale. In particular, the border lines may comprise either or both of the two border lines along the extending direction, and the numbers may comprise the respective scale values on the scale. The X direction indicates the extending direction of the scale, that is, the preset-path direction of the chassis 3. The Y direction indicates a direction perpendicular to the X direction, and may be understood as the direction in which the chassis 3 points toward the masonry wall 1. θ represents the angle between the Y direction and the X direction; it will be understood that θ is 90° during normal operation of the robot and changes if the robot yaws.
In one embodiment, as shown in fig. 4, a robot navigation control method is provided, which is described by taking the method as an example applied to the controller of the robot in fig. 1, and includes the following steps S402 to S408.
S402, acquiring a real-time image of a reference object shot by a vision module attached to the chassis in the process that the robot chassis moves according to a preset path, wherein the reference object comprises lines and characters, the lines extend along the direction of the preset path of the chassis, and the characters are arranged along the extending direction of the lines.
The reference object is taken as a scale for illustration, the scale is fixed on the ground between the chassis and the masonry wall, the scale is respectively separated from the chassis and the masonry wall by a certain distance, the extending direction of the scale is parallel to the masonry wall, and the direction of the preset path of the chassis is the extending direction of the scale.
While the robot chassis moves along the preset path, the lens of the camera, facing the ground where the scale lies, shoots in real time (for example, at a frequency of 10-30 Hz) to obtain a real-time image of the scale, which includes a border line and a scale value of the scale. It can be understood that as the chassis moves, the viewing area of the camera moves as well, so the captured real-time images change.
S404, identifying the real-time image to obtain target line information and target character information.
The target line refers to a line identified from the real-time image; any existing or future image edge detection algorithm may be used to extract the target line information from the real-time image, which is not limited herein. Taking the scale as the reference object, the target line is the scale border line identified from the real-time image.
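Since the application does not fix a particular detector, the line-extraction step can be illustrated with a minimal sketch: given pixels that an edge detector (e.g. Canny) attributes to the ruler's border, fit a straight line to them by least squares. The function name and the synthetic points are assumptions for illustration only.

```python
import numpy as np

def fit_border_line(edge_points):
    """Least-squares fit y = a*x + b to pixels belonging to the ruler's
    border line. Any edge detector could supply the points; the detector
    itself is not fixed by the method described here."""
    pts = np.asarray(edge_points, dtype=float)
    a, b = np.polyfit(pts[:, 0], pts[:, 1], deg=1)  # highest degree first
    return float(a), float(b)

# A border lying exactly on image row y = 120 should fit with slope ~0.
points = [(x, 120.0) for x in range(0, 200, 10)]
slope, intercept = fit_border_line(points)
```

The fitted (slope, intercept) pair is the "target line information" that later steps compare against the reference line.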
The target character refers to a character recognized from the real-time image; any existing or future image character recognition algorithm may be used to extract the target character information from the real-time image, which is not limited herein. Taking the scale as the reference object, the target character is a scale number recognized from the real-time image.
In one embodiment, the image digit recognition method comprises the following steps. First, binarization is performed: unnecessary information in the image (such as background, interference lines and interference pixels) is removed, only the characters to be recognized are retained, and the image becomes a binary dot matrix. Then character segmentation is performed: the picture containing the characters is split so that each character becomes an independent image and can be recognized separately. Next, normalization is performed: for some irregular characters, the segmented images are normalized (for example, de-rotated and de-distorted) so that instances of the same character take the same form as far as possible, reducing randomness. Finally, template comparison is used for recognition: each segmented character dot matrix is converted into a string, its similarity to the templates is judged by comparison, and the target character information is finally obtained.
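The binarize / segment / template-compare pipeline above can be sketched as follows. This is a toy illustration, not the application's implementation: the 3×3 glyphs, the ink/background gray levels and the threshold are all assumed values, and the normalization step is omitted.

```python
import numpy as np

# Toy 3x3 glyph templates standing in for the scale digits; real templates
# would be taken from the ruler's actual font.
TEMPLATES = {
    "7": np.array([[1, 1, 1], [0, 0, 1], [0, 0, 1]], dtype=np.uint8),
    "0": np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=np.uint8),
}

def binarize(gray, threshold=128):
    """Step 1: turn the grayscale image into a 0/1 dot matrix."""
    return (np.asarray(gray) > threshold).astype(np.uint8)

def segment_characters(binary):
    """Step 2: split the strip into per-character images at blank columns."""
    ink = binary.any(axis=0)
    chars, start = [], None
    for x, has_ink in enumerate(ink):
        if has_ink and start is None:
            start = x
        elif not has_ink and start is not None:
            chars.append(binary[:, start:x])
            start = None
    if start is not None:
        chars.append(binary[:, start:])
    return chars

def recognize(gray):
    """Step 4 (step 3, normalization, omitted): compare each segmented
    character against every template and keep the best-matching label."""
    result = []
    for char in segment_characters(binarize(gray)):
        scores = {label: (char == tpl).mean()
                  for label, tpl in TEMPLATES.items() if tpl.shape == char.shape}
        result.append(max(scores, key=scores.get))
    return "".join(result)

# Compose "7", one blank column, "0", with ink=200 on background=50.
strip = np.full((3, 7), 50)
strip[:, 0:3] = np.where(TEMPLATES["7"] == 1, 200, 50)
strip[:, 4:7] = np.where(TEMPLATES["0"] == 1, 200, 50)
reading = recognize(strip)
```

In a real deployment the templates would be larger and the segmented characters would be rescaled to the template size before comparison.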
And S406, controlling the chassis to move in a first direction perpendicular to the direction of the preset path according to the target line information, so that the pose of the chassis meets the preset pose condition.
The first direction refers to the direction perpendicular to the preset-path direction. The preset pose condition can be understood as the pose condition that the chassis needs to meet during normal operation, and specifically includes: the distance between the chassis and the reference object in the first direction equals a preset distance value, and the included angle between the moving direction of the chassis and the preset-path direction equals a preset included-angle value. The preset distance value can be set according to actual requirements and is not limited here. The preset included-angle value may be set to zero.
The target line information can feed back the distance offset and the angle offset of the chassis in the first direction, so that the chassis is controlled to move in the first direction according to the target line information, the chassis can be corrected, and the pose of the chassis meets the preset pose condition, and the method specifically comprises the following steps: keeping the distance between the chassis and the reference object in the first direction to be a preset distance value, and keeping the moving direction of the chassis consistent with the preset path direction.
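The application specifies the goal of the correction (hold the preset distance and keep the moving direction aligned with the path) but not a particular control law. One simple possibility is a proportional law; the gains below are illustrative assumptions, not values from the application.

```python
def correction_command(lateral_offset_mm, offset_angle_deg,
                       k_lat=0.5, k_ang=0.8):
    """Drive both the lateral offset (distance error in the first direction)
    and the heading error toward zero. k_lat and k_ang are assumed,
    illustrative gains."""
    v_lateral = -k_lat * lateral_offset_mm  # move back toward the preset distance
    omega = -k_ang * offset_angle_deg       # rotate back toward the path direction
    return v_lateral, omega

# Chassis 10 mm too far in the first direction and yawed 2 degrees:
v, w = correction_command(10.0, 2.0)
```

Both commands are negative, i.e. they push the chassis back toward the preset distance and preset-path direction.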
And S408, controlling the chassis to move in the direction of a preset path according to the target character information, and positioning the chassis to each work station.
The work station refers to a position where the chassis needs to be stopped for bricklaying. The target character information can feed back the real-time position of the chassis in the preset path direction, so that the chassis is controlled to move in the preset path direction according to the target character information, and the chassis can be positioned to each work station.
In the robot navigation control method, a real-time image of a reference object shot by a vision module attached to a chassis is acquired in the process that a robot chassis moves according to a preset path; identifying the real-time image to obtain target line information and target character information; controlling the chassis to move in a first direction perpendicular to the direction of the preset path according to the target line information, so that the pose of the chassis meets the preset pose condition; and controlling the chassis to move in the direction of the preset path according to the target character information, so that the chassis is positioned to each work station. The reference object comprises lines and characters, the lines extend along the direction of a preset path of the chassis, and the characters are arranged along the extending direction of the lines. The reference object is slightly influenced by the change of the surrounding environment, the real-time position of the chassis can be obtained only by identifying the information in the image of the reference object, and then navigation rectification and positioning are carried out, and the positioning is not influenced by the change of other information on the periphery; in addition, the camera with a smaller view field angle can be used for shooting the reference object at a fixed position to acquire an image, a plurality of sensors and multi-angle measurement are not needed, and therefore cost can be reduced, and navigation positioning accuracy is improved.
In one embodiment, the method further comprises the steps of: when the chassis is at an initial position and the pose of the chassis meets a preset pose condition, acquiring an initial image of a reference object shot by a vision module; and identifying the initial image to obtain initial line information and initial character information.
The chassis is moved to an initial position, a section of the reference object scale is ensured to be in the visual field range of the camera, and meanwhile, the pose of the chassis is adjusted to meet the preset pose condition, specifically, the distance between the chassis and the scale in the first direction is a preset distance value, and the moving direction of the chassis is parallel to the preset path direction.
As shown in fig. 5, a schematic diagram of the chassis in the initial position in one embodiment is provided, where 9 denotes a masonry wall, 10 denotes a column, a denotes a camera view range, C denotes a robot coordinate system, P0 denotes an initial position of the chassis 3, D1 denotes a distance between the reference object 8 and the masonry wall 9, the distance being a constant distance, the reference object 8 is embodied as a scale, and a segment of the scale is located in the center of the camera view range a.
The initial line refers to a line identified from the initial image, and any image edge detection algorithm that is already available or may appear later may be used to extract the initial line information in the initial image, which is not limited herein. Taking the reference object as the scale as an example, the initial line is the scale sideline identified from the initial image.
The initial character refers to a character recognized from an initial image, and any image character recognition algorithm that may be used or may appear later may be used to extract information of the initial character in the initial image, which is not limited herein. Taking the reference object as a scale as an example, the initial character is a scale number recognized from the initial image.
In one embodiment, the initial line information includes a first reference position of the identified initial line in the initial image, and the initial character information includes the identified initial character and its second reference position in the initial image; the target line information includes a first real-time position of the identified target line in the real-time image, and the target character information includes the identified target character and its second real-time position in the real-time image.
Taking the scale as the reference object, the first reference position refers to the position, in the initial image, of the scale border line recognized from the initial image; the second reference position refers to the position, in the initial image, of the scale number recognized from the initial image; the first real-time position refers to the position, in the real-time image, of the scale border line recognized from the real-time image; and the second real-time position refers to the position, in the real-time image, of the scale number recognized from the real-time image.
In an embodiment, the step of controlling the chassis to move in a first direction perpendicular to the direction of the preset path according to the target line information to make the pose of the chassis meet a preset pose condition may specifically include: obtaining offset information based on the first real-time position and the first reference position, wherein the offset information comprises an offset angle and an offset, the offset angle is an included angle between a real-time moving direction of the chassis and a preset path direction, and the offset is an offset of the chassis in the first direction; and correcting the pose of the chassis according to the offset information, so that the distance between the chassis and the reference object in the first direction is a preset distance value, and the included angle between the real-time moving direction of the chassis and the preset path direction is a preset included angle value.
Specifically, the offset information may be obtained by comparing the first real-time position and the first reference position. Taking the reference object as the scale as an example, a linear equation of a straight line (named as a real-time straight line) where the scale sideline is located can be obtained through calculation according to the first real-time position, and a linear equation of a straight line (named as a reference straight line) where the scale sideline is located can be obtained through calculation according to the first reference position; calculating the distance between the real-time straight line and the reference straight line in the first direction to obtain the offset of the chassis in the first direction, namely the difference value between the real-time distance between the chassis and the reference object in the first direction and a preset distance value; and calculating an included angle between the real-time straight line and the reference straight line to obtain an included angle between the real-time moving direction of the chassis and the preset path direction, namely an offset angle of the real-time moving direction of the chassis relative to the preset path direction.
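The comparison above can be sketched concretely when each fitted border is expressed as a (slope, intercept) line equation in image coordinates. The evaluation point `x_center` (an assumed 320-px-wide image) and the function name are illustrative choices, not from the application.

```python
import math

def line_offset(ref_line, live_line, x_center=160.0):
    """Compare the ruler border fitted in the real-time image (live_line)
    with the one fitted in the initial image (ref_line). Each line is a
    (slope, intercept) pair with x along the preset-path direction.

    Returns (offset_px, angle_deg): the chassis offset in the first
    direction (in pixels, at the image centre column) and the offset angle
    of its moving direction relative to the preset-path direction."""
    a0, b0 = ref_line
    a1, b1 = live_line
    angle_deg = math.degrees(math.atan(a1) - math.atan(a0))
    offset_px = (a1 * x_center + b1) - (a0 * x_center + b0)
    return offset_px, angle_deg

# Border shifted 10 px down without rotating: pure lateral offset, no angle.
off, ang = line_offset((0.0, 100.0), (0.0, 110.0))
```

A pixel-to-millimetre scale factor, obtainable from camera calibration, would convert `offset_px` into the physical offset used for correction.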
As shown in fig. 6, a schematic diagram of chassis offset in one embodiment is provided, where D2 represents the preset distance value, D represents the offset of the chassis 3 in the first direction, φ represents the offset angle, and the line 81 represents a line of the reference object 8. When the reference object 8 is specifically a scale, the line 81 represents a scale border line, the straight line on which the scale border line lies is the reference straight line, and Q represents a state diagram of offset during chassis movement.
After the offset angle and the offset are obtained, the controller corrects the pose of the chassis in real time according to the offset angle and the offset, and the correction aims to eliminate the offset, so that the chassis returns to the correct position and the correct moving direction to meet the preset pose condition, namely, the distance between the chassis and a reference object in the first direction is a preset distance value, and the included angle between the real-time moving direction of the chassis and the preset path direction is a preset included angle value. As shown in fig. 7, a schematic diagram of the embodiment after correcting the pose of the chassis is provided.
In the embodiment, the offset information is obtained by comparing the real-time position of the scale sideline with the reference position, and the pose of the chassis is corrected in real time according to the offset information, so that the navigation accuracy is improved.
In an embodiment, the step of controlling the chassis to move in the preset path direction according to the target character information to position the chassis to each work station may specifically include: when the target character and the second real-time position thereof meet the arrival condition of the work station, controlling the chassis to stop moving; the work station arrival conditions include: the target character is the same as the first preset character corresponding to the work station, and the second real-time position meets the requirement of the first preset position.
The first preset character refers to the character corresponding to the work station in the reference object, and the first preset position requirement refers to the requirement that the position of the first preset character in the real-time image needs to meet when the chassis is located at the work station. The first preset character and the first preset position requirement may be set according to the actual situation, and are not limited herein.
As shown in fig. 8, a schematic diagram of the work station positioning of the chassis in one embodiment is provided. A plurality of work stations may be preset on the preset path of the chassis. With the initial position P0 of the chassis as the origin, each work station has a predetermined distance from the initial position P0; for example, the first work station P1 has a first predetermined distance d1, the second work station P2 has a second predetermined distance, and so on. Each predetermined distance corresponds to a predetermined character, i.e. each work station corresponds to a predetermined character. Taking the reference object as a scale as an example, each predetermined distance corresponds to a predetermined number on the scale, and the controller can perform accurate positioning based on the predetermined number.
For example, assume that when the chassis is at the initial position, the corresponding predetermined number on the scale is 0 and the position of the predetermined number 0 in the initial image is the image center, and that the first predetermined distance between the first work station and the initial position is 10 cm, with a corresponding predetermined number of 10 on the scale. When the predetermined number 10 is recognized from the real-time image and its position in the real-time image is the image center, the first work station arrival condition is considered to be satisfied, that is, the chassis is considered to have arrived at the first work station, and the controller controls the chassis to stop so as to perform the brick laying work.
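The arrival test just described — the recognized number equals the station's preset number and sits at the image center — can be sketched as follows. The pixel tolerance is an assumed parameter; the patent only states that the second real-time position must meet the first preset position requirement.

```python
def station_reached(target_number, number_px_x, station_number,
                    image_width, tol_px=3):
    """True when the recognized scale number equals the work station's
    first preset character and its horizontal pixel position lies at the
    image center within tol_px (the first preset position requirement)."""
    return (target_number == station_number
            and abs(number_px_x - image_width / 2) <= tol_px)
```

With a 640-pixel-wide image, `station_reached(10, 320, 10, 640)` holds, while a mismatched number or an off-center position does not.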
In the above embodiment, the real-time position of the chassis in the preset path direction is obtained through the scale numbers fed back by vision, so that the robot is guided to stop at the preset work station, and high-precision positioning is realized.
In one embodiment, before positioning the chassis to each work station, the method further comprises the steps of: when the condition of reaching the deceleration point is determined to be met according to the target character information, the moving speed of the chassis is controlled to be reduced; the deceleration point reaching condition includes: the target character is the same as a second preset character corresponding to the deceleration point, and the second real-time position meets the requirement of a second preset position.
The second preset character refers to the character corresponding to the deceleration point in the reference object, and the second preset position requirement refers to the requirement that the position of the second preset character in the real-time image needs to meet when the chassis is located at the deceleration point. The second preset character and the second preset position requirement may be set according to the actual situation, and are not limited herein.
As shown in fig. 8, a deceleration point is preset before each work station; for example, a first deceleration point K1 is provided before the first work station P1, and a second deceleration point K2 is provided before the second work station P2. With the initial position P0 of the chassis as the origin, each deceleration point has a predetermined distance from the initial position P0; for example, the first deceleration point K1 has a first predetermined distance g1. Each deceleration point can also correspond to a predetermined character. Taking the reference object as a scale as an example, each deceleration point corresponds to a predetermined number on the scale; the predetermined number corresponding to the deceleration point is smaller than that corresponding to the work station, and the difference between the two distances is the deceleration distance of the chassis, i.e. the first deceleration distance f1 in the figure is obtained by subtracting g1 from d1. The controller controls the chassis to decelerate from the deceleration point to the work station so that the speed is zero when the work station is reached.
For example, assume that when the chassis is at the initial position, the corresponding predetermined number on the scale is 0 and the position of the predetermined number 0 in the initial image is the image center, and that the predetermined number on the scale corresponding to the first deceleration point is 8. When the predetermined number 8 is recognized from the real-time image and its position in the real-time image is the image center, the first deceleration point arrival condition is considered to be satisfied, that is, the chassis is considered to have reached the first deceleration point, and the controller then controls the moving speed of the chassis to decrease.
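The deceleration behaviour — cruise until the deceleration point, then slow down so the speed is zero at the work station — can be sketched with a linear speed profile. The linear ramp is an assumption for illustration; the patent only requires that the speed reach zero on arrival.

```python
def planned_speed(position, decel_point, station, v_cruise):
    """Target speed at `position` along the preset path: v_cruise before
    the deceleration point, a linear ramp over the deceleration distance
    f = station - decel_point, and zero at (and beyond) the station."""
    if position <= decel_point:
        return v_cruise
    if position >= station:
        return 0.0
    return v_cruise * (station - position) / (station - decel_point)
```

With g1 = 8 and d1 = 10 as in the example above, the speed is halved midway through the deceleration distance f1 = 2 and reaches zero exactly at the work station.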
In the above embodiment, when the chassis reaches the deceleration point, the controller may acquire the moving distance information through the scale numbers fed back by vision and plan the motion control in advance, decelerating ahead of the work station and thereby ensuring that the robot can stop at the preset work station.
It should be noted that, in the process of moving the chassis from deceleration to stop, accurate positioning can still be performed under the guidance of the scale numbers photographed by the camera, achieving millimeter-level positioning accuracy, so that the robot can still correct the offset angle and the offset during deceleration.
In one embodiment, the chassis stops moving after reaching a work station, and the camera shoots a real-time image of the scale in the stopped state to obtain the pose of the chassis at that moment; if the pose is within a preset error range, the chassis of the robot is judged to be in the correct pose at the work station. Then the supporting legs of the chassis are controlled to extend so that the robot is supported and the wheels are separated from the ground; an inclination sensor arranged on the chassis detects the inclination angle of the chassis relative to the horizontal plane, and based on the feedback of the inclination angle, the controller respectively controls the extension lengths of the supporting legs to make the chassis parallel to the horizontal plane, thereby leveling the vehicle body. After the vehicle body is leveled, the controller controls the mechanical arm to grab bricks and release them onto the wall, performing the masonry construction of a single work station. After the single work station is built, the mechanical arm is retracted to the initial position and the chassis supporting legs are retracted so that the wheels contact the ground, and the chassis can move forward to the next work station, until the whole wall is built.
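The body-leveling step can be illustrated with a small-angle model: given the roll and pitch reported by the inclination sensor, each supporting leg's extension is adjusted by the chassis height error at its mounting point. The four-leg layout, sign conventions, and small-angle model below are illustrative assumptions, not details from the patent.

```python
import math

def leg_adjustments(roll, pitch, half_track, half_base):
    """Height corrections for four supporting legs at (±half_track,
    ±half_base): each leg extends by minus the chassis height error at
    its position, which for small tilts is x*tan(roll) + y*tan(pitch)."""
    legs = {"front_left": (-half_track, +half_base),
            "front_right": (+half_track, +half_base),
            "rear_left": (-half_track, -half_base),
            "rear_right": (+half_track, -half_base)}
    return {name: -(x * math.tan(roll) + y * math.tan(pitch))
            for name, (x, y) in legs.items()}
```

With zero roll and pitch all corrections are zero; a small positive roll raises the legs on one side and lowers them on the other by equal amounts, leaving the mean chassis height unchanged.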
In one embodiment, as shown in fig. 9, there is provided a robot brick laying control method including the following steps S901 to S910:
s901, fixing a scale on a preset path which needs to be moved by the robot;
s902, the robot moves to an initial position, ensuring that a section of the scale is within the field of view of the camera attached to the robot; meanwhile, the chassis of the robot is adjusted to the correct pose, a scale image is shot at this moment as the initial image, and the reference edge line and the reference scale value of the scale are extracted from the initial image;
s903, the robot starts an automatic tracking navigation program;
s904, in the process that the robot moves according to the preset path, the camera continuously takes pictures at a certain frequency to obtain a real-time image;
s905, identifying the real-time edge line of the scale from the real-time image, and comparing it with the reference edge line to obtain offset information, which serves as the deviation correction amount perpendicular to the preset path direction (namely the forward direction); the offset information comprises the distance offset between the robot and the scale and the offset angle between the robot and the scale;
s906, identifying the real-time numbers of the scale from the real-time image to feed back the real-time position of the robot in the forward direction;
s907, correcting the pose of the robot chassis according to the offset;
s908, judging whether the robot reaches a work station or not according to the real-time number;
s909, when the robot arrives at a work station, the supporting legs of the chassis are controlled to extend to support the machine body, the machine body is leveled, and the mechanical arm is controlled to grab bricks and release them onto the wall;
s910, after brick laying at a single work station is finished, the mechanical arm and the chassis supporting leg are retracted, and the steps S903 to S909 are executed again until brick laying at all the work stations is finished.
For the detailed description of the steps S901 to S910, reference may be made to the foregoing embodiments, which are not repeated here. In this embodiment, the scale is used as the reference object, which is little affected by changes in the surrounding environment: the real-time position of the chassis can be obtained only by identifying the edge line and the number information in the image of the scale, and navigation deviation correction and positioning are then carried out, so that positioning is not affected by changes in other surrounding information, and the measurement precision can reach millimeter level. In addition, the image can be obtained simply by shooting the scale at a fixed position with a camera having a small field angle, without multiple sensors or multi-angle measurement, so that cost can be reduced while navigation and positioning accuracy is improved.
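The control flow of steps S903 to S908 can be condensed into a minimal tracking loop. The vision callback and the unit step below are stand-ins for the camera, recognition, and chassis interfaces, which the patent does not specify at this level of detail.

```python
def track_to_station(read_scale_number, station_number, max_steps=1000):
    """Advance the chassis one step at a time; at each step the vision
    module reads the scale number at the current position, and the loop
    stops when the station's preset number is observed (steps S904-S908)."""
    position = 0
    for _ in range(max_steps):
        if read_scale_number(position) == station_number:
            return position          # S908: work station reached, stop
        position += 1                # S907: correct pose and advance
    raise RuntimeError("station not reached within max_steps")

# vision stub for illustration: the number visible at position p is p itself
stop_position = track_to_station(lambda p: p, station_number=10)
```

Here `stop_position` is 10; in the real system the callback would run the number recognition of step S906 on the camera's real-time image, and the advance step would apply the deviation correction of steps S905 and S907.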
It should be understood that, although the steps in the flowcharts related to the above embodiments are shown in sequence as indicated by the arrows, these steps are not necessarily executed in that sequence. Unless explicitly stated otherwise herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least a part of the steps in each flowchart may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; the order of performing these sub-steps or stages is not necessarily sequential, and they may be performed in turns or alternately with other steps or with at least a part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 10, there is provided a robot navigation control apparatus 1000 including: an acquisition module 1010, an identification module 1020, a first control module 1030, and a second control module 1040, wherein:
the acquisition module 1010 is used for acquiring a real-time image of a reference object shot by a vision module attached to the chassis in the process that the robot chassis moves according to a preset path, wherein the reference object comprises lines and characters, the lines extend along the direction of the preset path of the chassis, and the characters are arranged along the extending direction of the lines.
And the recognition module 1020 is configured to recognize the real-time image to obtain target line information and target character information.
The first control module 1030 is configured to control the chassis to move in a first direction perpendicular to the preset path direction according to the target line information, so that the pose of the chassis meets a preset pose condition.
And the second control module 1040 is configured to control the chassis to move in the preset path direction according to the target character information, so that the chassis is positioned to each work station.
In one embodiment, the obtaining module 1010 is further configured to: when the chassis is at an initial position and the pose of the chassis meets a preset pose condition, acquiring an initial image of a reference object shot by a vision module; and identifying the initial image to obtain initial line information and initial character information.
In one embodiment, the preset pose conditions include: a preset distance value between the chassis and the reference object in the first direction, and a preset included angle value between the moving direction of the chassis and the preset path direction; the initial line information comprises a first reference position of the identified initial line in the initial image, and the initial character information comprises an identified initial character and a second reference position thereof in the initial image; the target line information includes a first real-time position of the identified target line in the real-time image, and the target character information includes an identified target character and a second real-time position thereof in the real-time image.
In one embodiment, the first control module 1030, when controlling the chassis to move in a first direction perpendicular to the preset path direction according to the target line information, so that the pose of the chassis meets a preset pose condition, is specifically configured to: obtaining offset information based on the first real-time position and the first reference position, wherein the offset information comprises an offset angle and an offset, the offset angle is an included angle between a real-time moving direction of the chassis and a preset path direction, and the offset is an offset of the chassis in the first direction; and correcting the pose of the chassis according to the offset information, so that the distance between the chassis and the reference object in the first direction is a preset distance value, and the included angle between the real-time moving direction of the chassis and the preset path direction is a preset included angle value.
In an embodiment, the second control module 1040, when controlling the chassis to move in the preset path direction according to the target character information, and positioning the chassis to each work station, is specifically configured to: when the target character and the second real-time position thereof meet the arrival condition of the work station, controlling the chassis to stop moving; the work station arrival conditions include: the target character is the same as the first preset character corresponding to the work station, and the second real-time position meets the requirement of the first preset position.
In one embodiment, the second control module 1040 is further configured to: before the chassis is positioned to each work station, when the condition that the speed reduction point is reached is determined to be met according to the target character information, the moving speed of the chassis is controlled to be reduced; the deceleration point reaching condition includes: the target character is the same as a second preset character corresponding to the deceleration point, and the second real-time position meets the requirement of a second preset position.
In one embodiment, the reference object is a scale, the lines are the edges of the scale, and the characters are the numbers on the scale.
For specific limitations of the robot navigation control device, reference may be made to the above limitations of the robot navigation control method, which are not described herein again. The modules in the robot navigation control device can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In an embodiment, a robot is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the above-described method embodiments when executing the computer program.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the above-described method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the respective method embodiment as described above.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the steps in the various method embodiments described above.
It should be understood that the terms "first", "second", etc. in the above-described embodiments are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Further, in the description of the present application, "a plurality" means at least two unless otherwise specified.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several implementations of the present application, and the description thereof is relatively specific and detailed, but should not be construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (11)

1. A method of controlling navigation of a robot, the method comprising:
acquiring a real-time image of a reference object shot by a vision module attached to a chassis during the movement of the robot chassis according to a preset path, wherein the reference object comprises lines and characters, the lines extend along the preset path direction of the chassis, and the characters are arranged along the extending direction of the lines;
identifying the real-time image to obtain target line information and target character information;
controlling the chassis to move in a first direction perpendicular to the direction of the preset path according to the target line information, so that the pose of the chassis meets a preset pose condition;
and controlling the chassis to move in the direction of the preset path according to the target character information, so that the chassis is positioned to each work station.
2. The method of claim 1, further comprising:
when the chassis is at an initial position and the pose of the chassis meets the preset pose condition, acquiring an initial image of the reference object shot by the vision module;
and identifying the initial image to obtain initial line information and initial character information.
3. The method according to claim 2, wherein the preset pose conditions include: a preset distance value between the chassis and the reference object in the first direction and a preset included angle value between the moving direction of the chassis and the preset path direction;
the initial line information comprises a first reference position of the identified initial line in the initial image, and the initial character information comprises an identified initial character and a second reference position thereof in the initial image;
the target line information comprises a first real-time position of the identified target line in the real-time image, and the target character information comprises an identified target character and a second real-time position of the identified target character in the real-time image.
4. The method according to claim 3, wherein controlling the chassis to move in a first direction perpendicular to the preset path direction according to the target line information so that the pose of the chassis meets a preset pose condition comprises:
obtaining offset information based on the first real-time position and the first reference position, wherein the offset information comprises an offset angle and an offset, the offset angle is an included angle between a real-time moving direction of the chassis and the preset path direction, and the offset is an offset of the chassis in the first direction;
and correcting the pose of the chassis according to the offset information, so that the distance between the chassis and the reference object in the first direction is the preset distance value, and the included angle between the real-time moving direction of the chassis and the preset path direction is the preset included angle value.
5. The method of claim 3, wherein controlling the chassis to move in the preset path direction according to the target character information to position the chassis to each work station comprises:
when the target character and the second real-time position thereof meet the arrival condition of the work station, controlling the chassis to stop moving; the work station arrival conditions include: the target character is the same as a first preset character corresponding to the work station, and the second real-time position meets the requirement of a first preset position.
6. The method of claim 5, further comprising, prior to positioning the chassis to each work station:
when the condition of reaching a deceleration point is determined to be met according to the target character information, controlling the moving speed of the chassis to be reduced; the deceleration point reaching condition includes: and the target character is the same as a second preset character corresponding to the deceleration point, and the second real-time position meets the requirement of a second preset position.
7. The method of any one of claims 1 to 6, wherein the reference object is a scale, the line is an edge of the scale, and the character is a number on the scale.
8. A robot navigation control apparatus, characterized in that the apparatus comprises:
the robot comprises an acquisition module, a display module and a control module, wherein the acquisition module is used for acquiring a real-time image of a reference object shot by a vision module attached to a chassis in the process that a robot chassis moves according to a preset path, the reference object comprises lines and characters, the lines extend along the direction of the preset path of the chassis, and the characters are arranged along the extending direction of the lines;
the identification module is used for identifying the real-time image to obtain target line information and target character information;
the first control module is used for controlling the chassis to move in a first direction perpendicular to the direction of the preset path according to the target line information, so that the pose of the chassis meets a preset pose condition;
and the second control module is used for controlling the chassis to move in the direction of the preset path according to the target character information, so that the chassis is positioned to each work station.
9. A robot comprising a memory and a processor, the memory storing a computer program, characterized in that the processor realizes the steps of the method of any of claims 1 to 7 when executing the computer program.
10. A robot navigation control system comprising a reference object and a robot as claimed in claim 9, the robot comprising a chassis and a vision module; the vision module is attached to the chassis and moves with the chassis according to a preset path; the reference object comprises lines and characters within the field of view of the vision module, the lines extend along the direction of the preset path of the chassis, and the characters are arranged along the extending direction of the lines.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202111136743.8A 2021-09-27 2021-09-27 Robot navigation control method, device and system, robot and storage medium Active CN113954064B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111136743.8A CN113954064B (en) 2021-09-27 2021-09-27 Robot navigation control method, device and system, robot and storage medium

Publications (2)

Publication Number Publication Date
CN113954064A true CN113954064A (en) 2022-01-21
CN113954064B CN113954064B (en) 2022-08-16

Family

ID=79462310

Country Status (1)

Country Link
CN (1) CN113954064B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108759853A (en) * 2018-06-15 2018-11-06 浙江国自机器人技术有限公司 A kind of robot localization method, system, equipment and computer readable storage medium
CN109397249A (en) * 2019-01-07 2019-03-01 重庆大学 The two dimensional code positioning crawl robot system algorithm of view-based access control model identification
CN109782772A (en) * 2019-03-05 2019-05-21 浙江国自机器人技术有限公司 A kind of air navigation aid, system and cleaning robot
CN110539307A (en) * 2019-09-09 2019-12-06 北京极智嘉科技有限公司 Robot, robot positioning method, positioning navigation system and positioning mark
WO2020012710A1 (en) * 2018-07-13 2020-01-16 オムロン株式会社 Manipulator control device, manipulator control method, and manipulator control program
CN111380535A (en) * 2020-05-13 2020-07-07 广东星舆科技有限公司 Navigation method and device based on visual label, mobile machine and readable medium
CN111823236A (en) * 2020-07-25 2020-10-27 湘潭大学 Library management robot and control method thereof
CN112497223A (en) * 2020-11-18 2021-03-16 广东博智林机器人有限公司 Method and device for generating coating process parameters of coating robot
CN113327281A (en) * 2021-06-22 2021-08-31 广东智源机器人科技有限公司 Motion capture method and device, electronic equipment and flower drawing system


Also Published As

Publication number Publication date
CN113954064B (en) 2022-08-16

Similar Documents

Publication Publication Date Title
CN108571971B (en) AGV visual positioning system and method
KR102367438B1 (en) Simultaneous positioning and mapping navigation method, apparatus and system combined with markers
KR102022388B1 (en) Calibration system and method using real-world object information
KR20200011978A (en) Map data correction method and device
CN104827480A (en) Automatic calibration method of robot system
EP3070675A1 (en) Image processor for correcting deviation of a coordinate in a photographed image at appropriate timing
JP2006252473A (en) Obstacle detector, calibration device, calibration method and calibration program
KR20100073190A (en) Method and apparatus for detecting position and orientation
US20160093053A1 (en) Detection method and detection apparatus for detecting three-dimensional position of object
KR101379787B1 (en) An apparatus and a method for calibration of camera and laser range finder using a structure with a triangular hole
CN109472778B (en) Appearance detection method for towering structure based on unmanned aerial vehicle
JP2020122785A (en) Method and device for making own vehicle position determination to update hd map by using v2x information fusion
CN107977985A (en) Unmanned plane hovering method, apparatus, unmanned plane and storage medium
CN108154210A (en) A kind of Quick Response Code generation, recognition methods and device
JP5086824B2 (en) TRACKING DEVICE AND TRACKING METHOD
JP2018005709A (en) Autonomous mobile device
KR102490521B1 (en) Automatic calibration through vector matching of the LiDAR coordinate system and the camera coordinate system
CN113954064B (en) Robot navigation control method, device and system, robot and storage medium
CN113665591A (en) Unmanned control method, device, equipment and medium
CN114494466B (en) External parameter calibration method, device and equipment and storage medium
KR100773271B1 (en) Method for localization of mobile robot with a single camera
JPH1139464A (en) Image processor for vehicle
CN114554030B (en) Device detection system and device detection method
CN113011212A (en) Image recognition method and device and vehicle
CN113218392A (en) Indoor positioning navigation method and navigation device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant