CN116276997A - Robot three-dimensional scanning positioning method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116276997A
Authority
CN
China
Prior art keywords
target
distance
task
point cloud
mechanical arm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310247037.3A
Other languages
Chinese (zh)
Inventor
谭云龙
刘贵林
张燕彤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CISDI Research and Development Co Ltd
Original Assignee
CISDI Research and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CISDI Research and Development Co Ltd filed Critical CISDI Research and Development Co Ltd
Priority to CN202310247037.3A priority Critical patent/CN116276997A/en
Publication of CN116276997A publication Critical patent/CN116276997A/en
Pending legal-status Critical Current

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J9/1628 Programme controls characterised by the control loop
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661 Programme controls characterised by task planning, object-oriented languages
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a robot three-dimensional scanning and positioning method and apparatus, an electronic device, and a storage medium.

Description

Robot three-dimensional scanning positioning method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of robot positioning technologies, and in particular, to a method and apparatus for three-dimensional scanning positioning of a robot, an electronic device, and a storage medium.
Background
In the metallurgical field there are many tasks performed in harsh environments with high temperature and heavy dust, such as replacing the ladle cylinder and the medium pipeline in the continuous casting area, and robots are generally used to perform these tasks in place of workers. Existing vision devices mostly use ordinary optical cameras and laser cameras; however, in the harsh metallurgical environment the molten metal causes severe illumination changes that easily disturb optical cameras, so laser cameras are generally adopted when the spatial position must be perceived accurately. Because of hoisting position deviations, mechanical structure errors and similar factors, the task targets of many process steps in the metallurgical flow are not fixed, while the imaging range of a laser camera is relatively narrow, generally within a few tens of centimetres; choosing a laser camera with a large imaging range brings drawbacks such as excessive camera size and high cost. When the target position changes greatly in the depth-of-field direction, photographing and scanning from a fixed position cannot guarantee that the distance between the camera and the target stays within the effective imaging distance, so defocusing may occur.
Disclosure of Invention
In view of the above drawbacks of the prior art, the present invention provides a robot three-dimensional scanning and positioning method, apparatus, electronic device, and storage medium, so as to solve the technical problem that photographing and scanning from a fixed position suffers from defocusing when the position of the task target to be located changes greatly in the depth-of-field direction.
The invention provides a robot three-dimensional scanning and positioning method, which comprises the following steps: acquiring a standard distance and an initial imaging position, and measuring the distance between the end of the mechanical arm and a task target as a preliminary distance; calculating the difference between the standard distance and the preliminary distance to obtain a target offset distance, and determining a target imaging position from the initial imaging position and the target offset distance; controlling the end of the mechanical arm to move toward the target imaging position until the point cloud acquisition device, which is arranged at the end of the mechanical arm, reaches the target imaging position; performing three-dimensional scanning of the marker block, which is arranged on the task target, with the point cloud acquisition device to obtain a marker block point cloud; and locating the task target based on the marker block point cloud.
In an embodiment of the present invention, measuring a distance between a distal end of a mechanical arm and a task target as a preliminary distance includes: responding to a task instruction, controlling the tail end of the mechanical arm to move to a preset ranging position until a ranging device reaches the preset ranging position, wherein the ranging device is arranged at the tail end of the mechanical arm; and sending a ranging instruction to enable the ranging device to measure the tail end plane distance between the tail end of the mechanical arm and a target plane, wherein the tail end plane distance is used as the preliminary distance, and the target plane is arranged on the task target.
In an embodiment of the present invention, after locating the task target based on the tag block point cloud, the method includes: responding to a next task instruction, and if a next task target is different from the task target, matching a next preset ranging position and a next initial imaging position corresponding to the next task target, wherein the next task target is determined based on the next task instruction, and the next task instruction comprises identification information of the next task target; controlling the tail end of the mechanical arm to move towards the next preset ranging position until the ranging device reaches the next preset ranging position; transmitting a next ranging instruction so that the ranging device measures a next end plane distance between the tail end of the mechanical arm and a target plane of the next task target, and taking the next end plane distance as a next preliminary distance; performing difference calculation on the standard distance and the next preliminary distance to obtain a next target offset distance, and determining a next target imaging position according to the next initial imaging position and the next target offset distance; controlling the tail end of the mechanical arm to move to the next target imaging position until the point cloud acquisition equipment reaches the next target imaging position; three-dimensional scanning is carried out on the mark block of the next task target through the point cloud acquisition equipment, so that a next mark block point cloud is obtained; and positioning the next task target based on the next mark block point cloud.
In an embodiment of the present invention, locating the task target based on the marker block point cloud includes: segmenting the marker block point cloud with an image segmentation method to obtain point clouds of at least three targets, wherein the targets are arranged on the marker block and the target centers are not on the same straight line; processing each target point cloud with a grid search algorithm to obtain the target center coordinate position of each target; and determining three reference targets from the at least three targets, calculating the center coordinate position of the task target based on the target coordinate positions of the three reference targets and the preset relative distances between the target centers of the three reference targets and the center of the task target, and calculating the normal vector of the task target based on the target coordinate positions of the three reference targets, so as to obtain the pose of the task target.
In an embodiment of the present invention, the method further includes: sending a scanning instruction to enable the point cloud acquisition equipment to perform multiple three-dimensional scanning on the mark blocks to obtain multiple groups of mark block point clouds; and repeatedly positioning the task target based on the plurality of groups of mark block point clouds.
In one embodiment of the invention, the target comprises any one of a target hole, a target ball, or a target block.
In an embodiment of the invention, the distance measuring device is a laser distance measuring device, the point cloud collecting device is a three-dimensional laser camera, the laser distance measuring device and the three-dimensional laser camera are arranged in a protection box, and the protection box is used for introducing external cold air.
In an embodiment of the present invention, there is also provided a three-dimensional scanning and positioning device for a robot, including: the distance measuring module is used for acquiring a standard distance and an initial imaging position, and measuring the distance between the tail end of the mechanical arm and a task target as a primary distance; the first processing module is used for carrying out difference value calculation on the standard distance and the preliminary distance to obtain a target offset distance, and determining a target imaging position according to the initial imaging position and the target offset distance; the scanning module is used for controlling the tail end of the mechanical arm to move towards the target imaging position until the point cloud acquisition equipment reaches the target imaging position, and carrying out three-dimensional scanning on the mark block through the point cloud acquisition equipment to obtain mark block point cloud, wherein the point cloud acquisition equipment is arranged at the tail end of the mechanical arm, and the mark block is arranged on the task target; and the second processing module is used for positioning the task target based on the mark block point cloud.
In an embodiment of the present invention, there is also provided an electronic device including: one or more processors; and a storage means for storing one or more programs which, when executed by the one or more processors, cause the electronic device to implement the robotic three-dimensional scanning positioning method as described above.
In an embodiment of the present invention, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to perform the robot three-dimensional scanning positioning method as described above.
The invention has the following beneficial effects. The invention provides a robot three-dimensional scanning and positioning method and apparatus, an electronic device, and a storage medium. In the method, the distance between the end of the mechanical arm and the task target is measured as a preliminary distance, the target imaging position is calculated from the preliminary distance, the standard distance and the initial imaging position, and the mechanical arm is controlled to move so that the point cloud acquisition device at the end of the arm reaches the target imaging position and then scans the marker block. The target imaging position can therefore be adjusted adaptively when the task target position changes greatly in the depth-of-field direction. Because the point cloud acquisition device is mounted at the end of the mechanical arm and is carried to the target imaging position by the motion of the arm, defocusing of the marker block under large depth-direction changes of the target position is effectively avoided, the accuracy of the marker block point cloud is guaranteed, and the accuracy of locating the task target is improved.
Drawings
FIG. 1 is a flow chart of a method for three-dimensional scanning positioning of a robot, shown in an exemplary embodiment of the invention;
FIG. 2 is a schematic diagram illustrating robotic ranging in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of a laser camera imaging range according to an embodiment of the present invention;
FIG. 4 is a schematic illustration of a robotic arm tip moving to a target imaging position, in accordance with an embodiment of the present invention;
FIG. 5 is a point cloud of marker blocks, according to an embodiment of the present invention;
FIG. 6 is a block diagram of a robotic three-dimensional scanning positioning device, shown in accordance with an exemplary embodiment of the present invention;
FIG. 7 is a schematic view of a robotic three-dimensional scanning positioning device shown in accordance with another exemplary embodiment of the present invention;
fig. 8 is a schematic view of a robot arm tip according to an embodiment of the present invention.
Reference numerals: 1-a mechanical arm; 2-a flange plate; 3-a shooting module; 4-a ranging module; 5-a flag block; 6-a target plane; 7-an execution module; 8-task goal.
Detailed Description
Other advantages and effects of the present invention will become readily apparent to those skilled in the art from the following disclosure, which describes embodiments of the invention with reference to specific examples. The invention may also be practiced or applied in other, different embodiments, and the details in this description may be modified or changed from different viewpoints and for different applications without departing from the spirit of the invention. It should be noted that, provided there is no conflict, the following embodiments and the features in the embodiments may be combined with each other.
It should be noted that the illustrations provided in the following embodiments merely illustrate the basic concept of the invention in a schematic way; the drawings show only the components related to the invention and are not drawn according to the number, shape and size of the components in an actual implementation. In practice the form, number and proportion of the components may vary arbitrarily, and the component layout may be more complicated.
It should also be noted that terms such as "first" and "second" are used herein only to distinguish similar objects and do not imply any order or precedence among them. Expressions such as "comprising" and "having" mean that the listed items are not exclusive; items other than those listed may also be present.
It should be understood that the numbering of steps in this disclosure is for convenience of description only and is not a limitation on the scope of the invention. The numbering does not imply an order of execution; the order in which the processes are performed should be determined by their functions and internal logic.
In the following description, numerous details are set forth in order to provide a more thorough explanation of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the embodiments may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form rather than in detail, in order to avoid obscuring the embodiments of the invention.
In metallurgical field operation, when a robot performs a task in place of a worker, a large change of the task target position in the depth-of-field direction of the camera can cause identification to fail because the target falls outside the narrow scanning range of the laser scanning camera and defocuses. Choosing a laser scanning camera with a larger depth of field can avoid this problem to some extent, but such a camera is usually bulky, heavy and considerably more expensive.
To solve these problems, embodiments of the present invention propose a robot three-dimensional scanning positioning method, a robot three-dimensional scanning positioning apparatus, an electronic device, a computer-readable storage medium, and a computer program product, respectively, which will be described in detail below.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for positioning a robot by three-dimensional scanning according to an exemplary embodiment of the present invention. As shown in fig. 1, in an exemplary embodiment, the three-dimensional scanning positioning method of the robot at least includes steps S110 to S150, which are described in detail as follows:
step S110, a standard distance and an initial imaging position are obtained, and the distance between the tail end of the mechanical arm and the task target is measured and used as the initial distance.
In one embodiment of the invention, the task target refers to the object of a task executed by the robot. For example, when replacing the cylinder, the cylinder is inserted into a groove of the steel ladle, so the groove of the ladle is the task target. Before the robot performs the task, it needs to perceive the position of the task target in space: first the standard distance and the initial imaging position are acquired, then the distance between the end of the mechanical arm and the task target is measured and named the preliminary distance. The robot of this embodiment is a mechanical arm, which may be a six-degree-of-freedom mechanical arm. The standard distance is a preset distance threshold between the end of the mechanical arm and the task target, and the initial imaging position is the preset ideal imaging position of the point cloud acquisition device. The robot performs tasks through an actuator arranged at the end of the mechanical arm, for example an actuator that grips the cylinder.
In one embodiment of the invention, the distance between the tail end of the mechanical arm and the task target is measured as a preliminary distance, and the method comprises the following steps:
step S111, responding to the task instruction, controlling the tail end of the mechanical arm to move towards the preset ranging position until the ranging device reaches the preset ranging position.
In one embodiment of the invention, the distance measuring device is arranged at the tail end of the mechanical arm, and the robot plans a movement track according to the preset distance measuring position after receiving the task instruction, and controls the mechanical arm to move according to the movement track so that the distance measuring device at the tail end of the mechanical arm reaches the preset distance measuring position.
Step S112, a distance measuring instruction is sent to enable the distance measuring device to measure the end plane distance between the tail end of the mechanical arm and the target plane, and the end plane distance is taken as the primary distance.
In one embodiment of the present invention, the target plane is arranged on the task target. After the ranging device at the end of the mechanical arm reaches the preset ranging position, the robot sends a ranging command so that the ranging device measures the end plane distance between the end of the mechanical arm and the target plane, and this end plane distance is used as the preliminary distance between the end of the mechanical arm and the task target. The ranging device may be any one of a laser ranging device, an infrared ranging device and an ultrasonic ranging device. The target plane is fixedly arranged at the bottom of the task target, although it may be fixed at any other position on the task target that the ranging device can measure. It should be noted that, since the position of the task target is not fixed, the target plane must be large enough to ensure the accuracy of the measured distance. The specific dimensions of the target plane are determined by the task to be performed; for example, for replacing the ladle cylinder, because the stopping position of the ladle deviates each time, the width of the target plane may be greater than or equal to 30 cm and the height greater than or equal to 10 cm. In addition, the robot may send the ranging instruction directly to the ranging device, or send it to the industrial personal computer, which then controls the ranging device to measure the end plane distance between the end of the mechanical arm and the target plane; this is not limited here.
Referring to fig. 2, fig. 2 is a schematic diagram illustrating robot ranging according to an embodiment of the present invention. As shown in fig. 2, the long black straight line represents the laser beam emitted by the laser range finder 4. The laser range finder 4 is fixed on the flange 2 at the end of the mechanical arm 1. When the robot receives an instruction to execute a task, it first controls the end of the mechanical arm 1 to move to the preset ranging position, which is a fixed point obtained by manual teaching. When the end of the mechanical arm 1 is at this point, the laser emitted by the laser range finder 4 is perpendicular to the target plane 6, and the robot sends a ranging instruction so that the industrial personal computer reads the distance between the end of the robot and the target plane 6 (the end plane distance) fed back by the laser range finder 4.
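The ranging step can be pictured with a short sketch. The following is a minimal, hypothetical illustration of steps S111 and S112 in Python, where `move_arm_to` and `read_range` stand in for the robot motion and laser range finder interfaces; these names and the example pose are assumptions for illustration only, not part of the patent.

```python
from typing import Callable, Sequence

def measure_preliminary_distance(move_arm_to: Callable[[Sequence[float]], None],
                                 read_range: Callable[[], float],
                                 preset_ranging_pose: Sequence[float]) -> float:
    """Drive the arm end to the taught ranging point, then return the end plane
    distance (arm end to target plane) reported by the range finder; this value
    is used as the preliminary distance in step S120."""
    move_arm_to(preset_ranging_pose)   # at this pose the laser is perpendicular to the target plane
    return read_range()                # reading fed back by the laser range finder

# Stubbed example: the range finder reports 0.57 m.
d = measure_preliminary_distance(lambda pose: None, lambda: 0.57, (1.1, 0.2, 0.8))
print(d)   # 0.57
```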
Step S120, a difference calculation is performed on the standard distance and the preliminary distance to obtain a target offset distance, and a target imaging position is determined according to the initial imaging position and the target offset distance.
In one embodiment of the invention, since the task target may shift substantially in the depth-of-field direction, the imaging position should change accordingly to prevent the marker block from going out of focus. The difference between the standard distance and the preliminary distance is calculated and used as the target offset distance, and the initial imaging position is corrected with the target offset distance to obtain the end coordinates of the robot at the current ideal photographing point (the target imaging position).
Taking the replacement of the cylinder as an example, the preliminary distance d between the end of the mechanical arm and the task target groove is measured and compared with the stored standard distance d0 to obtain the target offset distance Δd, and the target imaging position is then calculated from Δd and the initial imaging position. The calculation formula is given in the original application as an image (reference BDA0004126370130000061); in it, (x, y, z) is the initial imaging position, (x', y', z') is the target imaging position, R is the distance from the center of the rotary shaft of the continuous casting turret to the storage position of the ladle cylinder, and Δd is the target offset distance.
It should be noted that, the above-mentioned calculation process of the target imaging position may be performed by a robot or by an industrial personal computer, which is not limited herein.
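As a rough illustration of how the correction in step S120 can be implemented, the sketch below simply shifts the taught photographing point along the camera's depth (approach) axis by the measured offset. This simplification is an assumption made for the example: the patent's own formula, reproduced only as an image, additionally involves the turret radius R, so the code is not the patented equation.

```python
import numpy as np

def target_imaging_position(initial_imaging_pos: np.ndarray,
                            depth_axis: np.ndarray,
                            standard_distance: float,
                            preliminary_distance: float) -> np.ndarray:
    """Return a corrected photographing point (x', y', z')."""
    delta_d = standard_distance - preliminary_distance   # target offset distance Δd
    axis = depth_axis / np.linalg.norm(depth_axis)        # unit vector pointing toward the target
    # If the target sits farther away than during teaching (Δd < 0), push the
    # photographing point toward it by the same amount, and vice versa, so the
    # marker block stays near the camera's optimal imaging distance.
    return initial_imaging_pos - delta_d * axis

p0 = np.array([1.20, 0.35, 0.90])      # taught (initial) imaging position, metres
approach = np.array([0.0, 1.0, 0.0])   # camera depth direction in the robot base frame
d0, d = 0.50, 0.57                     # standard distance and measured preliminary distance
print(target_imaging_position(p0, approach, d0, d))   # ≈ [1.20, 0.42, 0.90]
```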
And step S130, controlling the tail end of the mechanical arm to move towards the target imaging position until the point cloud acquisition equipment reaches the target imaging position.
In one embodiment of the invention, the point cloud acquisition device is arranged at the tail end of the mechanical arm, the robot plans a motion track according to the target imaging position, and the mechanical arm is controlled to move according to the motion track, so that the tail end of the mechanical arm moves towards the target imaging position until the point cloud acquisition device reaches the target imaging position.
And step S140, carrying out three-dimensional scanning on the mark block through the point cloud acquisition equipment to obtain the point cloud of the mark block.
In one embodiment of the invention, the marker block is arranged on the task target and is used to calibrate the position of the task target. After the point cloud acquisition device at the end of the mechanical arm reaches the target imaging position, the marker block lies at the optimal imaging distance of the point cloud acquisition device, and the robot can send a scanning instruction to scan the marker block three-dimensionally and generate the marker block point cloud. The point cloud acquisition device may be a three-dimensional laser camera, a laser radar, or the like. In this embodiment the marker block is fixed at the top of the task target, but it may be fixed at any other position on the task target that the point cloud acquisition device can scan, and there may be one or more marker blocks. In addition, the robot may send the scanning instruction directly to the point cloud acquisition device, or send it to the industrial personal computer, which then controls the point cloud acquisition device to scan the marker block; this is not limited here.
Referring to fig. 3, fig. 3 is a schematic diagram illustrating an imaging range of a laser camera according to an embodiment of the invention. As shown in fig. 3, the effective imaging range of the laser camera is between the maximum imaging distance and the minimum imaging distance, and generally there is an optimal imaging distance within the range, where the optimal imaging distance is calibrated by a camera manufacturer and provided for a user, and a most accurate point cloud image can be obtained by shooting and scanning at the optimal imaging distance.
Before responding to a task instruction, ranging teaching is carried out on the robot: a fixed ranging point (the preset ranging position) is taught manually, and while the task target is stationary at its initial position the end of the mechanical arm is moved to the preset ranging position until the ranging device at the end of the arm reaches it; the distance d0 between the end of the mechanical arm and the task target is then measured by the ranging device, and d0 is stored as the standard distance. When the robot program is calibrated, an ideal photographing point (the initial imaging position) is also taught such that, when the point cloud acquisition device is at this point and the task target is stationary at its initial position, the marker block lies exactly at the optimal imaging distance of the point cloud acquisition device.
In one embodiment of the invention, the distance measuring device is a laser distance measuring device, the point cloud collecting device is a three-dimensional laser camera, the laser distance measuring device and the three-dimensional laser camera are arranged in a protection box, and the protection box is used for introducing external cold air.
Because the effective measurement range of the laser sensor (laser range finder) is usually 10 cm to 1200 cm, much larger than the imaging range of the three-dimensional laser camera (usually 30 cm to 80 cm), even a large change of the task target position in the radial direction does not exceed the effective measurement range of the laser range finder, which can therefore provide the offset data for adaptively adjusting the photographing point. In addition, the laser range finder and the three-dimensional laser camera are placed in a protection box into which cooling gas is introduced, and the protection box may be made of a high-temperature-resistant material to suit the high-temperature, dusty environment on site and extend the service life of the laser range finder and the three-dimensional laser camera.
Referring to fig. 4, fig. 4 is a schematic diagram illustrating the movement of the end of the mechanical arm to the target imaging position according to an embodiment of the invention. As shown in fig. 4, the end of the mechanical arm 1 is at the target imaging position, and the laser camera 3 (three-dimensional laser camera) is fixed on the flange 2 at the end of the mechanical arm 1. The two dashed straight lines indicate the field of view of the laser camera 3, i.e. its imaging range, and the marker block 5 lies within the imaging range near the optimal imaging distance. At this time the robot sends a scanning instruction so that the laser camera 3 starts the three-dimensional scanning of the marker block 5.
And step S150, positioning a task target based on the mark block point cloud.
In one embodiment of the invention, the marker block point cloud can be processed by adopting an image segmentation method, and the pose of the marker block is calculated by adopting a grid search algorithm, so that the robot can position a task target according to the pose of the marker block, and an executing mechanism at the tail end of the mechanical arm can be controlled to execute the task subsequently.
In one embodiment of the present invention, step S150 includes the following:
dividing the point cloud of the mark block according to an image dividing method to obtain point clouds of at least three target objects, wherein the target objects are arranged on the mark block, and the target centers of the target objects are on different straight lines;
Calculating the point cloud of the target object according to a grid search algorithm to obtain the target center coordinate position of the target object;
determining three reference targets from the at least three targets, calculating the central coordinate position of the task target based on the target coordinate positions of the three reference targets and the preset relative distance between the target centers of the three reference targets and the center of the task target, and calculating the normal vector of the task target based on the target coordinate positions of the three reference targets to obtain the pose of the task target.
In this embodiment, three or more targets are set on the marker block. The marker block point cloud is segmented with an image segmentation method to identify the edges and obtain a point cloud containing all the targets; the edges of each target are then identified with a grid search algorithm, the point cloud belonging to each target is extracted, and the target center coordinate position of each target is calculated, so that the center coordinate positions of all targets are obtained. The pose of the task target can then be calculated by a three-point positioning method. Since the relative distance between the center of each target and the center of the task target is fixed, these relative distances can be measured in advance by a person skilled in the art and stored as preset relative distances. Three reference targets are chosen from all the targets, the center coordinate position of the task target is calculated from the target coordinate positions of the three reference targets and the preset relative distances between their centers and the center of the task target, and the normal vector of the marker block is calculated from the target coordinate positions of the three reference targets.
It should be noted that, the above processing process of the point cloud of the mark block may be completed by a robot or by an industrial personal computer, which is not limited herein.
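One way to realise the three-point positioning described above is sketched below. It assumes the preset relative offset of the task-target centre is stored as a vector expressed in a local frame built from the three reference target centres; the patent only states that the relative distances are measured and stored in advance, so this representation is an assumption for illustration.

```python
import numpy as np

def task_target_pose(p1: np.ndarray, p2: np.ndarray, p3: np.ndarray,
                     preset_offset_local: np.ndarray):
    """Return (centre, normal) of the task target from three target centres."""
    x_axis = p2 - p1
    x_axis = x_axis / np.linalg.norm(x_axis)
    normal = np.cross(p2 - p1, p3 - p1)            # normal of the marker-block plane
    normal = normal / np.linalg.norm(normal)       # (the three centres must not be collinear)
    y_axis = np.cross(normal, x_axis)              # completes a right-handed local frame
    rotation = np.column_stack((x_axis, y_axis, normal))
    centre = p1 + rotation @ preset_offset_local   # map the taught offset into the base frame
    return centre, normal

# Example with three circle centres extracted from the marker block point cloud (metres).
c1 = np.array([0.00, 0.00, 0.50])
c2 = np.array([0.10, 0.00, 0.50])
c3 = np.array([0.00, 0.08, 0.50])
centre, normal = task_target_pose(c1, c2, c3, np.array([0.05, 0.04, -0.20]))
print(centre, normal)   # -> [0.05 0.04 0.3] and [0. 0. 1.]
```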
In one embodiment of the invention, the target comprises any one of a target hole, a target ball, or a target block. The marker block may be provided with at least three holes serving as target holes, which may be round holes or polygonal holes; at least three spherical objects may be welded onto the marker block as target balls; or at least three blocks may be welded onto the marker block as target blocks, which may be polygonal blocks such as squares or triangles. Alternatively, the targets may be circular, square or triangular marks in a color clearly different from that of the marker block; for example, when the marker block is gray, at least three circles may be drawn on it with a black marker, with the three circle centers not on the same straight line.
According to the technical scheme of this embodiment, when the position of the task target changes greatly in the depth-of-field direction, the distance between the end of the mechanical arm and the task target is measured as the preliminary distance, the initial imaging position is adjusted by the difference between the preliminary distance and the standard distance to obtain the target imaging position, and the end of the mechanical arm is moved until the point cloud acquisition device at the end of the arm reaches the target imaging position, so that the marker block of the task target lies at the optimal imaging distance of the point cloud acquisition device when it is scanned. Even if the task target position changes greatly in the depth-of-field direction, defocusing and blurring of the marker block are avoided, the accuracy of the marker block point cloud is guaranteed, and the accuracy of locating the task target is improved. Referring to fig. 5, fig. 5 is a marker block point cloud according to an embodiment of the present invention. As shown in fig. 5, the marker block point cloud contains three round holes. After the industrial personal computer obtains the point cloud data (the marker block point cloud), it calculates the three-dimensional coordinates of the centers of the three round holes in space according to the image segmentation method, and the three-dimensional coordinates and attitude of the task target in space can then be calculated with the corresponding equations (the three-point positioning method). Because the scan is taken at the ideal photographing point, that is, at the optimal imaging distance, the quality of the point cloud is very high and the calculated position coordinates have high accuracy (millimetre level), which meets the requirements of the robot loading task.
In another embodiment of the present invention, after step S130, the method further includes the following steps:
sending a scanning instruction to enable the point cloud acquisition equipment to perform multiple three-dimensional scanning on the mark blocks to obtain multiple groups of mark block point clouds;
and repeatedly positioning the task target based on the plurality of groups of mark block point clouds.
In this embodiment, after the point cloud acquisition device at the end of the mechanical arm reaches the target imaging position, the robot sends a scanning instruction so that the point cloud acquisition device scans the marker block three-dimensionally several times to obtain multiple sets of marker block point clouds. For each set, the target center coordinate positions of at least three targets are calculated in the manner described above, the task target is located from each set in the manner described above to obtain multiple poses of the task target, and the average or weighted average of these poses is taken as the accurate pose of the task target, which further improves the accuracy of locating the task target. In addition, before the average or weighted average is calculated, the poses of the task target may be screened and discrete (outlier) poses removed.
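A minimal sketch of such repeated-positioning fusion is shown below, assuming each scan yields a task-target centre as an (x, y, z) array; results far from the median are treated as discrete poses and dropped before averaging. The 2 mm rejection threshold is a placeholder, not a value from the patent.

```python
import numpy as np

def fuse_positions(centres, reject_mm: float = 2.0) -> np.ndarray:
    """Average repeated centre estimates, discarding scans far from the median."""
    pts = np.asarray(centres, dtype=float)          # shape (n_scans, 3), metres
    median = np.median(pts, axis=0)
    keep = np.linalg.norm(pts - median, axis=1) < reject_mm / 1000.0
    return pts[keep].mean(axis=0)                   # a weighted mean could be used instead

scans = [np.array([1.000, 0.500, 0.300]),
         np.array([1.001, 0.499, 0.300]),
         np.array([1.020, 0.480, 0.310])]           # third scan is an outlier
print(fuse_positions(scans))                        # ≈ [1.0005, 0.4995, 0.300]
```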
In one embodiment of the present invention, after step S150, it includes:
responding to a next task instruction, and if the next task target is different from the task target, matching a next preset ranging position and a next initial imaging position corresponding to the next task target, wherein the next task target is determined based on the next task instruction, and the next task instruction comprises identification information of the next task target;
controlling the tail end of the mechanical arm to move to the next preset ranging position until the ranging device reaches the next preset ranging position;
transmitting a next ranging instruction so that the ranging device measures the next end plane distance between the tail end of the mechanical arm and the target plane of the next task target, and taking the next end plane distance as the next preliminary distance;
performing difference calculation on the standard distance and the next preliminary distance to obtain a next target offset distance, and determining a next target imaging position according to the next initial imaging position and the next target offset distance;
controlling the tail end of the mechanical arm to move to the next target imaging position until the point cloud acquisition equipment reaches the next target imaging position;
three-dimensional scanning is carried out on the mark block of the next task target through the point cloud acquisition equipment, so that the point cloud of the next mark block is obtained;
And positioning the next task target based on the next mark block point cloud.
In this embodiment, the robot may also perform tasks for different task targets; for example, one robot may be responsible for replacing the cylinders of several ladles. When a task instruction is received, the task target can be determined from the instruction, which contains identification information that distinguishes the different task targets, and each task target is preconfigured with a corresponding preset ranging position and initial imaging position. When the next task target determined from the next task instruction differs from the previous task target (the task target of the previous embodiment is regarded as the previous one), the next preset ranging position and next initial imaging position corresponding to the next task target are matched, the next preliminary distance between the end of the mechanical arm and the next task target is measured, the next target imaging position is calculated accordingly, the marker block of the next task target is scanned three-dimensionally from the next target imaging position to obtain the next marker block point cloud, and the next task target is located from that point cloud so that the robot can execute the next task. The detailed implementation has been described in the previous embodiments and is not repeated here.
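The per-target configuration lookup described above can be as simple as a table keyed by the identification information in the task instruction. The sketch below is illustrative only; the identifiers and poses are assumed values, not data from the patent.

```python
from typing import NamedTuple

class TaskConfig(NamedTuple):
    preset_ranging_pose: tuple    # taught ranging point for this task target
    initial_imaging_pose: tuple   # taught ideal photographing point

# Hypothetical configuration for a robot serving two ladles.
TASK_CONFIGS = {
    "ladle_1_groove": TaskConfig((1.1, 0.2, 0.8), (1.2, 0.35, 0.9)),
    "ladle_2_groove": TaskConfig((1.1, 1.4, 0.8), (1.2, 1.55, 0.9)),
}

def match_task_config(task_instruction: dict) -> TaskConfig:
    """Return the preconfigured poses for the task target named in the instruction."""
    return TASK_CONFIGS[task_instruction["target_id"]]

cfg = match_task_config({"target_id": "ladle_2_groove"})
print(cfg.preset_ranging_pose, cfg.initial_imaging_pose)
```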
Referring to fig. 6, fig. 6 is a block diagram of a robot three-dimensional scanning positioning device according to an exemplary embodiment of the present invention. As shown in fig. 6, the exemplary robot three-dimensional scanning positioning device includes:
the ranging module 610 is configured to obtain a standard distance and an initial imaging position, and measure a distance between a tail end of the mechanical arm and a task target as a preliminary distance; a first processing module 620, configured to perform a difference calculation on the standard distance and the preliminary distance to obtain a target offset distance, and determine a target imaging position according to the initial imaging position and the target offset distance; the scanning module 630 is configured to control the end of the mechanical arm to move toward the target imaging position until the point cloud acquisition device reaches the target imaging position, and perform three-dimensional scanning on the marker block through the point cloud acquisition device to obtain a point cloud of the marker block, where the point cloud acquisition device is disposed at the end of the mechanical arm, and the marker block is disposed on the task target; and a second processing module 640, configured to locate a task target based on the tag block point cloud.
In another embodiment of the present invention, there is also provided a robot three-dimensional scanning and positioning device including:
the mechanical arm is a six-degree-of-freedom mechanical arm; the mark block is arranged on the task target and used for calibrating the position of the task target; the shooting module (point cloud acquisition equipment) is arranged on the free end (tail end) of the mechanical arm and is used for shooting the mark block to position a task target; and the distance measuring module (distance measuring device) is arranged at the free end of the mechanical arm and is used for measuring the distance between the free end of the mechanical arm and the task target.
In one embodiment of the present invention, the device further comprises an execution module (execution mechanism) disposed on the free end of the mechanical arm, for interfacing with the task object.
In one embodiment of the invention, the free end of the mechanical arm is provided with a flange, and the shooting module, the ranging module and the executing module are all arranged on the flange.
In one embodiment of the present invention, the device further includes a computing module, and the photographing module, the ranging module, and the mechanical arm are respectively connected with the computing module (industrial personal computer).
In one embodiment of the invention, the marker block is provided with at least three targets, the center points of which are on different lines.
In one embodiment of the invention, the target is circular in cross-section.
In one embodiment of the invention, the device further comprises a target plane, wherein the target plane is arranged on the task target, and the distance measuring module is used for determining the distance between the free end of the mechanical arm and the task target by measuring the distance between the free end of the mechanical arm and the target plane.
In one embodiment of the invention, the mechanical arm is provided with a protection box, the ranging module and the shooting module are both arranged in the protection box, and the protection box is used for introducing external cold air.
In one embodiment of the present invention, the ranging module comprises one of a laser range finder, an infrared range finder and an ultrasonic range finder.
In one embodiment of the invention, the photographing module includes a three-dimensional laser camera.
Referring to fig. 7 and 8, fig. 7 is a schematic view of a robot three-dimensional scanning and positioning device according to another exemplary embodiment of the present invention, and fig. 8 is a schematic view of the end of the mechanical arm according to an embodiment of the present invention. As shown in fig. 7 and 8, the robot three-dimensional scanning and positioning device comprises a mechanical arm 1, a flange 2, a three-dimensional laser camera 3, a laser range finder 4, a marker block 5, a target plane 6, an executing mechanism 7 and a task target 8. The executing mechanism 7, the three-dimensional laser camera 3 and the laser range finder 4 are fixed on the flange 2 at the end of the mechanical arm 1, the marker block 5 is fixed at the top of the task target 8, the target plane 6 is fixed at the bottom of the task target 8, and three round holes are formed in the marker block 5. The mechanical arm 1 is a six-degree-of-freedom mechanical arm so that its end can move to the preset ranging position and the target imaging position. In addition, the device further comprises an industrial personal computer, omitted in fig. 7, which communicates with the mechanical arm 1, the three-dimensional laser camera 3 and the laser range finder 4, calculates the target imaging position, processes the marker block point cloud, and so on. In this embodiment, the robot grips the cylinder and inserts it into the task target 8, namely the groove on the ladle; depending on the ladle loading position and the rotation error of the ladle turret, the groove may shift in the x, y and z directions and in the attitude angles r_x, r_y and r_z. The embodiment of the invention uses two-stage sampling by the laser range finder and the three-dimensional laser camera to adaptively and accurately identify the specific position and attitude angles of the groove in space, so that the cylinder can be inserted into the groove accurately.
The two-stage information acquisition by the laser range finder and the three-dimensional laser scanning camera effectively improves the adaptivity and accuracy of point cloud modeling and positioning. The laser range finder feeds back the relative distance (the preliminary distance) between the robot end (the end of the mechanical arm) and the task target, and the photographing position of the robot is adaptively adjusted with this distance; after the end of the mechanical arm moves to the ideal photographing position, the three-dimensional laser scanning camera completes the three-dimensional scanning and modeling of the marker block of the task target to obtain the marker block point cloud, from which the accurate position and attitude of the task target in space are calculated. Because the scan is taken at the optimal photographing distance, the accuracy can reach millimetre level. The device therefore copes effectively with identification failures caused by the narrow imaging range of the laser scanning camera when the target position changes greatly in the depth-of-field direction.
It should be noted that, the three-dimensional scanning and positioning device for a robot provided in the foregoing embodiment and the three-dimensional scanning and positioning method for a robot provided in the foregoing embodiment belong to the same concept, and the specific manner in which each module and unit perform operations has been described in detail in the method embodiment, which is not repeated herein. In practical application, the three-dimensional scanning and positioning device for robots provided in the above embodiment may distribute the functions to different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above, which is not limited herein.
The embodiment also provides an electronic device, including: one or more processors; and a storage device for storing one or more programs, which when executed by the one or more processors, cause the electronic device to implement the robot three-dimensional scanning positioning method provided in the above embodiments.
The present embodiment also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to perform the robot three-dimensional scanning positioning method as described above. The computer-readable storage medium may be included in the electronic device described in the above embodiment or may exist alone without being incorporated in the electronic device.
The present embodiments also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the robot three-dimensional scanning positioning method provided in the above-described respective embodiments.
The electronic device provided in this embodiment includes a processor, a memory, a transceiver, and a communication interface, where the memory and the communication interface are connected to the processor and the transceiver and perform communication therebetween, the memory is used to store a computer program, the communication interface is used to perform communication, and the processor and the transceiver are used to run the computer program, so that the electronic device performs each step of the above method.
In this embodiment, the memory may include a random access memory (Random Access Memory, abbreviated as RAM), and may further include a non-volatile memory (non-volatile memory), such as at least one magnetic disk memory.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; but also digital signal processors (Digital Signal Processing, DSP for short), application specific integrated circuits (Application Specific Integrated Circuit, ASIC for short), field-programmable gate arrays (Field-Programmable Gate Array, FPGA for short) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
The computer readable storage medium in this embodiment, as will be appreciated by those of ordinary skill in the art: all or part of the steps for implementing the method embodiments described above may be performed by computer program related hardware. The aforementioned computer program may be stored in a computer readable storage medium. The program, when executed, performs steps including the method embodiments described above; and the aforementioned storage medium includes: various media capable of storing program codes, such as ROM (read only memory), RAM (random access memory), magnetic disk or optical disk.
The above embodiments merely illustrate the principles of the present invention and its effects, and are not intended to limit the invention. Those skilled in the art may modify or vary the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications and changes made by those of ordinary skill in the art without departing from the spirit and technical teaching of the invention shall be covered by the appended claims.

Claims (10)

1. A method for three-dimensional scanning and positioning of a robot, the method comprising:
acquiring a standard distance and an initial imaging position, and measuring the distance between the tail end of the mechanical arm and a task target as a preliminary distance;
Performing difference calculation on the standard distance and the preliminary distance to obtain a target offset distance, and determining a target imaging position according to the initial imaging position and the target offset distance;
controlling the tail end of the mechanical arm to move towards the target imaging position until point cloud acquisition equipment reaches the target imaging position, wherein the point cloud acquisition equipment is arranged at the tail end of the mechanical arm;
three-dimensional scanning is carried out on the mark blocks through the point cloud acquisition equipment to obtain mark block point clouds, and the mark blocks are arranged on the task targets;
and positioning the task target based on the mark block point cloud.
2. The robot three-dimensional scanning positioning method according to claim 1, wherein measuring a distance between the distal end of the robot arm and the task target as the preliminary distance comprises:
responding to a task instruction, controlling the tail end of the mechanical arm to move to a preset ranging position until a ranging device reaches the preset ranging position, wherein the ranging device is arranged at the tail end of the mechanical arm;
and sending a ranging instruction to enable the ranging device to measure the tail end plane distance between the tail end of the mechanical arm and a target plane, wherein the tail end plane distance is used as the preliminary distance, and the target plane is arranged on the task target.
3. The robot three-dimensional scanning positioning method according to claim 2, wherein after positioning the task target based on the mark block point cloud, the method comprises:
responding to a next task instruction, and if a next task target is different from the task target, matching a next preset ranging position and a next initial imaging position corresponding to the next task target, wherein the next task target is determined based on the next task instruction, and the next task instruction comprises identification information of the next task target;
controlling the tail end of the mechanical arm to move towards the next preset ranging position until the ranging device reaches the next preset ranging position;
transmitting a next ranging instruction so that the ranging device measures a next end plane distance between the tail end of the mechanical arm and a target plane of the next task target, and taking the next end plane distance as a next preliminary distance;
performing difference calculation on the standard distance and the next preliminary distance to obtain a next target offset distance, and determining a next target imaging position according to the next initial imaging position and the next target offset distance;
Controlling the tail end of the mechanical arm to move to the next target imaging position until the point cloud acquisition equipment reaches the next target imaging position;
three-dimensional scanning is carried out on the mark block of the next task target through the point cloud acquisition equipment, so that a next mark block point cloud is obtained;
and positioning the next task target based on the next mark block point cloud.
4. The robot three-dimensional scanning positioning method according to any one of claims 1 to 3, wherein positioning the task target based on the marker block point cloud comprises:
segmenting the marker block point cloud according to an image segmentation method to obtain point clouds of at least three targets, wherein the targets are arranged on the marker block, and target centers of the targets are not located on a same straight line;
calculating the point cloud of each target according to a grid search algorithm to obtain a target coordinate position of the target;
and determining three reference targets from the at least three targets, calculating a central coordinate position of the task target based on the target coordinate positions of the three reference targets and preset relative distances between the target centers of the three reference targets and the center of the task target, and calculating a normal vector of the task target based on the target coordinate positions of the three reference targets, so as to obtain a pose of the task target.
5. The robot three-dimensional scanning positioning method according to claim 4, wherein after controlling the end of the mechanical arm to move toward the target imaging position until the point cloud acquisition equipment reaches the target imaging position, the method further comprises:
sending a scanning instruction so that the point cloud acquisition equipment performs three-dimensional scanning on the marker block multiple times to obtain multiple groups of marker block point clouds;
and repeatedly positioning the task target based on the multiple groups of marker block point clouds.
6. The robot three-dimensional scanning positioning method according to claim 4, wherein the target comprises any one of a target hole, a target ball, or a target block.
7. The robot three-dimensional scanning positioning method according to claim 2 or 3, wherein the ranging device is a laser ranging device, the point cloud acquisition equipment is a three-dimensional laser camera, the laser ranging device and the three-dimensional laser camera are arranged in a protective box, and the protective box is configured to introduce external cooling air.
8. A robot three-dimensional scanning positioning device, the device comprising:
a distance measuring module, configured to acquire a standard distance and an initial imaging position, and measure a distance between an end of a mechanical arm and a task target as a preliminary distance;
a first processing module, configured to perform a difference calculation on the standard distance and the preliminary distance to obtain a target offset distance, and determine a target imaging position according to the initial imaging position and the target offset distance;
a scanning module, configured to control the end of the mechanical arm to move toward the target imaging position until point cloud acquisition equipment reaches the target imaging position, and perform three-dimensional scanning on a marker block by the point cloud acquisition equipment to obtain a marker block point cloud, wherein the point cloud acquisition equipment is arranged at the end of the mechanical arm, and the marker block is arranged on the task target;
and a second processing module, configured to position the task target based on the marker block point cloud.
9. An electronic device, the electronic device comprising:
one or more processors;
and storage means for storing one or more programs which, when executed by the one or more processors, cause the electronic device to implement the robot three-dimensional scanning positioning method according to any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to perform the robot three-dimensional scanning positioning method according to any one of claims 1 to 7.
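
Illustrative sketch (not part of the claims or original disclosure): a minimal Python/NumPy realization of the difference calculation and target imaging position determination recited in claim 1. The approach_axis direction along which the initial imaging position is shifted, and the numeric values in the usage line, are assumptions; the claim only states that the target imaging position is determined from the initial imaging position and the target offset distance.

import numpy as np

def target_imaging_position(standard_distance, preliminary_distance,
                            initial_imaging_position, approach_axis):
    # Difference calculation: target offset distance (claim 1).
    target_offset = standard_distance - preliminary_distance
    # Shift the initial imaging position by the offset along an assumed
    # unit axis; the claim does not fix this combination rule.
    axis = np.asarray(approach_axis, dtype=float)
    axis /= np.linalg.norm(axis)
    return np.asarray(initial_imaging_position, dtype=float) + target_offset * axis

# Hypothetical usage: standard distance 0.50 m, measured preliminary
# distance 0.47 m, so the imaging position is shifted by 0.03 m along
# the assumed axis (here the tool z direction).
pose = target_imaging_position(0.50, 0.47, [1.20, 0.30, 0.85], [0.0, 0.0, 1.0])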
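A second illustrative sketch, under stated assumptions, of the pose computation in claim 4: the normal vector of the task target follows from the cross product of two vectors spanned by the three non-collinear reference targets, and the central coordinate position is recovered from the preset relative distances by linearized trilateration. The extra constraint that the task-target center lies in the plane of the three reference targets is an assumption used only to close the 3x3 linear system; it is not stated in the claim.

import numpy as np

def task_target_pose(p1, p2, p3, d1, d2, d3):
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    # Normal vector: cross product of two in-plane vectors between the
    # three (non-collinear) reference target centers.
    normal = np.cross(p2 - p1, p3 - p1)
    normal /= np.linalg.norm(normal)
    # Center from the preset distances d1..d3: subtracting pairs of the
    # sphere equations ||c - pi||^2 = di^2 gives two linear equations;
    # the (assumed) coplanarity constraint normal . c = normal . p1
    # supplies the third row.
    A = np.vstack([2.0 * (p2 - p1), 2.0 * (p3 - p1), normal])
    b = np.array([d1**2 - d2**2 + p2 @ p2 - p1 @ p1,
                  d1**2 - d3**2 + p3 @ p3 - p1 @ p1,
                  normal @ p1])
    center = np.linalg.solve(A, b)  # A is invertible for non-collinear targets
    return center, normal

# Hypothetical usage: three reference targets at the corners of a right
# triangle with 0.2 m legs; equal preset distances of ~0.1414 m place
# the computed center at (0.1, 0.1, 0.0).
c, n = task_target_pose([0.0, 0.0, 0.0], [0.2, 0.0, 0.0], [0.0, 0.2, 0.0],
                        0.1414, 0.1414, 0.1414)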
CN202310247037.3A 2023-03-14 2023-03-14 Robot three-dimensional scanning positioning method and device, electronic equipment and storage medium Pending CN116276997A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310247037.3A CN116276997A (en) 2023-03-14 2023-03-14 Robot three-dimensional scanning positioning method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310247037.3A CN116276997A (en) 2023-03-14 2023-03-14 Robot three-dimensional scanning positioning method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116276997A true CN116276997A (en) 2023-06-23

Family

ID=86830110

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310247037.3A Pending CN116276997A (en) 2023-03-14 2023-03-14 Robot three-dimensional scanning positioning method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116276997A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116500046A (en) * 2023-06-26 2023-07-28 成都中嘉微视科技有限公司 Film type object scanning method, device, system, equipment and storage medium
CN116500046B (en) * 2023-06-26 2023-10-03 成都中嘉微视科技有限公司 Film type object scanning method, device, system, equipment and storage medium
CN116577350A (en) * 2023-07-13 2023-08-11 北京航空航天大学杭州创新研究院 Material surface hair bulb point cloud acquisition device and material surface hair bulb data acquisition method

Similar Documents

Publication Publication Date Title
CN116276997A (en) Robot three-dimensional scanning positioning method and device, electronic equipment and storage medium
JP6426725B2 (en) System and method for tracking the location of a movable target object
CN109633612B (en) Single-line laser radar and camera external reference calibration method without common observation
US11667036B2 (en) Workpiece picking device and workpiece picking method
WO2022061673A1 (en) Calibration method and device for robot
US20080252248A1 (en) Device and Method for Calibrating the Center Point of a Tool Mounted on a Robot by Means of a Camera
JP2011179908A (en) Three-dimensional measurement apparatus, method for processing the same, and program
CN113021358B (en) Method and device for calibrating origin of coordinate system of mechanical arm tool and electronic equipment
JP2018136896A (en) Information processor, system, information processing method, and manufacturing method of article
KR102314092B1 (en) Calibration apparatus and the method for robot
US20220230348A1 (en) Method and apparatus for determining a three-dimensional position and pose of a fiducial marker
CN114310901B (en) Coordinate system calibration method, device, system and medium for robot
WO2018043524A1 (en) Robot system, robot system control device, and robot system control method
CN111609847B (en) Automatic planning method of robot photographing measurement system for thin plate
JP5198078B2 (en) Measuring device and measuring method
CN113781558A (en) Robot vision locating method with decoupled posture and position
US9804252B2 (en) System and method for measuring tracker system accuracy
WO2023060717A1 (en) High-precision positioning method and system for object surface
TWI712473B (en) Method for calibrating coordinate of robot arm
WO2020024150A1 (en) Map processing method, apparatus, and computer readable storage medium
CN219854602U (en) Positioning device
CN114378808A (en) Method and device for multi-camera and line laser auxiliary mechanical arm to track target
Fernandes et al. Angle invariance for distance measurements using a single camera
CN115366089A (en) Robot coordinate system calibration method
CN113733078A (en) Method for interpreting fine control quantity of mechanical arm and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination