CN116690562A - Method and device for determining object position, robot and storage medium


Info

Publication number
CN116690562A
CN116690562A
Authority
CN
China
Prior art keywords
point cloud
cloud information
preset position
scanner
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310645030.7A
Other languages
Chinese (zh)
Inventor
黄体森
魏海永
崔存星
王桢垚
丁有爽
邵天兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mech Mind Robotics Technologies Co Ltd
Original Assignee
Mech Mind Robotics Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mech Mind Robotics Technologies Co Ltd filed Critical Mech Mind Robotics Technologies Co Ltd
Priority to CN202310645030.7A
Publication of CN116690562A
Legal status: Pending

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J9/161: Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00: Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02: Sensing devices
    • B25J19/021: Optical sensing devices
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661: Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages

Abstract

The method is applied to a robot. For each preset position, the part of the object to be measured at that position is photographed by the camera corresponding to the position, yielding first point cloud information for the part. Second point cloud information for the part, in the scanner's field of view, is then determined from the first point cloud information and the transformation relation corresponding to the preset position, where the transformation relation is determined beforehand by matching point clouds of the preset position measured by the scanner and by the camera. Finally, the position information of the object to be measured in the scanner's field of view is determined from the second point cloud information of all parts. In this technical scheme, the transformation relation between an object's position in the camera's view and in the scanner's view is constructed through point cloud matching, so that in actual measurement the point cloud photographed by the camera can be combined with the transformation relation to measure the position information of the workpiece in the scanner's field of view more accurately.

Description

Method and device for determining object position, robot and storage medium
Technical Field
The disclosure relates to the technical field of machine vision, and in particular relates to a method and a device for determining a position of an object, a robot and a storage medium.
Background
As robots are used ever more widely in production and daily life, robot accuracy has become a key concern for users, especially in large-workpiece measurement scenarios.
In the prior art, when measuring a large workpiece, a robot usually carries a camera and takes a photograph at each position of the workpiece in order to acquire the coordinates of the whole workpiece in the robot's field of view.
However, because the robot itself has limited absolute accuracy, this implementation cannot obtain accurate position information for the whole workpiece.
Disclosure of Invention
The disclosure provides a method, a device, a robot and a storage medium for determining the position of an object, so as to solve the technical problem that in the prior art, a large workpiece cannot be accurately measured.
In a first aspect, an embodiment of the present disclosure provides a method for determining a position of an object, which is applied to a robot, and the method includes:
for each preset position, photographing the part of the object to be measured at the preset position with the camera corresponding to the preset position, to obtain first point cloud information corresponding to the part;
determining second point cloud information of the part in the field of view of the scanner according to the first point cloud information and a transformation relation corresponding to the preset position, wherein the transformation relation is determined based on matching point clouds of the preset position measured by the scanner and the camera;
and determining the position information of the object to be measured in the field of view of the scanner according to the second point cloud information corresponding to each part.
In one possible design of the first aspect, before the determining of the second point cloud information of the part in the field of view of the scanner according to the first point cloud information and the transformation relation corresponding to the preset position, the method further includes:
scanning a calibration workpiece through the scanner to obtain third point cloud information of the calibration workpiece, wherein the calibration workpiece covers all preset positions;
for each preset position, photographing the preset position with the corresponding camera to obtain fourth point cloud information of the calibration workpiece at the preset position;
and determining a transformation relation corresponding to each preset position according to the third point cloud information and the fourth point cloud information of each preset position.
In another possible design of the first aspect, the determining, according to the third point cloud information and the fourth point cloud information of each preset position, a transformation relationship corresponding to each preset position includes:
cutting the third point cloud information to obtain third sub point cloud information corresponding to each preset position;
and determining a transformation relation corresponding to each preset position according to the third sub-point cloud information and fourth point cloud information of the preset position.
In still another possible design of the first aspect, the determining, according to the third sub-point cloud information and the fourth point cloud information of the preset position, a transformation relationship corresponding to the preset position includes:
3-dimensional matching is carried out on the fourth point cloud information according to the third sub point cloud information so as to obtain transformation information from the scanner to the camera, and inversion is carried out on the transformation information so as to determine a transformation relation corresponding to the preset position;
or, alternatively,
and 3-dimensional matching is carried out on the third sub-point cloud information according to the fourth point cloud information so as to determine a transformation relation corresponding to the preset position.
In still another possible design of the first aspect, the scanning, by the scanner, the calibration workpiece to obtain third point cloud information of the calibration workpiece includes:
scanning a calibration workpiece through the scanner to obtain a stereolithography (STL) model of the calibration workpiece;
and taking the point cloud format data corresponding to the STL model as the third point cloud information.
In yet another possible design of the first aspect, the determining, according to the second point cloud information corresponding to each part, the position information of the object to be measured in the field of view of the scanner includes:
and loading the second point cloud information to each preset position in the field of view of the scanner so as to obtain the position information of the object to be measured in the field of view of the scanner.
In yet another possible design of the first aspect, the method further includes:
controlling the robot to perform preset operation on the object to be measured according to the position information, wherein the preset operation comprises at least one of the following steps: grasping and pushing.
In a second aspect, an embodiment of the present disclosure provides an apparatus for determining a position of an object, which is applied to a robot, the apparatus including:
the first determining module is used for photographing, for each preset position, the part of the object to be measured at the preset position with the camera corresponding to the preset position, to obtain first point cloud information corresponding to the part;
the second determining module is used for determining second point cloud information of the part in the field of view of the scanner according to the first point cloud information and the transformation relation corresponding to the preset position, the transformation relation being determined based on matching point clouds of the preset position measured by the scanner and the camera;
and the third determining module is used for determining the position information of the object to be measured under the field of view of the scanner according to the second point cloud information corresponding to each part.
In one possible design of the second aspect, before the second point cloud information of the part in the field of view of the scanner is determined according to the first point cloud information and the transformation relation corresponding to the preset position, a fourth determining module is configured to:
scanning a calibration workpiece through the scanner to obtain third point cloud information of the calibration workpiece, wherein the calibration workpiece covers all preset positions;
for each preset position, photographing the preset position with the corresponding camera to obtain fourth point cloud information of the calibration workpiece at the preset position;
and determining a transformation relation corresponding to each preset position according to the third point cloud information and the fourth point cloud information of each preset position.
In another possible design of the second aspect, the fourth determining module, when determining the transformation relation corresponding to each preset position according to the third point cloud information and the fourth point cloud information of each preset position, is specifically configured to:
cutting the third point cloud information to obtain third sub point cloud information corresponding to each preset position;
and determining a transformation relation corresponding to each preset position according to the third sub-point cloud information and fourth point cloud information of the preset position.
In still another possible design of the second aspect, the fourth determining module determines, according to the third sub-point cloud information and fourth point cloud information of the preset position, a transformation relationship corresponding to the preset position, and specifically is configured to:
3-dimensional matching is carried out on the fourth point cloud information according to the third sub point cloud information so as to obtain transformation information from the scanner to the camera, and inversion is carried out on the transformation information so as to determine a transformation relation corresponding to the preset position;
or, alternatively,
and 3-dimensional matching is carried out on the third sub-point cloud information according to the fourth point cloud information so as to determine a transformation relation corresponding to the preset position.
In yet another possible design of the second aspect, the fourth determining module scans the calibration workpiece through the scanner to obtain third point cloud information of the calibration workpiece, specifically for:
scanning a calibration workpiece through the scanner to obtain a stereolithography (STL) model of the calibration workpiece;
and taking the point cloud format data corresponding to the STL model as the third point cloud information.
In a further possible design of the second aspect, the third determining module is specifically configured to:
and loading the second point cloud information to each preset position in the field of view of the scanner so as to obtain the position information of the object to be measured in the field of view of the scanner.
In a further possible design of the second aspect, the control module is configured to:
controlling the robot to perform preset operation on the object to be measured according to the position information, wherein the preset operation comprises at least one of the following steps: grasping and pushing.
In a third aspect, the present disclosure provides a robot comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the method as described in the first aspect or any of the ways described above.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, are adapted to carry out the method of the first aspect or any of the ways described above.
In a fifth aspect, the present disclosure provides a computer program product comprising a computer program which, when executed by a processor, implements a method as described in the first aspect or any of the ways described above.
The method is applied to a robot. For each preset position, the part of the object to be measured at that position is photographed by the camera corresponding to the position, yielding first point cloud information for the part; second point cloud information for the part, in the scanner's field of view, is determined from the first point cloud information and the transformation relation corresponding to the preset position, the transformation relation having been determined by matching point clouds of the preset position measured by the scanner and the camera; and the position information of the object to be measured in the scanner's field of view is then determined from the second point cloud information of all parts. In this technical scheme, the transformation relation between an object's position in the camera's view and in the scanner's view is constructed through point cloud matching, so that in actual measurement the point cloud photographed by the camera is combined with the transformation relation to determine the position information of the workpiece in the scanner's field of view, thereby avoiding the inaccurate workpiece measurement caused in the prior art by absolute-accuracy problems such as robot temperature drift.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating a method for determining a position of an object according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of first point cloud information acquisition provided in an embodiment of the present disclosure;
fig. 3 is a schematic diagram of second point cloud information conversion provided in an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of positional information under a field of view of a scanner provided by an embodiment of the present disclosure;
fig. 5 is a second flowchart of a method for determining a position of an object according to an embodiment of the present disclosure;
FIG. 6 is a schematic illustration of the determination of transformation relationships provided by the disclosed embodiments;
fig. 7 is a schematic structural diagram of an apparatus for determining a position of an object according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a robot according to an embodiment of the present disclosure.
Specific embodiments of the present disclosure have been shown by way of the above drawings and will be described in more detail below. These drawings and the written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the disclosed concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are some embodiments of the present disclosure, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure are intended to be within the scope of this disclosure.
Before introducing embodiments of the present disclosure, an application background of the embodiments of the present disclosure is first explained:
As robots are used ever more widely in production and daily life, robot accuracy has become a key concern for users, especially in large-workpiece measurement scenarios.
In the prior art, when measuring a large workpiece, a robot usually carries a camera and takes a photograph at each position of the workpiece; the coordinates acquired at each position are then collected in the robot's field of view to obtain the position information of the large workpiece in that field of view.
The problem in the prior art that the embodiments of the present disclosure aim to solve is the following: because the robot has limited absolute accuracy, acquiring the position information of a large workpiece position by position with a robot-carried camera and then placing it in the robot's field of view cannot yield accurate position information for the whole workpiece, so determining the position information of a large workpiece more accurately has become a technical problem to be solved urgently.
In view of this, the inventors of the present disclosure conceived the following: if the whole calibration workpiece is scanned to obtain its total point cloud, and the parts of the calibration workpiece are then photographed by a plurality of cameras to obtain the sub point cloud corresponding to each part, matching the two yields the transformation relation from the scanner to each camera, and inverting it yields the relation from each camera to the scanner; thereafter, when the point cloud of a workpiece to be measured is acquired by the plurality of cameras, the pose of the workpiece in the scanner's coordinate system can be obtained, avoiding inaccurate workpiece measurement caused by problems such as robot temperature drift.
The technical scheme of the present disclosure is described in detail below through specific embodiments. It should be noted that the following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
It should be noted that the application fields of the method and device for determining an object position, the robot and the storage medium provided by the present disclosure are not limited.
The execution body of the present disclosure is a robot, and specifically may be a control unit in the robot, a controller that controls the robot, or the like.
Fig. 1 is a flowchart of a method for determining a position of an object according to an embodiment of the present disclosure, as shown in fig. 1, the method for determining a position of an object may include the following steps:
Step 11, for each preset position, photographing the part of the object to be measured at the preset position with the camera corresponding to the preset position, to obtain first point cloud information corresponding to the part;
in this step, the object to be measured may be a large workpiece, and one camera cannot perform complete measurement on the object to be measured, at this time, cameras are respectively disposed at a plurality of preset positions that can cover the object to be measured, so that, for each preset position, a portion corresponding to the object to be measured at the preset position can be photographed according to a corresponding camera, and point cloud information corresponding to the portion, that is, first point cloud information, is obtained.
In a possible implementation, fig. 2 is a schematic diagram of first point cloud information acquisition provided by an embodiment of the present disclosure. As shown in fig. 2, cameras (for example, 21, 22, 23, 24) are fixed at the respective preset positions (for example, A, B, C, D), and each camera acquires the first point cloud information of the part at its preset position.
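As a minimal sketch of this acquisition step, the snippet below loops over one fixed camera per preset position and grabs a point cloud from each. The `Camera` class and its `capture_point_cloud()` method are hypothetical stand-ins for a real camera SDK (here they just return random points); only the per-position bookkeeping reflects the step described above.

```python
import numpy as np

class Camera:
    """Hypothetical stand-in for a real camera SDK handle."""
    def __init__(self, cam_id: int):
        self.cam_id = cam_id

    def capture_point_cloud(self) -> np.ndarray:
        # A real implementation would trigger the 3D camera and return
        # measured points; random data stands in for a captured cloud.
        return np.random.rand(500, 3)

# One fixed camera per preset position, as in fig. 2 (A..D -> 21..24).
cameras = {"A": Camera(21), "B": Camera(22), "C": Camera(23), "D": Camera(24)}

# First point cloud information: per-position clouds, each expressed in
# the coordinate frame of the camera that took it.
first_point_clouds = {pos: cam.capture_point_cloud() for pos, cam in cameras.items()}
```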
And step 12, determining second point cloud information of the part in the field of view of the scanner according to the first point cloud information and the transformation relation corresponding to the preset position.
The transformation relation is determined based on point cloud matching obtained by measuring a preset position by a scanner and a camera;
In this step, for the first point cloud information measured at each preset position, a transformation relation is predetermined by matching the point clouds of the preset position measured by the scanner and by the camera (its determination is described in the following embodiments and is not detailed here). According to this transformation relation, the first point cloud information in the camera's field of view can be converted into second point cloud information in the scanner's field of view.
In a possible implementation, fig. 3 is a schematic diagram of second point cloud information conversion provided in an embodiment of the present disclosure. As shown in fig. 3, taking preset position B in the above example as an example, the camera 22 photographs the part at preset position B and records it as first point cloud information in the frame of the camera 22, which is then converted into second point cloud information in the scanner's field of view according to the transformation relation corresponding to preset position B.
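Converting first point cloud information into the scanner's field of view is a single rigid-body transform per preset position. Below is a minimal numpy sketch, assuming the transformation relation is stored as a 4x4 homogeneous matrix; the example matrix for position B is purely illustrative.

```python
import numpy as np

def to_scanner_frame(points_cam: np.ndarray, T_cam_to_scanner: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform to an (N, 3) point cloud.

    points_cam: first point cloud information, in the camera's frame.
    T_cam_to_scanner: the calibrated transformation relation for this
    preset position (camera frame -> scanner frame).
    Returns the second point cloud information, in the scanner's frame.
    """
    # Promote to homogeneous coordinates, transform, then drop the 1s.
    homo = np.hstack([points_cam, np.ones((points_cam.shape[0], 1))])
    return (T_cam_to_scanner @ homo.T).T[:, :3]

# Placeholder transform for preset position B: identity rotation and a
# 100 mm translation along x (values are illustrative, not calibrated).
T_B = np.eye(4)
T_B[0, 3] = 100.0
points_B_scanner = to_scanner_frame(np.random.rand(500, 3), T_B)
```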
And step 13, determining the position information of the object to be measured in the field of view of the scanner according to the second point cloud information corresponding to each part.
In this step, after the second point cloud information corresponding to each part is obtained, the position information of the object to be measured in the scanner's field of view is obtained from it.
In a possible implementation, fig. 4 is a schematic diagram of position information in the scanner's field of view provided in an embodiment of the present disclosure. As shown in fig. 4, taking fig. 2 above as an example, the first point cloud information determined by each camera is converted in turn into second point cloud information, and each piece of second point cloud information is then filled into the scanner's field of view, yielding the position information of the object to be measured in that field of view.
Alternatively, this step 13 may be: for each preset position in the scanner's field of view, loading the second point cloud information at that preset position, so as to obtain the position information of the object to be measured in the scanner's field of view.
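A sketch of step 13 under the assumption that "loading" the second point clouds amounts to concatenating the per-position clouds, which are already expressed in the scanner frame, into one cloud; dummy data stands in for the outputs of step 12.

```python
import numpy as np

# Each preset position contributes its second point cloud, already in the
# scanner frame (random data here; in practice these come from step 12).
second_point_clouds = {pos: np.random.rand(500, 3) for pos in "ABCD"}

# Loading every part into the scanner's field of view: concatenate the
# per-position clouds into one cloud covering the whole object.
object_in_scanner_view = np.vstack(list(second_point_clouds.values()))

# A crude position summary; a real system might instead fit a pose or a
# bounding box to the combined cloud.
centroid = object_in_scanner_view.mean(axis=0)
```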
In actual use, the object to be measured is operated on according to the positional relation between the robot and the tracker and the determined position information of the object to be measured in the scanner's field of view.
That is, after the step 13, the method of determining the position of the object may further include: controlling the robot to perform preset operation on the object to be measured according to the position information, wherein the preset operation comprises at least one of the following steps: grasping and pushing.
It should be understood that: the handling of the object to be measured is not limited to gripping, pushing, but may also include possible implementations not listed.
The method for determining an object position provided in this embodiment is applied to a robot. For each preset position, the part of the object to be measured at that position is photographed by the camera corresponding to the position, yielding first point cloud information for the part; second point cloud information for the part in the scanner's field of view is determined from the first point cloud information and the transformation relation corresponding to the preset position, the transformation relation having been determined by matching point clouds of the preset position measured by the scanner and the camera; and the position information of the object to be measured in the scanner's field of view is then determined from the second point cloud information of all parts. In this technical scheme, the transformation relation between an object's position in the camera's view and in the scanner's view is constructed through point cloud matching, so that in actual measurement the point cloud photographed by the camera is combined with the transformation relation to determine the position information of the workpiece in the scanner's field of view, avoiding the inaccurate workpiece measurement caused in the prior art by absolute-accuracy problems such as robot temperature drift.
On the basis of the above embodiment, fig. 5 is a second flowchart of the method for determining the position of the object according to the embodiment of the present disclosure, as shown in fig. 5, before the step 12, the method for determining the position of the object may further include the following steps:
fig. 6 is a schematic diagram illustrating determination of a transformation relationship according to the embodiment of the disclosure, as shown in fig. 6, including: the calibration piece 61 and any one portion 611 of the calibration piece will be described in the following embodiments.
Step 51, scanning the calibration workpiece through the scanner to obtain third point cloud information of the calibration workpiece, wherein the calibration workpiece covers all preset positions.
In this step, the calibration workpiece is scanned by the scanner, yielding the point cloud information of the whole calibration workpiece, namely the third point cloud information.
Optionally, the step 51 may include: scanning the calibration workpiece by a scanner to obtain a Stereolithography (STL) model of the calibration workpiece; and taking the point cloud format data corresponding to the STL model as third point cloud information.
In one possible implementation, the scanner scans the calibration workpiece 61, obtains an STL model of the calibration workpiece 61 after scanning, and exports it in the point cloud format used by the vision software (i.e., the PLY format), which serves as the third point cloud information.
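As one possible way to perform this conversion, the sketch below uses the Open3D library to load the scanned STL mesh, sample its surface into a point cloud, and export it in PLY format. The file names and sample count are illustrative assumptions, and the patent does not name a specific library.

```python
import open3d as o3d

# Load the STL mesh produced by the scanner (file name is a placeholder).
mesh = o3d.io.read_triangle_mesh("calibration_workpiece.stl")

# Sample the mesh surface to obtain a point cloud; the sample count is an
# illustrative choice, not something the patent specifies.
pcd = mesh.sample_points_uniformly(number_of_points=100_000)

# Export in the PLY point cloud format used by the vision software:
# this file is the third point cloud information.
o3d.io.write_point_cloud("calibration_workpiece.ply", pcd)
```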
Step 52, for each preset position, photographing the preset position with the corresponding camera to obtain fourth point cloud information of the calibration workpiece at the preset position;
In this step, the camera corresponding to each preset position photographs that preset position to acquire the scene point cloud corresponding to the position, that is, the fourth point cloud information.
In one possible implementation, taking preset position A as an example, the camera 21 photographs preset position A to obtain the fourth point cloud information 211 corresponding to preset position A.
And step 53, determining a transformation relation corresponding to each preset position according to the third point cloud information and the fourth point cloud information of each preset position.
In this step, the third point cloud information is matched with each piece of fourth point cloud information to obtain the transformation relation between the camera's and the scanner's point clouds at each preset position.
Optionally, the step 53 may include:
Step 1, clipping the third point cloud information to obtain third sub point cloud information corresponding to each preset position.
The third point cloud information is cut according to the boundary corresponding to each preset position to obtain the third sub point cloud information for that position, for example, the third sub point cloud information 631 corresponding to preset position A.
Step 2, for each preset position, determining the transformation relation corresponding to the preset position according to the third sub point cloud information and the fourth point cloud information of the preset position.
For each preset position, the third sub point cloud information is used, for example, as a template point cloud and matched against the fourth point cloud information 211 acquired by the camera 21, so as to determine the transformation relation of the point clouds between the camera and the scanner at that preset position.
This step can be achieved in either of two ways:
and 3-dimensional matching is carried out on fourth point cloud information according to the third sub point cloud information to obtain conversion information from the scanner to the camera, and inversion is carried out on the conversion information to determine a conversion relation corresponding to the preset position.
Specifically, the plurality of cameras each acquire the scene point cloud corresponding to their preset position. The third sub point cloud information acquired in the vision software is used as the template of a 3D matching algorithm to match the scene point cloud acquired by each camera, which yields the transformation relation from the scanner coordinate system to each camera coordinate system. Since what is actually required is the transformation from each camera to the scanner, the calculated result is inverted to obtain the transformation relation from each camera coordinate system to the scanner coordinate system.
Mode 2: 3-dimensional matching is performed on the third sub point cloud information according to the fourth point cloud information to directly determine the transformation relation corresponding to the preset position.
Specifically, the plurality of cameras each acquire the scene point cloud corresponding to their preset position, and each scene point cloud is used to match the corresponding third sub point cloud information acquired in the vision software, yielding the transformation relation from the camera coordinate system to the scanner coordinate system without an inversion step.
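The sketch below illustrates mode 1 with Open3D's point-to-point ICP standing in for the 3D matching algorithm, since the patent does not name a specific matcher; the file names, distance threshold, and identity initialization are assumptions. The inversion at the end turns the scanner-to-camera match result into the camera-to-scanner transformation relation that the method actually needs.

```python
import numpy as np
import open3d as o3d

# Template: third sub point cloud for this preset position (scanner frame).
# Scene: fourth point cloud photographed by the camera (camera frame).
# File names are placeholders for however these clouds are stored.
template = o3d.io.read_point_cloud("sub_cloud_A.ply")
scene = o3d.io.read_point_cloud("camera_A_scene.ply")

# 3D matching: align the template onto the scene. Point-to-point ICP is a
# stand-in for the matcher in the vision software; 5.0 is an illustrative
# correspondence distance and np.eye(4) an identity initial guess.
result = o3d.pipelines.registration.registration_icp(
    template, scene, 5.0, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
T_scanner_to_camera = result.transformation  # scanner frame -> camera frame

# Mode 1: the method needs camera -> scanner, so invert the match result.
T_camera_to_scanner = np.linalg.inv(T_scanner_to_camera)
```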
Further, after obtaining the transformation relations at each preset position, the obtained transformation relations may be stored in an internal cache of the robot or output to other caches for subsequent use.
It should be understood that the 3D matching needs sufficient constraints, namely on translation and rotation along the x, y and z directions of the coordinate system, and these constraints should be ensured as far as possible to improve the accuracy of the 3D matching.
That is, in one example, when the fourth point cloud information 211 is matched with the third sub point cloud information 631, constraints in the x, y and z directions are applied to the fourth point cloud information 211, so that the distances and positional relations among its points remain unchanged while it is translated and rotated along x, y and z during matching, which improves the accuracy of the match against the third sub point cloud information 631.
In the method for determining an object position provided above, the calibration workpiece, which covers all preset positions, is scanned by the scanner to obtain its third point cloud information; the camera corresponding to each preset position photographs that position to obtain the fourth point cloud information of the calibration workpiece there; and the transformation relation corresponding to each preset position is determined from the third point cloud information and the fourth point cloud information of that position. In this technical scheme, the point cloud information of the calibration workpiece is determined by the camera and the scanner respectively, yielding the transformation relation between the calibration workpiece's positions in the camera's and the scanner's fields of view and providing the basis for subsequent measurement of actual workpieces.
The following are device embodiments of the present disclosure that may be used to perform method embodiments of the present disclosure. For details not disclosed in the embodiments of the apparatus of the present disclosure, please refer to the embodiments of the method of the present disclosure.
Fig. 7 is a schematic structural diagram of an apparatus for determining a position of an object according to an embodiment of the present disclosure. As shown in fig. 7, the apparatus for determining the position of an object is applied to a robot, and includes:
a first determining module 71, configured to photograph, for each preset position, the part of the object to be measured at the preset position with the camera corresponding to the preset position, to obtain first point cloud information corresponding to the part;
a second determining module 72, configured to determine second point cloud information of the part in the field of view of the scanner according to the first point cloud information and the transformation relation corresponding to the preset position, where the transformation relation is determined based on matching point clouds of the preset position measured by the scanner and the camera;
and the third determining module 73 is configured to determine, according to the second point cloud information corresponding to each part, position information of the object to be measured in the field of view of the scanner.
In one possible design of the embodiment of the disclosure, before the second point cloud information of the part in the field of view of the scanner is determined according to the first point cloud information and the transformation relation corresponding to the preset position, a fourth determining module is configured to:
scanning the calibration workpiece through a scanner to obtain third point cloud information of the calibration workpiece, wherein the calibration workpiece covers all preset positions;
for each preset position, photographing the preset position with the corresponding camera to obtain fourth point cloud information of the calibration workpiece at the preset position;
and determining a transformation relation corresponding to each preset position according to the third point cloud information and the fourth point cloud information of each preset position.
In another possible design of the embodiment of the present disclosure, the fourth determining module determines, according to the third point cloud information and the fourth point cloud information of each preset position, a transformation relationship corresponding to each preset position, and is specifically configured to:
cutting the third point cloud information to obtain third sub point cloud information corresponding to each preset position;
and determining, for each preset position, a transformation relation corresponding to the preset position according to the third sub point cloud information and the fourth point cloud information of the preset position.
In still another possible design of the embodiment of the present disclosure, the fourth determining module determines, according to the third sub-point cloud information and fourth point cloud information of the preset position, a transformation relationship corresponding to the preset position, and is specifically configured to:
3-dimensional matching is carried out on the fourth point cloud information according to the third sub point cloud information so as to obtain transformation information from the scanner to the camera, and inversion is carried out on the transformation information so as to determine a transformation relation corresponding to a preset position;
or, alternatively,
and 3-dimensional matching is carried out on the third sub-point cloud information according to the fourth point cloud information so as to determine a transformation relation corresponding to the preset position.
In still another possible design of the embodiment of the present disclosure, the fourth determining module scans the calibration workpiece through a scanner to obtain third point cloud information of the calibration workpiece, and is specifically configured to:
scanning the calibration workpiece through a scanner to obtain a stereolithography (STL) model of the calibration workpiece;
and taking the point cloud format data corresponding to the STL model as third point cloud information.
In yet another possible design of the embodiment of the disclosure, the third determining module 73 is specifically configured to:
and loading, for each preset position in the scanner's field of view, the second point cloud information at that preset position, so as to obtain the position information of the object to be measured in the scanner's field of view.
In yet another possible design of the embodiments of the present disclosure, the control module is configured to:
controlling the robot to perform preset operation on the object to be measured according to the position information, wherein the preset operation comprises at least one of the following steps: grasping and pushing.
The device for determining the position of the object provided in the embodiment of the present disclosure may be used to execute the method for determining the position of the object in any of the embodiments described above, and its implementation principle and technical effects are similar, and will not be described in detail herein.
It should be noted that the division of the above apparatus into modules is merely a division by logical function; in implementation they may be fully or partially integrated into one physical entity or physically separated. These modules may all be implemented as software invoked by a processing element, or all in hardware; alternatively, some modules may be implemented as software invoked by a processing element and others in hardware. In addition, all or part of the modules may be integrated together or implemented independently. The processing element described herein may be an integrated circuit with signal processing capability. In implementation, each step of the above method, or each module above, may be completed by an integrated logic circuit of hardware in the processor element or by instructions in the form of software.
Fig. 8 is a schematic structural diagram of a robot according to an embodiment of the disclosure, as shown in fig. 8, the robot may include: a processor 81, a memory 82 and computer program instructions stored on the memory 82 and executable on the processor 81, which processor 81 implements the method provided by any of the preceding embodiments when executed.
Alternatively, the above devices of the robot may be connected by a system bus.
The memory 82 may be a separate storage unit or a storage unit integrated in the processor 81. There may be one or more processors 81.
It should be appreciated that the processor 81 may be a central processing unit (CPU), another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the present disclosure may be embodied directly as being executed by a hardware processor, or as being executed by a combination of hardware and software modules in the processor.
The system bus may be a peripheral component interconnect (PCI) bus or an extended industry standard architecture (EISA) bus, among others. The system bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bold line is shown in the figures, but this does not mean that there is only one bus or one type of bus. The memory 82 may include random access memory (RAM) and may also include non-volatile memory (NVM), such as at least one disk memory.
All or part of the steps for implementing the above method embodiments may be performed by hardware associated with program instructions. The foregoing program may be stored in a readable memory. When executed, the program performs the steps of the above method embodiments; and the aforementioned memory (storage medium) includes: read-only memory (ROM), RAM, flash memory, hard disk, solid state disk, magnetic tape, floppy disk, optical disk, and any combination thereof.
The robot provided in the embodiments of the present disclosure may be used to execute the method for determining the position of the object provided in any of the embodiments of the method, and its implementation principle and technical effects are similar, and are not described herein again.
Embodiments of the present disclosure provide a computer-readable storage medium having stored therein computer instructions that, when executed on a computer, cause the computer to perform the above-described method of determining the position of an object.
The computer readable storage medium described above may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as static random access memory, electrically erasable programmable read-only memory, magnetic memory, flash memory, magnetic disk or optical disk. A readable storage medium can be any available medium that can be accessed by a general purpose or special purpose computer.
In the alternative, a readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. In the alternative, the readable storage medium may be integral to the processor. The processor and the readable storage medium may reside in an application specific integrated circuit (Application Specific Integrated Circuits, ASIC). The processor and the readable storage medium may reside as discrete components in a device.
The disclosed embodiments also provide a computer program product comprising a computer program stored in a computer-readable storage medium; at least one processor can read the computer program from the storage medium and execute it to implement the above method for determining the position of an object.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method of determining a position of an object, applied to a robot, the method comprising:
for each preset position, photographing the part of the object to be measured at the preset position with the camera corresponding to the preset position, to obtain first point cloud information corresponding to the part;
determining second point cloud information of the part in the field of view of the scanner according to the first point cloud information and a transformation relation corresponding to the preset position, wherein the transformation relation is determined based on matching point clouds of the preset position measured by the scanner and the camera;
and determining the position information of the object to be measured in the field of view of the scanner according to the second point cloud information corresponding to each part.
2. The method of claim 1, wherein before the determining of the second point cloud information of the part in the scanner field of view according to the first point cloud information and the transformation relation corresponding to the preset position, the method further comprises:
scanning a calibration workpiece through the scanner to obtain third point cloud information of the calibration workpiece, wherein the calibration workpiece covers all preset positions;
for each preset position, photographing the preset position with the corresponding camera to obtain fourth point cloud information of the calibration workpiece at the preset position;
and determining a transformation relation corresponding to each preset position according to the third point cloud information and the fourth point cloud information of each preset position.
3. The method according to claim 2, wherein determining the transformation relationship corresponding to each preset position according to the third point cloud information and fourth point cloud information of each preset position comprises:
cutting the third point cloud information to obtain third sub point cloud information corresponding to each preset position;
and determining a transformation relation corresponding to each preset position according to the third sub-point cloud information and fourth point cloud information of the preset position.
4. The method of claim 3, wherein the determining the transformation relationship corresponding to the preset position according to the third sub-point cloud information and the fourth point cloud information of the preset position comprises:
3-dimensional matching is carried out on the fourth point cloud information according to the third sub point cloud information so as to obtain transformation information from the scanner to the camera, and inversion is carried out on the transformation information so as to determine a transformation relation corresponding to the preset position;
or, alternatively,
and 3-dimensional matching is carried out on the third sub-point cloud information according to the fourth point cloud information so as to determine a transformation relation corresponding to the preset position.
5. The method of any one of claims 2-4, wherein scanning, by the scanner, the calibration artifact to obtain third point cloud information for the calibration artifact comprises:
scanning a calibration workpiece through the scanner to obtain a stereolithography (STL) model of the calibration workpiece;
and taking the point cloud format data corresponding to the STL model as the third point cloud information.
6. The method according to any one of claims 1 to 4, wherein determining the position information of the object to be measured in the field of view of the scanner according to the second point cloud information corresponding to each part includes:
and loading the second point cloud information to each preset position in the field of view of the scanner so as to obtain the position information of the object to be measured in the field of view of the scanner.
7. The method according to claim 1, wherein the method further comprises:
controlling the robot to perform preset operation on the object to be measured according to the position information, wherein the preset operation comprises at least one of the following steps: grasping and pushing.
8. An apparatus for determining a position of an object, applied to a robot, comprising:
the first determining module is used for photographing, for each preset position, the part of the object to be measured at the preset position with the camera corresponding to the preset position, to obtain first point cloud information corresponding to the part;
the second determining module is used for determining second point cloud information of the part in the field of view of the scanner according to the first point cloud information and the transformation relation corresponding to the preset position, the transformation relation being determined based on matching point clouds of the preset position measured by the scanner and the camera;
and the third determining module is used for determining the position information of the object to be measured under the field of view of the scanner according to the second point cloud information corresponding to each part.
9. A robot, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the method of any one of the preceding claims 1 to 7.
10. A computer readable storage medium having stored therein computer executable instructions which when executed by a processor are adapted to carry out the method of any one of the preceding claims 1 to 7.
CN202310645030.7A 2023-06-01 2023-06-01 Method and device for determining object position, robot and storage medium Pending CN116690562A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310645030.7A CN116690562A (en) 2023-06-01 2023-06-01 Method and device for determining object position, robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310645030.7A CN116690562A (en) 2023-06-01 2023-06-01 Method and device for determining object position, robot and storage medium

Publications (1)

Publication Number Publication Date
CN116690562A 2023-09-05

Family

ID=87830592

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310645030.7A Pending CN116690562A (en) 2023-06-01 2023-06-01 Method and device for determining object position, robot and storage medium

Country Status (1)

Country Link
CN (1) CN116690562A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination