CN116214514A - Method and device for determining object position, robot and storage medium - Google Patents
Method and device for determining object position, robot and storage medium
- Publication number
- CN116214514A (Application CN202310263886.8A)
- Authority
- CN
- China
- Prior art keywords
- position information
- grabbed
- robot
- reference coordinate
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1661—Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
- B25J19/02—Sensing devices
- B25J19/04—Viewing devices
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
- B25J9/161—Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/002—Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Fuzzy Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Manipulator (AREA)
Abstract
The method is applied to a robot: first position information of an object to be grabbed relative to a preset reference coordinate is acquired, and third position information of the object relative to a base coordinate is then obtained according to the first position information and second position information between the base coordinate and the reference coordinate. The reference coordinate is preset in physical space for the camera of the robot to photograph, the position information between the reference coordinate and the base coordinate is fixed, and the relative position of the base coordinate and the robot is fixed; the object to be grabbed is then grabbed according to the third position information. In this technical scheme, the preset reference coordinate serves as an intermediate conversion point between the base coordinate and the object to be grabbed: with the position information between the base coordinate and the reference coordinate known, the position information of the object relative to the base coordinate can be determined more accurately, avoiding the position errors caused in the prior art by camera intrinsic parameter problems such as time-varying drift.
Description
Technical Field
The disclosure relates to the technical field of machine vision, and in particular relates to a method and a device for determining a position of an object, a robot and a storage medium.
Background
With the continuous development of the technical field of machine vision, technicians and users place ever higher demands on the accuracy with which the position of an object is determined in machine vision.
In the prior art, object position recognition is mainly based on camera photographing: a reference position is preset, the camera photographs the object to obtain the object's position relative to the camera, and the position information of the object relative to the reference position is then determined according to the positions of the camera and the reference position.
However, in the above implementation, when the camera's intrinsic parameters drift over time, the position information of the object relative to the reference position becomes inaccurate, which in turn makes grabbing of the object inaccurate.
Disclosure of Invention
The disclosure provides a method and device for determining the position of an object, a robot, and a storage medium, to solve problems in the prior art, such as inaccurate object grabbing, caused by camera intrinsic parameter issues.
In a first aspect, an embodiment of the present disclosure provides a method for determining a position of an object, which is applied to a robot, and the method includes:
acquiring first position information of an object to be grabbed relative to a preset reference coordinate;
acquiring third position information of the object to be grabbed relative to a base coordinate according to the first position information and second position information between the base coordinate and the reference coordinate;
grabbing the object to be grabbed according to the third position information;
the reference coordinates are coordinates fixed with position information between the reference coordinates and preset for camera shooting of the robot, and the relative positions of the reference coordinates and the robot are fixed.
In one possible design of the first aspect, the reference position corresponding to the reference coordinate and the object to be grabbed are located within a current field of view of a camera of the robot;
the obtaining the first position information of the object to be grabbed relative to the preset reference coordinate includes:
acquiring a first position of the object to be grabbed and a reference position corresponding to the reference coordinate;
and converting the first position into the reference coordinate system based on the reference position to obtain the first position information.
In another possible design of the first aspect, the reference position corresponding to the reference coordinate and the object to be grabbed are respectively located in different visual fields of the camera of the robot before and after movement;
the obtaining the first position information of the object to be grabbed relative to the preset reference coordinate includes:
acquiring a second position of the object to be grabbed relative to the camera after the movement;

converting the second position, according to fourth position information between the camera positions before and after the movement, into a third position relative to the camera before the movement;

and converting the third position into the reference coordinate system based on the reference position to obtain the first position information.
In still another possible design of the first aspect, the calculation formula for obtaining the third position information of the object to be grabbed relative to the base coordinate according to the first position information and the second position information between the base coordinate and the reference coordinate is:

P3 = P2 + P1

where P3 is the third position information, P1 is the first position information, and P2 is the second position information.
In a further possible design of the first aspect, before the obtaining the first position information of the object to be grabbed relative to the preset reference coordinate, the method further includes:
determining three markers at fixed positions;
and establishing the reference coordinate by taking the center position of the three markers as the origin and the normal vector of the plane constructed by the three markers as an axis.
Optionally, the three markers are all located within a current field of view of a camera of the robot.
Optionally, if the reference position corresponding to the reference coordinate and the object to be grabbed are respectively located in different visual fields of the camera of the robot before and after movement, the method further includes:
re-determining the reference coordinate within the field of view of the camera of the robot in which the object to be grabbed is located.
In a second aspect, an embodiment of the present disclosure provides an apparatus for determining a position of an object, which is applied to a robot, the apparatus including:
the acquisition module is used for acquiring first position information of an object to be grabbed relative to a preset reference coordinate;
the determining module is used for acquiring third position information of the object to be grabbed relative to the base coordinate according to the first position information and the second position information between the base coordinate and the reference coordinate;
the grabbing module is used for grabbing the object to be grabbed according to the third position information;
the reference coordinates are coordinates fixed with position information between the reference coordinates and preset for camera shooting of the robot, and the relative positions of the reference coordinates and the robot are fixed.
In a possible design of the second aspect, the reference position corresponding to the reference coordinates and the object to be grabbed are located within a current field of view of a camera of the robot;
the acquisition module is specifically configured to:
acquiring a first position of the object to be grabbed and a reference position corresponding to the reference coordinate;
and converting the first position into the reference coordinate system based on the reference position to obtain the first position information.
In another possible design of the second aspect, the reference position corresponding to the reference coordinate and the object to be grabbed are respectively located in different visual fields of the camera of the robot before and after movement;
the acquisition module is specifically configured to:
acquiring a second position of the object to be grabbed relative to the camera after the movement;

converting the second position, according to fourth position information between the camera positions before and after the movement, into a third position relative to the camera before the movement;

and converting the third position into the reference coordinate system based on the reference position to obtain the first position information.
In still another possible design of the second aspect, the calculation formula for obtaining the third position information of the object to be grabbed relative to the base coordinate according to the first position information and the second position information between the base coordinate and the reference coordinate is:

P3 = P2 + P1

where P3 is the third position information, P1 is the first position information, and P2 is the second position information.
In a further possible design of the second aspect, before the obtaining the first position information of the object to be grabbed relative to the preset reference coordinates, the determining module is further configured to:
determining three markers at fixed positions;
and establishing the reference coordinate by taking the center position of the three markers as the origin and the normal vector of the plane constructed by the three markers as an axis.
Optionally, the three markers are all located within a current field of view of a camera of the robot.
Optionally, if the reference position corresponding to the reference coordinate and the object to be grabbed are respectively located in different visual fields of the camera of the robot before and after movement, the determining module is further configured to:
re-determining the reference coordinate within the field of view of the camera of the robot in which the object to be grabbed is located.
In a third aspect, the present disclosure provides a robot comprising: a processor, and a memory and transceiver communicatively coupled to the processor;
the memory stores computer-executable instructions; the transceiver is used for receiving and transmitting data;
the processor executes computer-executable instructions stored in the memory to implement the method as described in the first aspect or any of the ways described above.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, are adapted to carry out the method of the first aspect or any of the ways described above.
In a fifth aspect, the present disclosure provides a computer program product comprising a computer program which, when executed by a processor, implements a method as described in the first aspect or any of the ways described above.
The method for determining an object position, the device, the robot, and the storage medium provided by the embodiments of the disclosure are applied to a robot: first position information of an object to be grabbed relative to a preset reference coordinate is acquired, and third position information of the object relative to a base coordinate is obtained according to the first position information and second position information between the base coordinate and the reference coordinate; the reference coordinate is preset for the camera of the robot to photograph, the position information between the reference coordinate and the base coordinate is fixed, and the relative position of the base coordinate and the robot is fixed; the object to be grabbed is then grabbed according to the third position information. In this technical scheme, the preset reference coordinate serves as an intermediate conversion point between the base coordinate and the object to be grabbed; with the position information between the base coordinate and the reference coordinate known, the position information of the object relative to the base coordinate can be determined more accurately, avoiding the position errors caused in the prior art by camera intrinsic parameter problems such as time-varying drift.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating a method for determining a position of an object according to an embodiment of the present disclosure;
fig. 2 is a second flowchart of a method for determining a position of an object according to an embodiment of the disclosure;
FIG. 3 is a schematic illustration of determining a position of an object according to an embodiment of the present disclosure;
fig. 4 is a third flowchart of a method for determining a position of an object according to an embodiment of the present disclosure;
FIG. 5 is a second schematic illustration of determining a position of an object according to an embodiment of the disclosure;
FIG. 6 is a schematic structural diagram of an apparatus for determining a position of an object according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a robot according to an embodiment of the present disclosure.
Specific embodiments of the present disclosure have been shown by way of the above drawings and will be described in more detail below. These drawings and the written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the disclosed concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are some embodiments of the present disclosure, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure are intended to be within the scope of this disclosure.
Before introducing embodiments of the present disclosure, an application background of the embodiments of the present disclosure is first explained:
with the continuous development of the technical field of machine vision, technicians and users thereof also put higher demands on the determination accuracy of the position of an object in machine vision.
In the prior art, object position recognition is mainly based on camera photographing: a reference position is preset, the camera photographs the object to obtain the object's position relative to the camera, and the position information of the object relative to the reference position is then determined according to the positions of the camera and the reference position, where the reference position is fixed relative to the position of the robot.
The problem in the prior art to be solved by the embodiments of the present disclosure: when the camera's intrinsic parameters drift over time, the position information of the object relative to the reference position becomes inaccurate, which in turn makes grabbing of the object inaccurate.
In view of the above technical problems, the inventors of the present disclosure conceived the following. What is needed is the position information of the object to be grabbed relative to a base point whose position is fixed relative to the robot. A reference point can therefore be selected in physical space: since the camera's intrinsic parameters may drift over time, the camera photographs both the reference point and the object to be grabbed, and the object is first converted into coordinates relative to the reference point; then, because the positions of the reference point and the base point are mutually fixed, the object's coordinates relative to the reference point are converted into coordinates relative to the base point, thereby avoiding position errors caused by the camera's intrinsic parameters.
The technical scheme of the present disclosure is described in detail below through specific embodiments. It should be noted that the following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
It is worth noting that the application fields of the method and device for determining an object position, the robot, and the storage medium are not limited.
The execution body of the present disclosure is a robot, and specifically may be a control unit in the robot, a controller that controls the robot, or the like.
Fig. 1 is a flowchart of a method for determining the position of an object according to an embodiment of the present disclosure. As shown in fig. 1, the method may include the following steps:
and 11, acquiring first position information of the object to be grabbed relative to a preset reference coordinate.
In this step, when the camera's accuracy varies over time for some reason (for example, temperature drift, or slight component movement after swinging), a reference coordinate can be preset in physical space; the camera then photographs the object to be grabbed and the reference coordinate, so the position information of the object to be grabbed relative to the reference coordinate, i.e., the first position information, can be obtained.
As an example, the first position information is the coordinates of the object to be grabbed relative to the origin of the reference coordinate. Treating the object to be grabbed as a point, the first position information may be (3, 4, 5); treating the object as a cylinder (which may still be treated as a point when its base is relatively small), the first position information may consist of (3, 4, 5) and (4, 6, 7).
It should be understood that the coordinate points involved in the first position information differ with the shape of the object to be grabbed.
Optionally, before step 11, the method for determining the position of the object may further include constructing the reference coordinate:
step 1, three markers (English: datum point) of fixed positions are determined.
The reference coordinate is fixed in physical space and free of problems such as time variation or drift; that is, the three markers constituting the reference coordinate are markers at fixed positions.
In one possible implementation, the three markers are all located within the current field of view of the camera of the robot.
The point of this restriction is that the camera can determine the position information of the reference coordinate in a single shot; multiple shots are not needed, so problems such as time variation cannot affect the accuracy of the reference coordinate's position information.
Step 2: establish the reference coordinate by taking the center position of the three markers as the origin and the normal vector of the plane constructed by the three markers as an axis.
In physical space, the center position of the three markers is determined, the center position is marked as an origin B, a plane constructed by the three markers is determined, and then a reference coordinate is constructed based on the normal vector of the plane and the origin B (see the embodiment shown in fig. 3 for a diagram of this implementation).
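As an illustrative sketch of this construction (not code from the patent; the choice of in-plane x-axis is an added assumption, since the embodiment only fixes the origin and the plane normal), the reference coordinate can be built from three marker positions already expressed in the camera frame:

```python
import numpy as np

def reference_frame(p1, p2, p3):
    """Build a 4x4 pose for the reference coordinate: origin at the markers'
    center, z-axis along the normal of the plane through the three markers."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    origin = (p1 + p2 + p3) / 3.0            # origin B: center of the markers
    z = np.cross(p2 - p1, p3 - p1)           # normal of the marker plane
    z = z / np.linalg.norm(z)
    x = (p2 - p1) / np.linalg.norm(p2 - p1)  # assumed in-plane x direction
    y = np.cross(z, x)                       # complete a right-handed frame
    pose = np.eye(4)
    pose[:3, :3] = np.column_stack((x, y, z))
    pose[:3, 3] = origin
    return pose

# Hypothetical marker positions in the camera frame (meters).
T_cam_ref = reference_frame([0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0])
```

Any fixed in-plane direction works for the x-axis as long as it is used consistently, since the method only relies on the reference coordinate being fixed in physical space.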
Step 12: acquire third position information of the object to be grabbed relative to the base coordinate according to the first position information and the second position information between the base coordinate and the reference coordinate.
Here the reference coordinate is preset for the camera of the robot to photograph, the position information between the reference coordinate and the base coordinate is fixed, and the relative position of the base coordinate and the robot is fixed.

In this step, the positions of the base coordinate and the reference coordinate in physical space are known; after the first position information of the object to be grabbed relative to the reference coordinate is obtained, the position of the object can be converted into the base coordinate based on the second position information between the base coordinate and the reference coordinate, yielding the third position information of the object relative to the base coordinate.
That is, the second position information can be understood as a positional relationship between the reference coordinates and the base coordinates.
Optionally, the calculation formula in step 12 is:

P3 = P2 + P1

where P3 is the third position information, P1 is the first position information, and P2 is the second position information.
In one possible implementation of the second position information: taking the base coordinate as the origin, marked (0, 0, 0), the coordinate of the reference coordinate relative to the base coordinate is (2, 2, 2);

if the first position information of the object to be grabbed relative to the reference coordinate is (3, 4, 5), the third position information of the object relative to the base coordinate is (5, 6, 7); that is, the position of the object to be grabbed in the coordinate system corresponding to the base coordinate is (5, 6, 7).
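The following minimal sketch (illustrative, not taken from the patent) reproduces this conversion. Note that the component-wise addition P3 = P2 + P1 implicitly assumes the axes of the reference coordinate are aligned with those of the base coordinate, i.e., the two frames differ only by translation; a general implementation would compose full rigid transforms instead:

```python
import numpy as np

def to_base(p_obj_in_ref, p_ref_origin_in_base):
    """Third position information: object position in the base coordinate,
    computed as P3 = P2 + P1 in the translation-only case."""
    return np.asarray(p_ref_origin_in_base) + np.asarray(p_obj_in_ref)

# Values from the example above.
p3 = to_base([3, 4, 5], [2, 2, 2])
assert np.array_equal(p3, [5, 6, 7])
```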
Step 13: grab the object to be grabbed according to the third position information.

In this step, since the relative position of the base coordinate and the robot is known and fixed, the gripping unit can be moved, according to the third position information of the object relative to the base coordinate, to the position indicated by the third position information, so as to perform the grabbing operation on the object.
This process avoids involving the camera's extrinsic parameters and the robot's flange, and also avoids errors caused by the camera's intrinsic parameters.
The method for determining an object position provided by this embodiment is applied to a robot: first position information of the object to be grabbed relative to the preset reference coordinate is acquired, and third position information of the object relative to the base coordinate is obtained according to the first position information and the second position information between the base coordinate and the reference coordinate; the reference coordinate is preset for the camera of the robot to photograph, the position information between the reference coordinate and the base coordinate is fixed, and the relative position of the base coordinate and the robot is fixed; the object is then grabbed according to the third position information. In this technical scheme, the preset reference coordinate serves as an intermediate conversion point between the base coordinate and the object to be grabbed; with the position information between the base coordinate and the reference coordinate known, the position information of the object relative to the base coordinate can be determined accurately, avoiding the position errors caused in the prior art by camera intrinsic parameter problems such as time-varying drift, reducing the camera's absolute error, correcting for time variation of the robot arm, and removing the need to recalibrate extrinsic parameters when the camera is replaced.
On the basis of the embodiment shown in fig. 1, if the reference position corresponding to the reference coordinate and the object to be grabbed are located in the current field of view of the camera of the robot, the implementation of step 11 may be as shown in fig. 2, and fig. 2 is a second schematic flow chart of the method for determining the position of the object according to the embodiment of the disclosure.
Fig. 3 is a schematic diagram for determining a position of an object according to an embodiment of the present disclosure, and the embodiment shown in fig. 2 is described with reference to the example shown in fig. 3.
Step 21: acquire a first position of the object to be grabbed and a reference position corresponding to the reference coordinate.

In this step, in a single photograph taken by the camera, the object to be grabbed and the reference coordinate formed by the three markers are both fully contained within the camera's field of view; the first position of the object to be grabbed in the camera's field of view and the reference position corresponding to the reference coordinate are then acquired.

In one possible implementation, a coordinate system is established with the camera as the origin; the first position obtained for the object O to be grabbed may be (4, 5, 6), and the reference position corresponding to the reference coordinate B may be (1, 1, 1).

Step 22: convert the first position into the reference coordinate system based on the reference position to obtain the first position information.

In this step, since the object to be grabbed and the reference coordinate are in the same camera coordinate system after the camera photographs the physical space, the first position is converted into the reference coordinate system based on the reference position, yielding the position information of the object to be grabbed in the reference coordinate, i.e., the first position information.

In the above possible implementation, with the first position (4, 5, 6), the reference position (1, 1, 1) corresponding to the reference coordinate B, and the reference coordinate set as the origin (0, 0, 0), the position information of the object O to be grabbed in the reference coordinate B is (3, 4, 5).
Further, after this embodiment, the above-described step 12 is performed: with the positional relationship of the reference coordinate and the base coordinate known in physical space, the object to be grabbed is converted into third position information relative to the base coordinate.

For example, with the base coordinate at the origin (0, 0, 0) and the reference coordinate at (2, 2, 2) relative to the base coordinate, the third position information of the object relative to the base coordinate is (5, 6, 7).
According to the method for determining an object position provided by this embodiment, the first position of the object to be grabbed and the reference position corresponding to the reference coordinate are acquired, and the first position is then converted into the reference coordinate system based on the reference position to obtain the first position information. In this technical scheme, the reference position corresponding to the reference coordinate and the object to be grabbed are both within the current field of view of the robot's camera, so a single direct conversion yields the first position information of the object relative to the preset reference coordinate.
On the basis of the embodiment shown in fig. 1, if the reference position corresponding to the reference coordinate and the object to be grabbed are respectively located in different fields of view of the robot's camera before and after the camera moves, step 11 may be implemented as shown in fig. 4; fig. 4 is a third flowchart of the method for determining the position of an object provided in an embodiment of the present disclosure.
Fig. 5 is a second schematic diagram for determining a position of an object according to an embodiment of the present disclosure, and the embodiment shown in fig. 4 is described with reference to the example shown in fig. 5.
Step 41: acquire a second position of the object to be grabbed relative to the camera after the movement.

In this step, the object to be grabbed and the reference coordinate formed by the three markers cannot both be fully contained within the camera's field of view in a single photograph; the robot therefore controls the camera to move and photographs the object to be grabbed and the position corresponding to the reference coordinate separately.

Further, taking the moved camera as the reference, the second position of the object to be grabbed relative to the moved camera is determined.

In one possible implementation, the second position is as follows: in camera field of view 2, the coordinates of the object O to be grabbed relative to Z2 are (0, 0, -4).

Step 42: determine, according to fourth position information between the camera positions before and after the movement, a third position of the object relative to the camera before the movement.

In this step, the robot knows the fourth position information corresponding to the camera's two poses before and after the movement; based on the second position relative to the moved camera, the position of the object to be grabbed can therefore be converted into a third position relative to the camera before the movement.

In one possible implementation, the Z-axis and Y-axis coordinates of Z1 and Z2 are unchanged and the camera moves only along the X-axis; the fourth position information is then, with Z1 as the origin (0, 0, 0), the coordinates of Z2, namely (3, 0, 0). With the object O to be grabbed at (0, 0, -4) relative to Z2, the third position of the object relative to the camera before the movement is (3, 0, -4).
Step 43: convert the third position into the reference coordinate system based on the reference position to obtain the first position information.
In this step, since the positional relationship between the reference position corresponding to the reference coordinate and the camera before the movement is fixed in physical space, the third position is converted into the reference coordinate system based on the reference position, yielding the position information of the object to be grabbed in the reference coordinate, i.e., the first position information.

In the above possible implementation, with the third position (3, 0, -4), the reference position (1, 1, 1) corresponding to the reference coordinate B, and the reference coordinate set as the origin (0, 0, 0), the position information of the object O to be grabbed in the reference coordinate B is (2, -1, -5).
Further, after this embodiment, the above-described step 12 is performed: with the positional relationship of the reference coordinate and the base coordinate fixed in physical space, the object to be grabbed is converted into third position information relative to the base coordinate.

For example, with the base coordinate at the origin (0, 0, 0) and the reference coordinate at (2, 2, 2) relative to the base coordinate, the object to be grabbed is converted into third position information (4, 1, -3) relative to the base coordinate.
In addition, optionally, if the reference position corresponding to the reference coordinate and the object to be grabbed are respectively located in different visual fields of the camera of the robot before and after the movement, the reference coordinate may be determined again when the object to be grabbed is located in the visual field of the camera of the robot.
In this embodiment, to avoid the multiple conversions required by the embodiment shown in fig. 4, the reference coordinate may be re-determined within the field of view of the camera in which the photographed object to be grabbed lies, where the positional relationship between the new reference coordinate and the base coordinate is fixed; a single conversion then yields the third position information of the object relative to the base coordinate.
That is, in practical applications, more markers can be installed, so that whenever the camera photographs an object to be grabbed in a single shot it can always see three markers at the same time.
It should be understood that in actual use the positional relationship between the camera and the flange is fixed, so the camera's movement can be derived from the flange's movement.
Alternatively, the calculation formula corresponding to the embodiment shown in fig. 4 may be:

P1 = P2 + P4 - PB

where P1 is the first position information, P2 is the second position, P4 is the fourth position information, and PB is the position of the reference coordinate relative to the camera before the movement.
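A minimal sketch (illustrative; the function and variable names are assumptions, and the frames are again taken to differ only by translation) chains this formula with step 12, using the numbers from the examples above:

```python
import numpy as np

def first_position_info(p_obj_in_cam2, p_cam2_in_cam1, p_ref_in_cam1):
    """P1 = P2 + P4 - PB: object position relative to the reference coordinate
    when the object (camera view 2) and the markers (camera view 1) were
    photographed separately."""
    return (np.asarray(p_obj_in_cam2) + np.asarray(p_cam2_in_cam1)
            - np.asarray(p_ref_in_cam1))

# Example values: object at (0, 0, -4) in the moved camera's frame, camera
# displacement (3, 0, 0), reference origin at (1, 1, 1) before the movement.
p1 = first_position_info([0, 0, -4], [3, 0, 0], [1, 1, 1])
assert np.array_equal(p1, [2, -1, -5])

# Step 12 then converts to the base coordinate: P3 = P2 + P1.
p3 = np.asarray([2, 2, 2]) + p1
assert np.array_equal(p3, [4, 1, -3])
```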
According to the method for determining an object position provided by this embodiment, the second position of the object to be grabbed relative to the moved camera is acquired; the second position is converted, according to the fourth position information between the camera positions before and after the movement, into the third position relative to the camera before the movement; and the third position is converted into the reference coordinate system based on the reference position to obtain the first position information. In this technical scheme, the reference position corresponding to the reference coordinate and the object to be grabbed lie in different fields of view of the robot's camera before and after the movement, so the conversion is carried out step by step to obtain the first position information of the object relative to the preset reference coordinate.
The following are device embodiments of the present disclosure that may be used to perform method embodiments of the present disclosure. For details not disclosed in the embodiments of the apparatus of the present disclosure, please refer to the embodiments of the method of the present disclosure.
Fig. 6 is a schematic structural diagram of an apparatus for determining a position of an object according to an embodiment of the present disclosure. As shown in fig. 6, the apparatus for determining the position of an object is applied to a robot, and includes:
an obtaining module 61, configured to obtain first position information of an object to be grabbed relative to a preset reference coordinate;
the determining module 62 is configured to obtain third position information of the object to be grabbed relative to the base coordinate according to the first position information and the second position information between the base coordinate and the reference coordinate;
a grabbing module 63, configured to grab an object to be grabbed according to the third position information;
the reference coordinates are coordinates fixed with position information between the reference coordinates and preset for camera shooting of the robot, and the relative positions of the reference coordinates and the robot are fixed.
In one possible design of the embodiment of the disclosure, the reference position corresponding to the reference coordinate and the object to be grabbed are located in a current field of view of a camera of the robot;
the obtaining module 61 is specifically configured to:
acquiring a first position of an object to be grabbed and a reference position corresponding to a reference coordinate;
and converting the first position into the reference coordinate system based on the reference position to obtain the first position information.
In another possible design of the embodiment of the disclosure, the reference position corresponding to the reference coordinate and the object to be grabbed are respectively located in different visual fields of the camera of the robot before and after the movement;
the obtaining module 61 is specifically configured to:
acquiring a second position of the object to be grabbed relative to the camera after the movement;

converting the second position, according to the fourth position information between the camera positions before and after the movement, into a third position relative to the camera before the movement;

and converting the third position into the reference coordinate system based on the reference position to obtain the first position information.
In still another possible design of the embodiment of the present disclosure, the calculation formula for obtaining the third position information of the object to be grabbed relative to the base coordinate according to the first position information and the second position information between the base coordinate and the reference coordinate is:

P3 = P2 + P1

where P3 is the third position information, P1 is the first position information, and P2 is the second position information.
In yet another possible design of the embodiment of the present disclosure, before acquiring the first position information of the object to be grabbed relative to the preset reference coordinate, the determining module 62 is further configured to:
determining three markers at fixed positions;
the reference coordinates are established by taking the central positions of the three markers as the origin and the normal vector of the plane constructed by the three markers.
Optionally, the three markers are all located within the current field of view of the camera of the robot.
Optionally, if the reference position corresponding to the reference coordinate and the object to be grabbed are respectively located in different fields of view of the camera of the robot before and after the movement, the determining module 62 is further configured to:
the reference coordinates are redetermined within the field of view of the camera of the robot where the object to be grabbed is located.
The device for determining the position of the object provided in the embodiment of the present disclosure may be used to execute the method for determining the position of the object in any of the embodiments described above, and its implementation principle and technical effects are similar, and will not be described in detail herein.
It should be understood that the division of the above apparatus into modules is merely a division of logical functions; in actual implementation, the modules may be fully or partially integrated into one physical entity or physically separated. The modules may all be implemented as software invoked by a processing element, or all as hardware; or some modules may be implemented as software invoked by a processing element and others as hardware. In addition, all or part of the modules may be integrated together or implemented independently. The processing element described here may be an integrated circuit with signal processing capability. In implementation, each step of the above method, or each of the above modules, may be completed by an integrated logic circuit of hardware in the processor element or by instructions in the form of software.
Fig. 7 is a schematic structural diagram of a robot according to an embodiment of the disclosure, as shown in fig. 7, the robot may include: a processor 71, a memory 72 and computer program instructions stored on the memory 72 and executable on the processor 71, the processor 71 implementing the method provided by any of the preceding embodiments when the computer program instructions are executed.
Alternatively, the above devices of the robot may be connected by a system bus.
The memory 72 may be a separate memory unit or may be a memory unit integrated in the processor 71. The number of processors 71 is one or more.
It should be appreciated that the processor 71 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), etc. A general-purpose processor may be a microprocessor or any conventional processor. The steps of a method disclosed in connection with the present disclosure may be embodied as being executed directly by a hardware processor, or executed by a combination of hardware and software modules in the processor.
The system bus may be a peripheral component interconnect (PCI) bus or an extended industry standard architecture (EISA) bus, among others. The system bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bold line is shown in the figure, but this does not mean there is only one bus or one type of bus. The memory 72 may include random access memory (RAM) and may also include non-volatile memory (NVM), for example at least one disk storage.
All or part of the steps for implementing the method embodiments described above may be performed by hardware associated with program instructions. The foregoing program may be stored in a readable memory. When executed, the program performs the steps of the method embodiments described above; and the foregoing memory (storage medium) includes: read-only memory (ROM), RAM, flash memory, hard disk, solid-state disk, magnetic tape, floppy disk, optical disk, and any combination thereof.
The robot provided in the embodiments of the present disclosure may be used to execute the method for determining the position of the object provided in any of the embodiments of the method, and its implementation principle and technical effects are similar, and are not described herein again.
Embodiments of the present disclosure provide a computer-readable storage medium having stored therein computer instructions that, when executed on a computer, cause the computer to perform the above-described method of determining the position of an object.
The computer readable storage medium described above may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as static random access memory, electrically erasable programmable read-only memory, magnetic memory, flash memory, magnetic disk or optical disk. A readable storage medium can be any available medium that can be accessed by a general purpose or special purpose computer.
In the alternative, a readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. In the alternative, the readable storage medium may be integral to the processor. The processor and the readable storage medium may reside in an application specific integrated circuit (Application Specific Integrated Circuits, ASIC). The processor and the readable storage medium may reside as discrete components in a device.
The disclosed embodiments also provide a computer program product comprising a computer program stored in a computer readable storage medium, from which at least one processor can read, said at least one processor executing said computer program, implementing the above method for determining the position of an object.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. A method of determining a position of an object, the method being applied to a robot, the method comprising:
acquiring first position information of an object to be grabbed relative to a preset reference coordinate;
acquiring third position information of the object to be grabbed relative to a base coordinate according to the first position information and second position information between the base coordinate and the reference coordinate;
grabbing the object to be grabbed according to the third position information;
the reference coordinates are coordinates fixed with position information between the reference coordinates and preset for camera shooting of the robot, and the relative positions of the reference coordinates and the robot are fixed.
2. The method according to claim 1, wherein the reference position corresponding to the reference coordinate and the object to be grabbed are located within a current field of view of a camera of the robot;
the obtaining the first position information of the object to be grabbed relative to the preset reference coordinate includes:
acquiring a first position of the object to be grabbed and a reference position corresponding to the reference coordinate;
and converting the first position into the reference coordinate system based on the reference position to obtain the first position information.
3. The method according to claim 1, wherein the reference position corresponding to the reference coordinates and the object to be grasped are respectively located in different visual fields of the camera of the robot before and after movement;
the obtaining the first position information of the object to be grabbed relative to the preset reference coordinate includes:
acquiring a second position of the object to be grabbed relative to the camera after the movement;

converting the second position, according to fourth position information between the camera positions before and after the movement, into a third position relative to the camera before the movement;

and converting the third position into the reference coordinate system based on the reference position to obtain the first position information.
4. The method according to claim 1, wherein the calculation formula for obtaining the third position information of the object to be grabbed relative to the base coordinate according to the first position information and the second position information between the base coordinate and the reference coordinate is: P3 = P2 + P1, where P3 is the third position information, P1 is the first position information, and P2 is the second position information.
5. The method according to claim 1, wherein before the acquiring the first position information of the object to be grasped with respect to the preset reference coordinates, the method further comprises:
determining three markers at fixed positions;
and establishing the reference coordinate by taking the center position of the three markers as the origin and the normal vector of the plane constructed by the three markers as an axis.
6. The method of claim 5, wherein the three markers are each located within a current field of view of a camera of the robot.
7. A method according to claim 3, wherein if the reference position corresponding to the reference coordinate and the object to be grasped are located in different fields of view of the camera of the robot before and after movement, respectively, the method further comprises:
re-determining the reference coordinate within the field of view of the camera of the robot in which the object to be grabbed is located.
8. An apparatus for determining a position of an object, applied to a robot, comprising:
the acquisition module is used for acquiring first position information of an object to be grabbed relative to a preset reference coordinate;
the determining module is used for acquiring third position information of the object to be grabbed relative to a base coordinate according to the first position information and second position information between the base coordinate and the reference coordinate;
the grabbing module is used for grabbing the object to be grabbed according to the third position information;
the reference coordinates are coordinates fixed with position information between the reference coordinates and preset for camera shooting of the robot, and the relative positions of the reference coordinates and the robot are fixed.
9. A robot, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the method of any one of the preceding claims 1 to 7.
10. A computer readable storage medium having stored therein computer executable instructions which when executed by a processor are adapted to carry out the method of any one of the preceding claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310263886.8A CN116214514A (en) | 2023-03-17 | 2023-03-17 | Method and device for determining object position, robot and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310263886.8A CN116214514A (en) | 2023-03-17 | 2023-03-17 | Method and device for determining object position, robot and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116214514A true CN116214514A (en) | 2023-06-06 |
Family
ID=86573080
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310263886.8A Pending CN116214514A (en) | 2023-03-17 | 2023-03-17 | Method and device for determining object position, robot and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116214514A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||