Method for determining the position of an operating object, robot and automation system

Info

Publication number
CN112529856A
Authority
CN
China
Prior art keywords
operation object
robot
coordinate system
target operation
determining
Legal status: Pending
Application number
CN202011379352.4A
Other languages
Chinese (zh)
Inventor
王全鑫 (Wang Quanxin)
杨师华 (Yang Shihua)
匡立 (Kuang Li)
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd

Classifications

    • G06T 7/0004 — Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • B25J 13/088 — Controls for manipulators by means of sensing devices, e.g. viewing or touching devices, with position, velocity or acceleration sensors
    • B25J 19/023 — Accessories fitted to manipulators; optical sensing devices including video camera means
    • G06T 7/66 — Analysis of geometric attributes of image moments or centre of gravity
    • G06T 7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/10004 — Image acquisition modality: still image; photographic image
    • G06T 2207/10024 — Image acquisition modality: colour image
    • G06T 2207/30108 — Subject of image: industrial image inspection
    • G06T 2207/30204 — Subject of image: marker


Abstract

The application discloses a method for determining the position of an operation object, a robot system, and an automation system, belonging to the field of automation technology. In the method, a reference mark arranged on the operation body serves as a reference position. By acquiring a global image containing the reference mark, captured by a camera located on the robot, the user coordinate system of the operation body can be associated with the tool coordinate system of the robot through the camera. The position of the operation object in the user coordinate system can thus be converted into its position in the tool coordinate system of the robot, so that the robot can move to the operating position corresponding to the operation object according to that position and operate on the operation object.

Description

Method for determining the position of an operating object, robot and automation system
Technical Field
The present application relates to the field of automation technologies, and in particular, to a method, a robot system, and an automation system for determining a position of an operation object.
Background
Against the background of the Industry 5.0 era, the intelligent transformation of Optical Distribution Frame (ODF) machine rooms is imperative. Making an ODF machine room intelligent, that is, automating its operation, means that daily operations in the machine room, such as plugging and unplugging optical fibers, cleaning optical fiber ports, and inspecting optical fiber ports, are performed by a robot instead of manually.
Before a robot operates on an operation object such as an optical fiber port, the position of the operation object relative to the robot needs to be determined; the robot can then derive a moving direction, a moving distance, and the like, and move to a working position to operate on the operation object.
Disclosure of Invention
The embodiments of the present application provide a method for determining the position of an operation object, a robot system, and an automation system. The method can be applied to a robot, so that the robot can determine the position of an operation object relative to its tool center point. The technical solutions are as follows:
in a first aspect, a method for determining a position of an operation object is provided, and the method is applied to a robot, and the method includes:
acquiring a global image of an operation body captured by a camera located on the robot, where the operation body is the body on which the operation objects of the robot are located, a reference mark and at least one operation object are distributed on the same side of the operation body, and the global image is an image that includes the reference mark; acquiring the positions of a target operation object and the reference mark in a user coordinate system of the operation body, where the target operation object is the operation object, among the at least one operation object, that needs to be operated on; and determining a first position of the target operation object in the tool coordinate system of the robot based on the positions of the target operation object and the reference mark in the user coordinate system, the global image, and the relative physical positions of the tool center point of the robot and the camera center point of the camera.
The operation body carries the operation objects; it may be, for example, a rack in an Optical Distribution Frame (ODF) machine room, a rack in a server room, or a test bench in a laboratory. When the operation body is an optical fiber distribution frame, the operation object may be an optical fiber port.
The reference mark may be a planar mark, with one vertex serving as a reference point that coincides with the reference point of the operation body. To ease identification of the reference mark, it may combine a color with a simple shape; for example, the color of its edge may differ both from the color of the frame of the operation body and from the color of the mark's own body portion.
The robot can operate on an operation object on the operation body. The robot has a mechanical arm on which the camera is fixed; the camera may be a monocular camera. A distance measuring device may further be fixed on the robot for measuring the distance to the operation body and the operation object. Illustratively, the distance measuring device may be a laser rangefinder.
The tool coordinate system is a coordinate system established with the tool center point of the robot as its origin, used to describe the position and posture of the tool at the robot end; the Tool Center Point (TCP) is a point on the tool (e.g., a jig, a glue gun, or a welding gun) at the robot end. The user coordinate system is a concept complementary to the tool coordinate system; it is located on the workpiece (e.g., a fiber distribution frame) that the robot needs to operate on and may also be called the workpiece coordinate system.
The embodiments of the present application provide a method for determining the position of an operation object. A reference mark arranged on the operation body serves as a reference position, and a global image containing the reference mark, captured by a camera located on the robot, is acquired. The user coordinate system of the operation body and the tool coordinate system of the robot can thereby be associated through the camera, so that the position of the operation object in the user coordinate system can be converted into its position in the tool coordinate system, and the robot can move to the vicinity of the operation object according to that position and operate on it.
In one possible implementation, acquiring the global image of the operation body captured by the camera located on the robot includes: adjusting the posture of the robot so that the attitude angles of the tool coordinate system of the robot and the user coordinate system of the operation body are consistent; and controlling the camera to capture the global image of the operation body.
When the attitude angles of the tool coordinate system and the user coordinate system are consistent, the X axis, the Y axis and the Z axis of the tool coordinate system are respectively parallel to the X axis, the Y axis and the Z axis of the user coordinate system. The X-axis and the Y-axis may be parallel to an operation plane in which the operation object and the reference mark are located, and the Z-axis is perpendicular to the operation plane. In addition, the optical axis of the camera is parallel to the Z-axis.
According to the solution shown in this embodiment, the global image is captured while the attitude angles of the tool coordinate system and the user coordinate system are consistent. When the position of the target operation object is then determined from the global image, its coordinate on the Z axis of the tool coordinate system need not be considered (that coordinate can be measured directly by a distance measuring device); only its coordinates on the X and Y axes of the tool coordinate system need to be calculated. With fewer parameters involved, the position of the target operation object determined from the global image in the tool coordinate system is more accurate (accuracy can reach the 0.1 mm level), the algorithm is simpler, and processing is faster.
In one possible implementation, determining the first position of the target operation object in the tool coordinate system of the robot based on the positions of the target operation object and the reference mark in the user coordinate system, the global image, and the relative physical positions of the tool center point of the robot and the camera center point of the camera includes:
determining the positions of the target operation object on the X and Y axes based on the positions of the target operation object and the reference mark in the user coordinate system, the global image, and the relative physical positions of the tool center point of the robot and the camera center point of the camera; and acquiring the position of the target operation object on the Z axis measured by a distance measuring device on the robot.
The camera may be a monocular camera, and the distance measuring device may be a laser rangefinder.
According to the solution shown in this embodiment, the first position of the target operation object in the tool coordinate system of the robot is split into positions on the X, Y, and Z axes of the tool coordinate system. The positions on the X and Y axes are determined from the global image of the operation body captured by the camera, and the position on the Z axis is determined by the distance measuring device.
In the prior art, the positions of the target operation object on all three axes are determined directly by a multi-view structured-light camera. Here, by contrast, the camera is used only to determine the positions on the X and Y axes, and the position on the Z axis is measured directly by the distance measuring device. A lower-cost monocular camera therefore suffices, the algorithm for determining the X and Y positions is simpler, and those positions are more accurate. In addition, because the Z position is measured directly by the distance measuring device, it is also more accurate.
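Illustratively, this division of labor may be sketched as follows in Python (the type and helper names are assumptions for illustration; the embodiment defines no programming interface):

```python
from dataclasses import dataclass

@dataclass
class ToolPosition:
    """Position of the target operation object in the tool coordinate system (mm)."""
    x: float  # parallel to the operation plane, computed from the image
    y: float  # parallel to the operation plane, computed from the image
    z: float  # perpendicular to the operation plane, read from the rangefinder

def assemble_first_position(xy_from_image, z_from_rangefinder):
    # X and Y come from monocular image analysis; Z is a direct distance reading.
    x, y = xy_from_image
    return ToolPosition(x, y, z_from_rangefinder)

print(assemble_first_position((12.4, -3.7), 250.0))
```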
In one possible implementation, controlling the camera to capture a global image of the operation body includes: acquiring size information of the operation body; determining a first shooting position based on the size information of the operation body and the parameter information of the camera; and controlling the camera to move to the first shooting position and capture the global image there.
The size information of the operation body may include the length and width of its operation surface. The parameter information of the camera may include the working distance of the camera.
According to this solution, the first shooting position is determined from the size information of the operation body and the parameter information of the camera, so that the captured global image presents the operation body well, which facilitates subsequent calculations based on it.
In one possible implementation, a first information mark is further distributed on the same side of the operation body, the first information mark being a mark in which the size information of the operation body is stored, and acquiring the size information of the operation body includes: identifying the first information mark and reading the size information of the operation body.
The first information mark and the reference mark may be two separate marks, or the first information mark may be integrated into the reference mark. The first information mark may be a two-dimensional code, a barcode, or any other mark capable of storing information.
The first information mark may further store size information of the operation object.
According to the solution shown in this embodiment, storing the size information of the operation body in the first information mark allows the method for determining the position of an operation object to be migrated across different scenes.
The robot need not store any scene information in advance: when it migrates to a new scene, it can obtain the size information of the operation body by scanning the first information mark, determine the first shooting position based on that size information, and capture the global image.
In one possible implementation, acquiring the positions of the target operation object and the reference mark in the user coordinate system of the operation body includes:
acquiring an identifier of the operation body; sending an operation instruction acquisition request carrying the identifier of the operation body to an upper computer; and receiving an operation instruction sent by the upper computer, the operation instruction carrying the positions of the target operation object and the reference mark in the user coordinate system.
The operation instruction may also carry other information, such as the operation type.
In one possible implementation, a second information mark is further distributed on the same side of the operation body, the second information mark being a mark in which the identifier of the operation body is stored, and acquiring the identifier of the operation body includes: identifying the second information mark and reading the identifier of the operation body.
The second information mark and the reference mark may be two separate marks, or the second information mark may be integrated into the reference mark. The second information mark may be a two-dimensional code, a barcode, or any other mark capable of storing information. The second information mark and the first information mark may also be the same mark.
In one possible implementation, determining the first position of the target operation object in the tool coordinate system of the robot based on the positions of the target operation object and the reference mark in the user coordinate system, the global image, and the relative physical positions of the tool center point of the robot and the camera center point of the camera includes:
determining the pixel position and pixel size of the reference mark in the global image; determining the mapping ratio of physical size to pixel size based on the pixel size of the reference mark and the physical size of the reference mark; and determining the first position of the target operation object in the tool coordinate system based on the positions of the target operation object and the reference mark in the user coordinate system, the pixel position of the reference mark in the global image, the mapping ratio, and the relative physical positions of the tool center point and the camera center point.
The physical size of the reference mark can be a standard value stored in the robot in advance.
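Illustratively, the mapping-ratio step may be sketched as follows, assuming the physical width of the reference mark is a stored standard value in millimetres and its pixel width has been measured in the global image (names and values are illustrative):

```python
def mapping_ratio(physical_size_mm: float, pixel_size_px: float) -> float:
    """Millimetres per pixel in the plane of the operation body.

    A single uniform scale applies across the image because the optical axis
    is held perpendicular to the operation plane when the image is captured.
    """
    return physical_size_mm / pixel_size_px

# Example: a reference mark 40 mm wide that spans 200 px in the global image.
mm_per_px = mapping_ratio(40.0, 200.0)  # 0.2 mm per pixel
```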
In one possible implementation, determining the first position of the target operation object in the tool coordinate system based on the positions of the target operation object and the reference mark in the user coordinate system, the pixel position of the reference mark in the global image, the mapping ratio, and the relative physical positions of the tool center point and the camera center point includes:
determining the relative physical position of the camera center point and the reference mark based on the image center point of the global image, the pixel position of the reference mark in the global image, and the mapping ratio, where the image center point is the mapping point of the camera center point in the global image; determining the relative physical position of the tool center point and the reference mark based on the relative physical position of the camera center point and the reference mark and the relative physical position of the tool center point and the camera center point; and determining the first position of the target operation object in the tool coordinate system based on the relative physical position of the tool center point and the reference mark and the positions of the target operation object and the reference mark in the user coordinate system.
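Illustratively, this chain of relative positions may be sketched as follows; all vectors are two-dimensional (X, Y) positions in millimetres, and the helper names, sign conventions, and numeric values are assumptions for illustration:

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1])

def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def first_position_xy(image_center_px, ref_mark_px, mm_per_px,
                      camera_rel_tcp_mm, target_in_user_mm, ref_in_user_mm):
    # 1. Reference mark relative to the camera center point, from the pixel
    #    offset and the mapping ratio (assumes image axes align with tool X/Y;
    #    a real system would calibrate any axis flip).
    d_px = sub(ref_mark_px, image_center_px)
    ref_rel_camera_mm = (d_px[0] * mm_per_px, d_px[1] * mm_per_px)
    # 2. Reference mark relative to the tool center point, via the fixed
    #    TCP-to-camera offset measured when the camera was mounted.
    ref_rel_tcp_mm = add(camera_rel_tcp_mm, ref_rel_camera_mm)
    # 3. Target relative to the reference mark, known from the user coordinate system.
    target_rel_ref_mm = sub(target_in_user_mm, ref_in_user_mm)
    # 4. Target relative to the tool center point: the first position (X, Y).
    return add(ref_rel_tcp_mm, target_rel_ref_mm)

print(first_position_xy(image_center_px=(960, 540), ref_mark_px=(300, 200),
                        mm_per_px=0.2, camera_rel_tcp_mm=(-35.0, 0.0),
                        target_in_user_mm=(120.0, 80.0), ref_in_user_mm=(0.0, 0.0)))
```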
In another possible implementation, determining the first position of the target operation object in the tool coordinate system based on the positions of the target operation object and the reference mark in the user coordinate system, the pixel position of the reference mark in the global image, the mapping ratio, and the relative physical positions of the tool center point and the camera center point includes:
determining the pixel position of the target operation object in the global image based on the positions of the target operation object and the reference mark in the user coordinate system, the pixel position of the reference mark in the global image, and the mapping ratio; determining the relative physical position of the camera center point and the target operation object based on the image center point of the global image, the pixel position of the target operation object in the global image, and the mapping ratio, where the image center point is the mapping point of the camera center point in the global image; and determining the position of the target operation object in the tool coordinate system based on the relative physical position of the camera center point and the target operation object and the relative physical position of the tool center point and the camera center point.
In one possible implementation, after determining the first position of the target operation object in the tool coordinate system of the robot, the method further includes:
determining a second shooting position based on a first position of the target operation object in a tool coordinate system of the robot; controlling the camera to move to the second shooting position, and shooting a local image containing the target operation object at the second shooting position; and determining a second position of the target operation object in a tool coordinate system of the robot based on the local image.
According to the solution shown in this embodiment, determining the second position of the target operation object in the tool coordinate system of the robot from the local image makes the determined second position more accurate, so the method is suitable for scenes with higher accuracy requirements.
In one possible implementation, determining, based on the local image, the second position of the target operation object in the tool coordinate system of the robot includes: determining the positions of the target operation object on the X and Y axes of the tool coordinate system based on the local image; and acquiring the position of the target operation object on the Z axis of the tool coordinate system measured by a distance measuring device on the robot.
According to this solution, the second position of the target operation object in the tool coordinate system of the robot is split into positions on the X, Y, and Z axes of the tool coordinate system. The positions on the X and Y axes are determined from the local image of the target operation object captured by the camera, and the position on the Z axis is determined by the distance measuring device.
As in the comparison above, the prior art determines all three positions directly with a multi-view structured-light camera, whereas here the camera determines only the X and Y positions and the distance measuring device measures the Z position directly. A lower-cost monocular camera therefore suffices, the algorithm is simpler, and the determined positions on all three axes are more accurate.
In one possible implementation, determining the second shooting position based on the first position of the target operation object in the tool coordinate system of the robot includes: acquiring size information of the target operation object; and determining the second shooting position based on the size information of the target operation object, the parameter information of the camera, and the first position.
The size information of the target operation object may be acquired when the first information mark is scanned. The parameter information of the camera includes the working distance of the camera.
According to this solution, the second shooting position is determined from the size information of the target operation object, the parameter information of the camera, and the first position, so that the target operation object is well presented in the local image.
In one possible implementation, determining, based on the local image, the second position of the target operation object in the tool coordinate system of the robot includes:
determining the relative pixel position of the center point of the target operation object and the image center point in the local image; and determining the second position of the target operation object in the tool coordinate system based on that relative pixel position and the mapping ratio of physical size to pixel size corresponding to the local image.
According to the solution of this embodiment, the determined second shooting position may by default place the camera center point directly facing the center point of the target operation object. If the center point of the target operation object is found not to coincide with the image center point in the local image, the previously determined first position of the target operation object in the tool coordinate system was inaccurate. In that case, the relative physical position of the center point of the target operation object and the image center point can be determined from their relative pixel position and the mapping ratio of physical size to pixel size corresponding to the local image; this relative physical position is an error value, from which the second position of the target operation object in the current tool coordinate system can be determined.
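Illustratively, the correction may be sketched as follows, assuming the camera center point was commanded to face the center point of the target operation object and the residual pixel offset in the local image is treated as the error value (names and values are illustrative):

```python
def second_position_xy(commanded_xy_mm, target_center_px, image_center_px,
                       local_mm_per_px):
    """Refine the X/Y position using the residual offset seen in the local image."""
    # Residual pixel offset between where the target actually appears and the
    # image center (the camera center's mapping point); axis alignment between
    # image and tool frames is assumed and would be calibrated in practice.
    err_x_mm = (target_center_px[0] - image_center_px[0]) * local_mm_per_px
    err_y_mm = (target_center_px[1] - image_center_px[1]) * local_mm_per_px
    # Apply the error value to the commanded (first) position.
    return (commanded_xy_mm[0] + err_x_mm, commanded_xy_mm[1] + err_y_mm)

# Example: the target appears 15 px right of and 8 px below the image center.
print(second_position_xy((120.0, 80.0), (975, 548), (960, 540), 0.05))
```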
In one possible implementation, before the second position of the target operation object in the tool coordinate system is determined based on the relative pixel position of the center point of the target operation object and the image center point and the mapping ratio of physical size to pixel size corresponding to the local image, the method further includes:
determining the mapping ratio of physical size to pixel size corresponding to the global image based on the pixel size of the reference mark in the global image and the physical size of the reference mark; determining the physical size of the target operation object based on its pixel size in the global image and the mapping ratio corresponding to the global image; and determining the mapping ratio of physical size to pixel size corresponding to the local image based on the physical size of the target operation object and its pixel size in the local image.
According to the solution shown in this embodiment, when the mapping ratio corresponding to the local image is calculated, the physical size of the target operation object computed from the global image is used rather than a stored theoretical physical size, which makes the determined mapping ratio more accurate. The theoretical physical size of the target operation object then need not be stored in a database.
Because manufacturing, installation, and use of the operation object, as well as the posture of the target operation object, may make its actual physical size differ from the theoretical one, a mapping ratio obtained directly from the theoretical physical size and the pixel size of the target operation object may be inaccurate.
The reference mark, in contrast, can be a simple planar mark, so its theoretical and actual physical sizes are close. The mapping ratio corresponding to the global image, calculated from the theoretical physical size and pixel size of the reference mark, is therefore accurate, and so is the physical size of the target operation object calculated from that mapping ratio and its pixel size in the global image. The mapping ratio obtained from this calculated physical size and the pixel size of the target operation object in the local image is consequently highly accurate.
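Illustratively, the three-step derivation of the local image's mapping ratio may be written out directly (the numeric values are illustrative; the reference mark's physical size is the stored standard value):

```python
def local_mapping_ratio(ref_physical_mm, ref_pixels_global,
                        target_pixels_global, target_pixels_local):
    # Step 1: scale of the global image, from the reference mark.
    global_mm_per_px = ref_physical_mm / ref_pixels_global
    # Step 2: measured (not theoretical) physical size of the target object.
    target_physical_mm = target_pixels_global * global_mm_per_px
    # Step 3: scale of the local image, from that measured size.
    return target_physical_mm / target_pixels_local

# Example: a 40 mm mark spans 200 px globally; the target spans 60 px globally
# and 600 px in the close-up local image.
print(local_mapping_ratio(40.0, 200.0, 60.0, 600.0))  # 0.02 mm per pixel
```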
In one possible implementation, the global image further includes the target operation object, and after the positions of the target operation object and the reference mark in the user coordinate system of the operation body are acquired, the method further includes:
determining the target operation object in the global image based on the positions of the target operation object and the reference mark in the user coordinate system of the operation body and the mapping ratio of physical size to pixel size corresponding to the global image; and, if it is identified that the target operation object can be operated by the robot, executing the process of determining the first position of the target operation object in the tool coordinate system of the robot.
According to the solution shown in this embodiment, after the positions of the target operation object and the reference mark in the user coordinate system are acquired, the target operation object is located in the global image and image recognition is applied to it to identify whether it can be operated on. If it is recognized that it cannot, the determination of its position can be stopped in time, since determining the position would be useless in that case; this saves computer processing resources.
In a second aspect, there is provided a robot comprising a processor and a memory;
the memory stores one or more programs configured to be executed by the processor for implementing the method of any of the first aspects.
In a third aspect, there is provided a robot system comprising a robot, a camera and a distance measuring device, the robot being the robot of the second aspect; the camera and the distance measuring device are fixed on a mechanical arm of the robot.
The camera may be a monocular camera, and the distance measuring device may be a laser rangefinder.
In a fourth aspect, an automation system is provided, comprising an operation body and the robot system of the third aspect; at least one operation object and a reference mark are distributed on the same side of the operation body.
The automation system provided in this embodiment may be an ODF machine room system, a server machine room system, a laboratory, or the like; the specific type of the automation system is not limited in this embodiment.
In one possible implementation, an information mark is further distributed on the same side of the operation body, in which one or more of the identifier of the operation body, the size information of the operation body, and the size information of the operation object are stored.
The information mark and the reference mark may be two separate marks, or the information mark may be integrated into the reference mark.
In a fifth aspect, there is provided a computer-readable storage medium comprising instructions which, when run on a robot, cause the robot to perform the method of any of the first aspects.
In a sixth aspect, a computer program product comprising instructions is provided which, when run on a robot, causes the robot to perform the method of any of the first aspects.
In a seventh aspect, a chip is provided, the chip comprising programmable logic circuits and/or program instructions, when the chip is run, for implementing the method of any of the first aspect above.
The technical solutions provided in the embodiments of the present application have the following beneficial effects:
The embodiments of the present application provide a method for determining the position of an operation object. A reference mark arranged on the operation body serves as a reference position, and a global image containing the reference mark, captured by a camera located on the robot, is acquired. The user coordinate system of the operation body and the tool coordinate system of the robot can thereby be associated through the camera, so that the position of the operation object in the user coordinate system can be converted into its position in the tool coordinate system, and the robot can move to the vicinity of the operation object according to that position and operate on it.
Drawings
FIG. 1 is a schematic illustration of an operating environment provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a global image provided by an embodiment of the present application;
FIG. 3 is a flowchart of a method for determining the position of an operation object according to an embodiment of the present application;
FIG. 4 is a flowchart of another method for determining the position of an operation object according to an embodiment of the present application;
FIG. 5 is a flowchart of a method for determining a first position of an operation object according to an embodiment of the present application;
FIG. 6 is a flowchart of another method for determining a first position of an operation object according to an embodiment of the present application;
FIG. 7 is a partial schematic view of a global image according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a local image according to an embodiment of the present application.
Description of the figures
1. robot; 2. camera; 3. operation body; 31. operation object; 32. reference mark; 33. information mark; 4. distance measuring device.
Detailed Description
Existing Optical Distribution Frame (ODF) machine rooms, because of their large installed base and years of non-standard service maintenance, suffer from extremely disordered optical fiber layouts, difficult operation and maintenance, and difficulty in locating faults and idle ports. Meanwhile, against the background of the Industry 5.0 era, the intelligent transformation of the ODF machine room is imperative. Realizing this intelligence requires at least three things:
1. Digitize the ODF machine room and construct a digital mirror of the physical machine room;
2. Automate the operation of the ODF machine room, with robots replacing manual work for daily operations (such as plugging and unplugging, cleaning, and replacing optical fibers);
3. Make the operation of the ODF machine room intelligent, with automatic fault diagnosis and remote service issuing, the robot performing appropriate adaptive operations according to its perception of the environment.
To realize automated operation of the ODF machine room, the robot needs visual guidance to perceive the complex machine-room environment, accurately calculate its motion parameters, and judge in real time whether a port is suitable for operation.
In view of these needs, the present application provides a method for determining the position of an operation object, which may be performed by a robot to realize automated operation of an ODF machine room. Note that the method is applicable not only to ODF machine rooms but to any scene with similar requirements, for example a server room or a laboratory bench.
Hereinafter, the hardware related to this embodiment is described by way of example. As shown in FIG. 1, the related hardware includes a robot system and an operation body 3.
The robot system includes a robot 1, a camera 2, and a distance measuring device 4. The camera 2 and the distance measuring device 4 are fixed on a mechanical arm of the robot 1, and the optical axis of the camera 2, the measuring axis of the distance measuring device 4, and one coordinate axis of the tool coordinate system are parallel. This embodiment does not limit the types of the camera 2 and the distance measuring device 4; for example, the camera 2 may be a monocular camera, and the distance measuring device 4 may be a laser rangefinder.
At least one operation object 31 and a reference mark 32 are arranged on the operation body 3; illustratively, as shown in FIG. 1, the operation body 3 carries the reference mark 32 and a plurality of identical operation objects 31. The reference mark 32 may be a planar mark, with one vertex serving as a reference point that coincides with the reference point of the operation body 3 (which may be the coordinate origin of the user coordinate system of the operation body 3). Because placement errors may prevent the two reference points from coinciding exactly, the error between them is required to be no more than 10% of the size of the operation object 31.
To ease identification of the reference mark 32, it may combine a color with a simple shape; for example, the color of its edge differs from the color of the frame of the operation body 3 and from the color of its own main body, the edge being red and the main body green. An information mark 33 may also be arranged on the operation body 3; it may be a two-dimensional code, a barcode, or the like, and may store one or more of the identifier of the operation body 3, the size information of the operation body 3, and the size information of the operation object 31. The information mark 33 may be integrated into the reference mark 32 or arranged separately from it; this embodiment does not limit this.
For the sake of understanding, some terms referred to in the embodiments of the present application are briefly described as follows:
tool Center Point (TCP): refers to a point on a tool (e.g., a jig, a glue gun, and a welding gun) located at the end of the robot.
Tool coordinate system: the robot is a coordinate system established by taking a tool center point of the robot as an origin, and is used for describing the position and the posture of a tool at the tail end of the robot. Before the robot operates the operation object, the robot first needs to determine the position of the operation object in the tool coordinate system. The tool coordinate system provided by the embodiment of the application comprises an X axis, a Y axis and a Z axis, wherein the X axis and the Y axis are parallel to an operation plane where the operation object 31 and the reference mark 32 are located, and the Z axis is perpendicular to the operation plane.
The user coordinate system: a concept complementary to the tool coordinate system, located on the workpiece (e.g., a fiber distribution frame) that the robot needs to operate on; it may also be called the workpiece coordinate system. The user coordinate system provided in this embodiment includes an X axis and a Y axis parallel to the operation plane and a Z axis perpendicular to it.
FIG. 3 is a flowchart of a method for determining the position of an operation object according to an embodiment of the present application. The method may be applied to the robot 1 and includes the following steps:
In step 301, a global image of the operation body 3 captured by the camera 2 located on the robot 1 is acquired.
The operation body 3 is the body on which an operation object 31 of the robot 1 is located; a reference mark 32 and at least one operation object 31 are distributed on the same side of the operation body 3, and the global image is an image that includes the reference mark 32.
In step 302, the positions of the target operation object and the reference mark 32 in the user coordinate system of the operation body 3 are acquired.
The target operation object is the operation object 31, among the at least one operation object 31, that needs to be operated on.
Step 303, determining a first position of the target operation object in the tool coordinate system of the robot 1 based on the positions of the target operation object and the reference mark 32 in the user coordinate system, the global image, and the relative physical positions of the tool center point of the robot 1 and the camera center point of the camera 2.
FIG. 4 is a flowchart of another method for determining the position of an operation object according to an embodiment of the present application. The method can be applied to the robot 1; its processing flow is described in more detail below with reference to a specific implementation and includes the following steps:
In step 401, a global image of the operation body 3 captured by the camera 2 located on the robot 1 is acquired.
The operation body 3 is the body on which an operation object 31 of the robot 1 is located; a reference mark 32 and at least one operation object 31 are distributed on the same side of the operation body 3, and the global image is an image that includes the reference mark 32.
When the global image is captured, the posture of the robot 1 may first be adjusted so that the attitude angles of the tool coordinate system of the robot 1 and the user coordinate system of the operation body 3 are consistent, ensuring that the global image is captured while the optical axis of the camera 2 is perpendicular to the operation body 3. When the attitude angles of the two coordinate systems are consistent, the X, Y, and Z axes of the user coordinate system are respectively parallel to the X, Y, and Z axes of the tool coordinate system, and the optical axis of the camera 2 is parallel to the Z axis.
Assume that the operation plane of the operation body 3 is the plane containing the X and Y axes, with the Z axis perpendicular to it. Because the global image is captured while the attitude angles of the tool coordinate system and the user coordinate system are consistent, the coordinate of the target operation object on the Z axis of the tool coordinate system need not be considered when its position is determined from the global image (that coordinate can be measured directly by the distance measuring device); only its coordinates on the X and Y axes need to be calculated. With fewer parameters involved, the position determined from the global image in the tool coordinate system is more accurate (accuracy can reach the 0.1 mm level), the algorithm is simpler, and processing is faster.
This embodiment does not particularly limit how the posture of the robot 1 is adjusted; for example, the posture may be adjusted using the distance measuring device 4 (laser rangefinder). The specific process of posture adjustment may be as follows:
First, a laser-return distance value is acquired in real time while the mechanical arm moves, and the movement stops when a prescribed distance to the operation body 3 is reached.
Then, three depth values (distances to the operation body 3) are obtained by scanning points on the operation body 3, the rotation angles of the tool-center-point coordinate system about the base coordinate axes are calculated from them, and the posture is corrected accordingly.
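Illustratively, one way of turning the three depth values into corrective rotation angles may be sketched as follows; the embodiment states only that rotation angles are calculated from three depth values, so the plane-fitting computation, the sampling pattern, and the sign conventions below are assumptions for illustration:

```python
import math

def posture_correction(p1, p2, p3):
    """Each point is (x_mm, y_mm, depth_mm) measured in the tool frame.

    Returns (rot_about_x_deg, rot_about_y_deg), the small corrective rotations
    that would make the tool Z axis perpendicular to the plane through the
    three scanned points. Sign conventions depend on the robot's frame
    definitions and are assumed here for illustration.
    """
    # Two in-plane vectors spanning the scanned surface.
    v1 = [p2[i] - p1[i] for i in range(3)]
    v2 = [p3[i] - p1[i] for i in range(3)]
    # Surface normal = v1 x v2 (cross product).
    nx = v1[1] * v2[2] - v1[2] * v2[1]
    ny = v1[2] * v2[0] - v1[0] * v2[2]
    nz = v1[0] * v2[1] - v1[1] * v2[0]
    # Tilt of the normal away from the tool Z axis, decomposed per axis.
    rot_x = math.degrees(math.atan2(ny, nz))
    rot_y = math.degrees(math.atan2(nx, nz))
    return rot_x, rot_y

# Example: the surface is 2 mm farther at +100 mm in X and 1 mm farther at +100 mm in Y.
print(posture_correction((0, 0, 300.0), (100, 0, 302.0), (0, 100, 301.0)))
```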
Besides adjusting the posture of the robot 1, a first shooting position may be determined so that the operation body 3 is well presented in the global image, and the global image is then captured at that position. The first shooting position includes a first shooting distance, the distance from the camera 2 to the operation body 3 when the global image is captured.
The embodiment does not limit how the first shooting distance is determined. For example, the first shooting distance may be chosen so that the operation body 3 is fully presented in the global image at a sufficiently large scale, and it may be calculated from the size information of the operation body 3 and the parameter information of the camera.
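Illustratively, such a calculation may be sketched under a simple pinhole-camera assumption in which the camera's parameter information includes horizontal and vertical view angles (the parameter names and the margin factor are assumptions for illustration):

```python
import math

def first_shooting_distance(body_w_mm, body_h_mm, fov_h_deg, fov_v_deg, margin=1.1):
    """Smallest camera-to-body distance at which the whole operation surface
    fits in the frame, with a safety margin around the edges."""
    d_w = (body_w_mm * margin / 2) / math.tan(math.radians(fov_h_deg) / 2)
    d_h = (body_h_mm * margin / 2) / math.tan(math.radians(fov_v_deg) / 2)
    return max(d_w, d_h)  # must satisfy both the width and the height constraint

# Example: a 600 mm x 2000 mm frame face and a camera with a 60 x 45 degree field of view.
print(round(first_shooting_distance(600, 2000, 60.0, 45.0)))  # about 2656 mm
```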
In one specific implementation, the first shooting distance may be a predetermined value: an empirical value, or a value calculated in advance from the size information of the operation body 3. For example, when the robot 1 is applied to certain fixed scenes, the size information of the operation body 3 and the operation object 31 is already known, and the first shooting distance can be calculated directly from this size information and the operating parameter information of the camera 2. The robot 1 can then capture every global image at this first shooting distance.
In another specific implementation, the robot 1 may acquire the size information of the operation body 3 before each global image is captured, determine the first shooting position from that size information and the parameter information of the camera 2, and control the camera 2 to move to the first shooting position and capture the global image there.
In addition, the first shooting position may further include the coordinate values of the camera 2 on the X and Y axes, which can be determined once the camera 2 recognizes that the operation body 3 is fully presented in the global image. Because these coordinate values do not require high precision, a variety of methods can ensure that the operation body 3 is fully presented in the global image; the details are not repeated in this embodiment.
This embodiment does not specifically limit how the size information of the operation body 3 is acquired.
In one specific implementation, a first information mark is further distributed on the same side of the operation body 3; it is a mark in which the size information of the operation body 3 is stored, and the global image also includes it. The robot 1 can identify the first information mark and read the stored size information of the operation body 3. The first information mark and the reference mark 32 may be arranged separately on the operation body 3, or the first information mark may be integrated into the reference mark 32; this embodiment does not specifically limit this.
In step 402, the positions of the target operation object and the reference mark 32 in the user coordinate system of the operation body 3 are acquired.
The target operation object is the operation object 31, among the at least one operation object 31, that needs to be operated on.
The positions of the target operation object and the reference mark 32 in the user coordinate system of the operation body 3 may also be understood as the relative positions of the target operation object and the reference mark 32.
In some cases, the position of the reference mark 32 is taken as a reference position (e.g., the coordinate origin of the user coordinate system); acquiring the position of the target operation object in the user coordinate system should then be regarded as acquiring the positions of both the target operation object and the reference mark 32 in the user coordinate system. These positions are physical positions.
The embodiment of the present application is not limited to the method for acquiring the positions of the target operation object and the reference mark 32 in the user coordinate system. In a specific implementation manner, the positions of the target operation object and the reference mark 32 in the user coordinate system may be acquired from an upper computer.
For example, the process of acquiring the positions of the target operation object and the reference mark 32 in the user coordinate system from the upper computer may be as follows. The robot 1 first acquires the identifier of the operation body 3 and then sends to the upper computer an operation instruction acquisition request carrying that identifier. After receiving the request, the upper computer sends the robot 1 an operation instruction, based on the identifier of the operation body 3, that carries the positions of the target operation object and the reference mark 32 in the user coordinate system. The robot receives the operation instruction and thereby obtains the positions of the reference mark 32 and the target operation object in the user coordinate system.
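Illustratively, the exchange may be represented with structures like the following; the field names and example values are assumptions for illustration, since the embodiment does not specify a message format:

```python
from dataclasses import dataclass

@dataclass
class OperationInstructionRequest:
    body_id: str  # identifier of the operation body, e.g. read from an information mark

@dataclass
class OperationInstruction:
    body_id: str
    operation_type: str         # e.g. "plug", "unplug", "clean"
    target_in_user_mm: tuple    # position of the target operation object (user frame)
    ref_mark_in_user_mm: tuple  # position of the reference mark (user frame)

# Example exchange: the robot asks, the upper computer answers.
req = OperationInstructionRequest(body_id="ODF-ROW3-FRAME7")
ins = OperationInstruction(body_id=req.body_id, operation_type="unplug",
                           target_in_user_mm=(120.0, 80.0),
                           ref_mark_in_user_mm=(0.0, 0.0))
```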
It should be noted that the target operation object, and the type of operation to be performed on it, may also be determined when the positions of the target operation object and the reference mark 32 in the user coordinate system are acquired. Before acquiring those positions, the robot 1 has determined neither the target operation object nor the operation type; this information can be carried in the operation instruction from the upper computer.
The embodiment of the present application is not limited to the manner of obtaining the identifier of the operation body 3.
In one specific implementation, a second information mark is further distributed on the same side of the operation body 3; it is a mark in which the identifier of the operation body 3 is stored. The robot 1 can acquire the identifier of the operation body 3 by identifying the second information mark, either in the global image or while the camera 2 scans before the global image is captured. The second information mark may be the same mark as the first information mark (e.g., the information mark 33 in FIG. 2), or a mark separate from the first information mark.
Of course, the above ways of acquiring the positions of the target operation object and the reference mark 32 in the user coordinate system and of acquiring the identifier of the operation body 3 are merely exemplary and do not limit the embodiments of the present application. In practice, this information may also be obtained in other ways.
Step 403, determining the target operation object in the global image based on the positions of the target operation object and the reference mark 32 in the user coordinate system of the operation body 3 and the mapping ratio of physical size to pixel size corresponding to the global image, and identifying whether the target operation object can be operated by the robot 1.
The global image includes both the target operation object and the reference mark 32. The mapping ratio of physical size to pixel size corresponding to the global image can be calculated from the physical size and pixel size of the reference mark 32.
After the positions of the target operation object and the reference mark 32 in the user coordinate system are acquired and the global image is obtained by shooting, the relative pixel positions of the target operation object and the reference mark 32 in the global image can be calculated and obtained according to the mapping proportion and the relative physical positions of the target operation object and the reference mark 32.
Then, based on the pixel position of the reference mark 32 in the global image and the relative pixel position, the operation object 31 satisfying the relative pixel position is found in the global image and determined as the target operation object.
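As an illustration, the lookup can be sketched as follows (a hypothetical Python helper; the names and the assumption that the user coordinate axes are aligned with the image axes are ours, not fixed by the embodiment):

```python
def predict_target_pixel(ref_px, ref_user, target_user, ratio_x, ratio_y):
    """Predict the pixel position of the target operation object in the
    global image from its user-coordinate offset to the reference mark 32.

    ref_px      -- (u, v) pixel position of the reference mark in the global image
    ref_user    -- (x, y) position of the reference mark in the user coordinate system
    target_user -- (x, y) position of the target operation object in the user coordinate system
    ratio_x/y   -- physical length per pixel along each image axis (mapping ratio)
    """
    # Relative physical position of the target with respect to the reference mark.
    dx = target_user[0] - ref_user[0]
    dy = target_user[1] - ref_user[1]
    # Convert the physical offset to a pixel offset via the mapping ratio.
    return (ref_px[0] + dx / ratio_x, ref_px[1] + dy / ratio_y)
```

The operation object 31 detected closest to the returned pixel position would then be taken as the target operation object.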
Then, an image recognition algorithm is used to recognize whether the target operation object can be operated by the robot, for example, whether the target operation object is occluded or damaged. If it is recognized that the target operation object can be operated by the robot 1, the process proceeds to step 404; otherwise, the subsequent processing may be skipped to save computing resources, because in this case, even if the position of the target operation object in the tool coordinate system were determined, the subsequent operation could not be performed.
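The embodiment leaves the recognition algorithm open; purely as an illustration, a template-matching check such as the following sketch could serve as the operability test (OpenCV; all names are hypothetical):

```python
import cv2

def target_operable(global_img, template, center_px, win=120, threshold=0.7):
    """Hypothetical operability check: match a template of the intact target
    around its predicted pixel position; a low score suggests the target is
    occluded or damaged. The search window must be larger than the template."""
    x, y = int(center_px[0]), int(center_px[1])
    roi = global_img[max(0, y - win):y + win, max(0, x - win):x + win]
    scores = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
    return scores.max() >= threshold
```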
Step 404, determining a first position of the target operation object in the tool coordinate system of the robot 1 based on the positions of the target operation object and the reference mark 32 in the user coordinate system, the global image, and the relative physical positions of the tool center point of the robot 1 and the camera center point of the camera 2.
The relative physical position of the tool center point of the robot 1 and the camera center point of the camera 2 can be determined once the camera 2 has been mounted, and can be stored in the robot 1.
The determination process for determining the first position of the target operation object in the tool coordinate system of the robot 1 may be as follows:
first, the pixel position and pixel size of the reference marker 32 are determined in the global image.
Then, based on the pixel size of the reference mark 32 and the physical size of the reference mark 32, the mapping ratio of the physical size and the pixel size is determined.
Finally, a first position of the target operation object in the tool coordinate system is determined based on the positions of the target operation object and the reference mark 32 in the user coordinate system, the pixel positions of the reference mark 32 in the global image, the mapping ratio, and the relative physical positions of the tool center point and the camera center point.
For a better understanding of the present application, the determination of the first position of the target operation object in the tool coordinate system is described below in more detail by way of example:
one possible implementation of determining the first position of the target operational object in the tool coordinate system is provided, as shown in fig. 5.
In step 404a, the relative physical positions of the camera center point and the reference mark 32 are determined based on the image center point of the global image, the pixel position of the reference mark 32 in the global image, and the mapping ratio.
The image center point is a mapping point of the camera center point in the global image.
In implementation, based on the image center point of the global image and the pixel location of the reference marker 32 in the global image, the relative pixel location of the image center point and the reference marker 32 can be determined. Then, according to the mapping ratio, the relative pixel position is mapped to the relative physical position, that is, the relative physical position of the camera center point and the reference mark 32 is obtained.
Step 404b determines the relative physical location of the tool center point and the reference marker 32 based on the relative physical location of the camera center point and the reference marker 32 and the relative physical location of the tool center point and the camera center point.
Step 404c determines a first position of the target object of operation in the tool coordinate system based on the relative physical positions of the tool center point and the reference marker 32 and the positions of the target object of operation and the reference marker 32 in the user coordinate system.
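A minimal sketch of steps 404a-404c, assuming (as in claim 2) that the tool coordinate axes have been aligned with the user coordinate axes so that relative positions simply add (all names are hypothetical):

```python
def first_position_via_reference(image_center_px, ref_px, ref_user, target_user,
                                 ratio_x, ratio_y, tcp_to_camera):
    """Chain relative positions through the reference mark 32.

    image_center_px -- pixel position of the global image center (the mapping
                       point of the camera center point)
    tcp_to_camera   -- (dx, dy) physical offset from the tool center point to
                       the camera center point, known from mounting
    Returns the first (x, y) position of the target in the tool coordinate system.
    """
    # Step 404a: camera center point -> reference mark, pixels scaled to physical units.
    cam_to_ref = ((ref_px[0] - image_center_px[0]) * ratio_x,
                  (ref_px[1] - image_center_px[1]) * ratio_y)
    # Step 404b: tool center point -> reference mark.
    tcp_to_ref = (tcp_to_camera[0] + cam_to_ref[0],
                  tcp_to_camera[1] + cam_to_ref[1])
    # Step 404c: tool center point -> target, adding the target's
    # user-coordinate offset from the reference mark.
    return (tcp_to_ref[0] + (target_user[0] - ref_user[0]),
            tcp_to_ref[1] + (target_user[1] - ref_user[1]))
```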
As shown in fig. 6, another possible implementation of determining the first position of the target operational object in the tool coordinate system is provided.
Step 404A, determining the pixel position of the target operation object in the global image based on the positions of the target operation object and the reference mark 32 in the user coordinate system, the pixel position of the reference mark 32 in the global image, and the mapping scale.
And step 404B, determining the relative physical positions of the camera center point and the target operation object based on the image center point of the global image, the pixel position of the target operation object in the global image and the mapping proportion.
The image center point is a mapping point of the camera center point in the global image.
Step 404C, determining a first position of the target operation object in the tool coordinate system based on the relative physical positions of the camera center point and the target operation object and the relative physical positions of the tool center point and the camera center point.
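Under the same assumptions, steps 404A-404C can be sketched analogously; both routes yield the same result, the difference being whether the chaining passes through the reference mark or through the target's predicted pixel position:

```python
def first_position_via_target_pixel(image_center_px, ref_px, ref_user, target_user,
                                    ratio_x, ratio_y, tcp_to_camera):
    # Step 404A: predict the target's pixel position from the reference mark.
    target_px = (ref_px[0] + (target_user[0] - ref_user[0]) / ratio_x,
                 ref_px[1] + (target_user[1] - ref_user[1]) / ratio_y)
    # Step 404B: camera center point -> target, pixels scaled to physical units.
    cam_to_target = ((target_px[0] - image_center_px[0]) * ratio_x,
                     (target_px[1] - image_center_px[1]) * ratio_y)
    # Step 404C: tool center point -> target.
    return (tcp_to_camera[0] + cam_to_target[0],
            tcp_to_camera[1] + cam_to_target[1])
```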
It should be noted that the above two implementations are not exhaustive. Those skilled in the art will understand that other implementations may also be adopted to determine the first position of the target operation object in the tool coordinate system based on the positions of the target operation object and the reference mark 32 in the user coordinate system, the pixel position of the reference mark 32 in the global image, the mapping ratio, and the relative physical positions of the tool center point and the camera center point.
It should also be added that the first position includes the positions of the target operation object on the X axis, the Y axis and the Z axis. The positions on the X axis and the Y axis can be determined based on the positions of the target operation object and the reference mark in the user coordinate system, the global image, and the relative physical positions of the tool center point of the robot and the camera center point of the camera, while the position of the target operation object on the Z axis of the tool coordinate system can be measured by the distance measuring device 4.
For example, in a specific implementation, when the distance measuring device 4 measures the position of the target operation object on the Z axis, the robot 1 drives the distance measuring device 4 to a position directly facing the target operation object and then directly measures the distance to the target operation object (i.e., the position of the target operation object on the Z axis). In another specific implementation, the distance measuring device 4 may measure the distance to the operation plane of the operation body 3, and the position of the target operation object on the Z axis is then determined based on a pre-stored distance of the target operation object relative to the operation plane (for example, the target operation object may protrude a certain length from the operation plane).
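Both ranging modes can be summarized in a small sketch; the sign convention for the protrusion is an assumption, since the embodiment only states that the target may protrude from the operation plane:

```python
def z_position(measured_distance_mm, protrusion_mm=None):
    """Z-axis position of the target in the tool coordinate system.

    measured_distance_mm -- reading of the distance measuring device 4, either
                            directly to the target (mode 1) or to the operation
                            plane (mode 2)
    protrusion_mm        -- pre-stored protrusion of the target from the
                            operation plane; None selects mode 1
    """
    if protrusion_mm is None:
        return measured_distance_mm               # mode 1: direct measurement
    return measured_distance_mm - protrusion_mm   # mode 2: correct by the protrusion
```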
After the first position of the target operation object in the tool coordinate system has been determined, in a specific implementation the working position may be determined, and the robot 1 may be controlled to move to the working position and operate on the target operation object there.
In another specific implementation, in scenarios with very high accuracy requirements, for example an accuracy requirement of 0.1 mm, the error of the first position may be considered too large, and steps 405 and 406 are then performed to determine a more accurate position of the target operation object in the tool coordinate system.
Step 405, determining a second shooting position based on the first position of the target operation object in the tool coordinate system of the robot 1, controlling the camera 2 to move to the second shooting position, and shooting a local image containing the target operation object at the second shooting position.
In the embodiment of the present application, at the second shooting position, the coordinates of the camera center point on the X axis and the Y axis may be the same as the coordinates of the center point of the target operation object on the X axis and the Y axis, where the X axis and the Y axis are parallel to the operation plane of the operation body 3 and the Z axis is perpendicular to the operation plane of the operation body 3. That is, it is ensured as far as possible that the camera 2 directly faces the target operation object when shooting the local image.
It should be noted that setting the coordinates of the camera center point on the X axis and the Y axis at the second shooting position equal to the coordinates determined for the center point of the target operation object in step 405 does not mean that their actual coordinate values coincide, because the determined coordinates of the center point of the target operation object on the X axis and the Y axis may contain errors and are not the true values.
In addition, the determined second shooting position includes not only the coordinates on the X axis and the Y axis but also a coordinate value on the Z axis (also referred to as the second shooting distance). For example, the second shooting distance may be calculated from the size information of the target operation object and the parameter information of the camera 2, so as to ensure that the target operation object appears in the local image at a sufficiently large scale.
In a specific implementation, the second shooting distance may be an empirical or predetermined value, which is stored in advance and retrieved when the second shooting position is determined.
In another specific implementation, the second shooting distance may also be calculated in real time, as follows: the size information of the target operation object is acquired, and the second shooting position is determined based on the size information of the target operation object, the parameter information of the camera 2, and the first position.
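The embodiment does not fix a formula for this real-time calculation; one plausible sketch uses the pinhole camera model, choosing the distance so that the target spans a given fraction of the sensor (the parameter names and the fill ratio are our assumptions):

```python
def second_shooting_distance(target_size_mm, focal_length_mm, sensor_width_mm,
                             fill_ratio=0.8):
    """By similar triangles, an object of width X at distance d projects to an
    image of width f*X/d on the sensor; solving f*X/d = fill_ratio*sensor_width
    for d gives the working distance at which the target fills the desired
    fraction of the frame."""
    return focal_length_mm * target_size_mm / (fill_ratio * sensor_width_mm)
```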
The size information of the target operation object may be sent by the upper computer, or may be stored in an information flag, which is not limited in the embodiment of the present application. When the size information of the target operation object is stored in an information flag, that information flag may be the same information flag as the first information flag and the second information flag, and may be integrated into the reference mark 32.
Step 406, determining a second position of the target operation object in the tool coordinate system of the robot 1 based on the local image.
The processing procedure of step 406 is described below, taking as an example that the coordinate values of the camera center point on the X axis and the Y axis at the second shooting position are the same as the coordinate values of the center point of the target operation object on the X axis and the Y axis (as determined in step 405).
First, the relative pixel position of the center point of the target operation object and the image center point is determined in the local image.
Then, the second position of the target operation object in the tool coordinate system is determined based on this relative pixel position and the mapping ratio of the physical size and the pixel size corresponding to the local image.
If the center point of the target operation object coincides with the image center point, the camera center point directly faces the center point of the target operation object, and the previously determined first position has no error. If they do not coincide, the camera center point deviates from the target operation object, the previously determined first position contains an error, and the relative physical position of the center point of the target operation object and the image center point is that error value.
For example, the relative physical position of the center point of the target operation object and the camera center point may be calculated by the following formula:

$$\delta x = \Delta'_x \left( x_{o'} - x_o \right), \qquad \delta y = \Delta'_y \left( y_{o'} - y_o \right)$$

where, as shown in fig. 8, $O'(x_{o'}, y_{o'})$ is the pixel position of the center point of the target operation object in the local image, $O(x_o, y_o)$ is the image center point of the local image, $\delta x$ and $\delta y$ are the determined relative physical positions, and $\Delta'_x$ and $\Delta'_y$ are the mapping ratios of the physical size and the pixel size corresponding to the local image.
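In code, the correction and the resulting second position could look like the following sketch, using the local mapping ratio determined below (hypothetical names; the sign of the correction depends on the camera mounting and is an assumption):

```python
def second_position(first_xy, target_px_local, image_center_px_local,
                    ratio_lx, ratio_ly):
    """Apply the formula above: the offset of the target's center point from
    the image center point in the local image, scaled by the local mapping
    ratio, is the X/Y error of the first position."""
    delta_x = (target_px_local[0] - image_center_px_local[0]) * ratio_lx
    delta_y = (target_px_local[1] - image_center_px_local[1]) * ratio_ly
    return (first_xy[0] + delta_x, first_xy[1] + delta_y)
```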
The embodiment of the present application does not limit the method for determining the mapping ratio between the physical size and the pixel size corresponding to the local image. In a specific implementation, the determination process may be as follows:
first, based on the pixel size of the reference mark 32 in the global image and the physical size of the reference mark 32, the mapping ratio of the physical size and the pixel size corresponding to the global image is determined.
For example, the mapping ratio corresponding to the global image may be determined according to the following formula:

$$\Delta_x = \frac{X_o}{\left| x_b - x_a \right|}, \qquad \Delta_y = \frac{Y_o}{\left| y_c - y_a \right|}$$

where $X_o$ and $Y_o$ are the known physical dimensions of the reference mark 32, and, as shown in fig. 7, $a(x_a, y_a)$, $b(x_b, y_b)$, $c(x_c, y_c)$ are the pixel coordinates of the reference mark 32 in the global image, and $\Delta_x$ and $\Delta_y$ are the mapping ratios of the physical size and the pixel size corresponding to the global image.
Then, the physical size of the target operation object is determined based on the pixel size of the target operation object in the global image and the mapping ratio of the physical size and the pixel size corresponding to the global image:

$$X_p = \Delta_x \left| x_{b'} - x_{a'} \right|, \qquad Y_p = \Delta_y \left| y_{c'} - y_{a'} \right|$$

where $X_p$ and $Y_p$ are the calculated physical size of the target operation object, which may differ from the theoretical physical size stored in the database, and $a'(x_{a'}, y_{a'})$, $b'(x_{b'}, y_{b'})$, $c'(x_{c'}, y_{c'})$ are the pixel coordinates of the target operation object in the global image.
Finally, the mapping ratio of the physical size and the pixel size corresponding to the local image is determined based on the physical size of the target operation object and the pixel size of the target operation object in the local image:

$$\Delta'_x = \frac{X_p}{\left| x'_{b'} - x'_{a'} \right|}, \qquad \Delta'_y = \frac{Y_p}{\left| y'_{c'} - y'_{a'} \right|}$$

where $\Delta'_x$ and $\Delta'_y$ are the mapping ratios of the physical size and the pixel size corresponding to the local image, and $a'(x'_{a'}, y'_{a'})$, $b'(x'_{b'}, y'_{b'})$, $c'(x'_{c'}, y'_{c'})$ are the pixel coordinates of the target operation object in the local image.
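The three formulas chain directly; a sketch, assuming each `*_corners` argument holds the three detected corner pixels ((x_a, y_a), (x_b, y_b), (x_c, y_c)) with b displaced from a along X and c displaced from a along Y, as in the reconstruction above:

```python
def local_mapping_ratio(ref_size_mm, ref_corners_global,
                        target_corners_global, target_corners_local):
    """Chain: global mapping ratio -> measured physical size of the target ->
    local mapping ratio."""
    (xa, ya), (xb, yb), (xc, yc) = ref_corners_global
    # Global-image mapping ratio from the known physical size of the reference mark 32.
    gx = ref_size_mm[0] / abs(xb - xa)
    gy = ref_size_mm[1] / abs(yc - ya)
    (xa2, ya2), (xb2, yb2), (xc2, yc2) = target_corners_global
    # Physical size of the target as measured in the global image.
    x_p = gx * abs(xb2 - xa2)
    y_p = gy * abs(yc2 - ya2)
    (xa3, ya3), (xb3, yb3), (xc3, yc3) = target_corners_local
    # Local-image mapping ratio from the measured physical size.
    return x_p / abs(xb3 - xa3), y_p / abs(yc3 - ya3)
```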
In another possible implementation, the pre-stored theoretical physical size of the target operation object may be divided by the pixel size of the target operation object in the local image to obtain the mapping ratio of the physical size and the pixel size corresponding to the local image.
It should be noted that the second position includes the positions of the target operation object on the X axis, the Y axis and the Z axis. The positions on the X axis and the Y axis can be determined based on the local image as described above, while the position of the target operation object on the Z axis of the tool coordinate system can be measured by the distance measuring device 4.
For example, in a specific implementation, when the distance measuring device 4 measures the position of the target operation object on the Z axis, the robot 1 drives the distance measuring device 4 to a position directly facing the target operation object and then directly measures the distance to the target operation object (i.e., the position of the target operation object on the Z axis).
It should be noted that steps 403, 405 and 406 are optional. Those skilled in the art will understand that when the determined first position of the target operation object in the tool coordinate system meets the accuracy requirement, the subsequent steps 405 and 406 may be omitted. In some cases, the recognition of whether the target operation object can be operated by the robot, i.e., the process of step 403, may also be skipped. It should also be noted that the order of step 401 and step 402 may be interchanged.
In summary, the method for determining the position of the operation object provided by the embodiment of the present application has at least the following beneficial effects:
The relative position from the tool center point to the reference mark is obtained by vision, which is efficient. With a high-definition monocular camera, micron-level guiding accuracy can be obtained under a specific pose; moreover, a monocular camera is inexpensive and the corresponding algorithm is simple. The target operation object is shot locally and its accurate position is calculated from the local image, which improves the operation accuracy, and the complexity of the method combining local shooting with prior knowledge is lower than that of a global-shooting algorithm.
The following describes embodiments of the present application with reference to specific scenarios.
The method for determining the position of an operation object provided by the embodiment of the present application can be applied to transport networks and access network machine rooms of operators. For the various machine frames of such a machine room, including Lucent Connector (LC) head oblique-insertion frames, Square Connector (SC) head straight-insertion frames, SC head oblique-insertion frames, and the like, as long as a plane capable of carrying the reference mark 32 (for example, a plane of 35 mm × 35 mm) exists on the operation plane of the machine frame or on an adjacent parallel operation plane (such as a rack), and template prior knowledge of the machine frame is available, the method provided by the embodiment of the present application can accurately guide movable operation equipment such as a mechanical arm to the target position with an accuracy at the μm level.
The following describes a method for determining a position of an operation object according to an embodiment of the present application:
(1) the robot 1 is powered on and drives the camera 2 to move close to the reference mark 32 of the machine frame, stopping within a range of 200 mm-300 mm from the machine frame, and the camera 2 shoots a global image;
(2) the machine frame number, the physical distance corresponding to adjacent pixels, and the pixel position of the reference mark 32 are acquired by recognizing the reference mark 32;
(3) the robot interacts with the upper computer using the machine frame number to acquire the relative physical position of the port to be operated and the reference mark 32, and acquires the specific operation type, such as optical fiber plugging and unplugging, port detection, or port cleaning;
(4) whether the port is occluded is judged by image recognition; if so, the program ends, and if not, the next step of processing is performed;
(5) the first position is determined from the relative physical position of the port and the reference mark 32, the position difference from the camera center point to the reference mark 32, and the relative position of the camera center point and the tool center point, and the robot moves to the vicinity of the port to be operated according to the first position;
(6) a local image is shot (within a range of 180 mm-220 mm from the port), and the second position of the port in the tool coordinate system is determined;
(7) the distance from the tool center point to the port is measured by the distance measuring device 4, and finally the robot 1 moves to the target operation position and completes the operation action.
In addition, the method for determining the position of an operation object provided by the embodiment of the present application can also be used for guidance and positioning of high-precision automatic operations on a laboratory bench, including adding reagents to kits one by one, replacing reagent tubes, plugging and unplugging ports of experimental instruments, and the like.
A rectangular operation space is planned on the experiment table for each experimental operation, and a reference mark 32 is arranged at a marked position in the operation space. The two-dimensional code in the reference mark 32 stores the operation space number, the operation space scale, the position coordinates of the reference mark 32 in the operation space, the scale of the reference mark 32, the scale of the operation object (such as a kit), and template prior knowledge of the operation object.
The following describes a scheme for determining the position of an operation object according to an embodiment of the present application:
(1) the robot 1 is powered on and moves to the vicinity of the reference mark 32, adjusts the shooting distance to recognize the information flag in the reference mark 32, and obtains the operation space scale, the reference mark scale and the operation object scale; the optimal shooting distance is calculated from these three scales and the robot moves accordingly, so that the operation body appears completely and at a suitable proportion in the shot image; meanwhile, the focal length of the camera is set according to the shooting distance to ensure that a high-definition image is shot;
(2) the template prior knowledge of the operation object and the pixel position corresponding to the reference mark 32 are acquired, and the physical distance corresponding to adjacent pixels is calculated;
(3) the robot interacts with the upper computer using the operation space number, acquires the coordinate value of the operation object to be operated relative to the reference mark 32, and acquires the specific instruction corresponding to the operation object;
(4) the robot 1 moves to the vicinity of the operation object according to the operation object corresponding to the operation instruction, the position difference from the camera center point to the reference mark 32, the position from the camera center point to the tool center point, and other information, calculates the optimal shooting distance for the front end of the port in combination with the scale of the operation object, and moves to shoot the port at that distance;
(5) the first position of the operation object is refined with precision compensation according to the shot local image;
(6) finally, the robot 1 moves to the working position, completes the operation action, and returns to the initial position.
The embodiment of the application also provides a robot, which comprises a processor and a memory. The memory stores one or more programs configured to be executed by the processor for implementing the method for determining the position of the operation object provided by the embodiment of the present application.
The embodiment of the application also provides a robot system, and as shown in fig. 1, the robot system comprises a robot 1, a camera 2 and a distance measuring device 4. The camera 2 and the distance measuring device 4 are fixed to the arm of the robot 1.
The embodiment of the present application further provides an automation system, as shown in fig. 1, the automation system includes an operation subject 3 and the robot system. At least one operation object 31 and a reference mark 32 are distributed on the same side of the operation body 3.
In a specific implementation manner, information flags are further distributed on the same side of the operation body 3, and the information flags store therein one or more of the identifier of the operation body 3, the size information of the operation body 3, and the size information of the operation object 31.
The embodiment of the present application further provides a computer-readable storage medium. The computer-readable storage medium includes instructions which, when run on a robot, cause the robot to execute the method for determining the position of an operation object provided in the embodiment of the present application.
The embodiment of the application also provides a computer program product containing instructions, and when the computer program product runs on a robot, the robot executes the method for determining the position of the operation object provided by the embodiment of the application.
The embodiment of the present application further provides a chip, where the chip includes a programmable logic circuit and/or program instructions, and when the chip runs, the chip is configured to implement the method for determining the position of the operation object provided in the embodiment of the present application.
In the embodiments of the present application, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The above description is only an example of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the principles of the present application should be included in the scope of the present application.

Claims (19)

1. A method for determining the position of an operation object, characterized in that the method is applied in a robot (1), the method comprising:
acquiring a global image of an operation body (3) shot by a camera (2) located on the robot (1), wherein the operation body (3) is a body on which an operation object (31) of the robot (1) is located, a reference mark (32) and at least one operation object (31) are distributed on the same side of the operation body (3), and the global image is an image comprising the reference mark (32);
acquiring positions of a target operation object and the reference mark (32) in a user coordinate system of the operation body (3), wherein the target operation object is an operation object (31) to be operated among the at least one operation object (31);
determining a first position of the target operation object in the tool coordinate system of the robot (1) based on the positions of the target operation object and the reference markers (32) in the user coordinate system, the global image, and the relative physical positions of the tool center point of the robot (1) and the camera center point of the camera (2).
2. The method according to claim 1, characterized in that said acquiring a global image of an operating body (3) taken by a camera (2) located on said robot (1) comprises:
adjusting the pose of the robot (1) such that the pose angle of the tool coordinate system of the robot (1) coincides with the pose angle of the user coordinate system of the operating body (3);
controlling the camera (2) to take a global image of the operating body (3).
3. The method according to claim 2, wherein the tool coordinate system comprises an X-axis, a Y-axis and a Z-axis, the X-axis, the Y-axis being parallel to an operation plane in which the operation object (31) and the reference marker (32) are located, the Z-axis being perpendicular to the operation plane, the determining the first position of the target operation object in the tool coordinate system of the robot (1) based on the positions of the target operation object and the reference marker (32) in the user coordinate system, the global image, and the relative physical positions of the tool center point of the robot (1) and the camera center point of the camera (2) comprising:
determining the position of the target operation object in the X-axis and the Y-axis based on the positions of the target operation object and the reference marker (32) in the user coordinate system, the global image, and the relative physical positions of the tool center point of the robot (1) and the camera center point of the camera (2);
and acquiring the position of the target operation object on the Z axis measured by a distance measuring device (4) on the robot (1).
4. The method according to claim 2 or 3, characterized in that said controlling said camera (2) to take a global image of said operating body (3) comprises:
acquiring size information of the operating body (3);
determining a first photographing position based on the size information of the operating body (3) and the parameter information of the camera (2);
controlling the camera (2) to move to the first photographing position and photographing the global image at the first photographing position.
5. The method according to claim 4, wherein a first information flag is further distributed on the same side of the operation body (3), the first information flag is a flag storing size information of the operation body (3), and the obtaining the size information of the operation body (3) comprises:
and identifying the first information mark and acquiring the size information of the operation body (3).
6. The method according to any one of claims 1-5, wherein the obtaining of the positions of the target operation object and the reference marker (32) in the user coordinate system of the operation body (3) comprises:
acquiring an identification of the operating body (3);
sending an operation instruction acquisition request carrying the identifier of the operation main body (3) to an upper computer;
and receiving an operation instruction sent by the upper computer, wherein the operation instruction carries the positions of the target operation object and the reference mark (32) in the user coordinate system.
7. The method according to claim 6, wherein a second information flag is further distributed on the same side of the operation main body (3), the second information flag is a flag storing an identifier of the operation main body (3), and the obtaining of the identifier of the operation main body (3) comprises:
and identifying the second information mark and acquiring the identification of the operation main body (3).
8. The method according to any one of claims 1-7, wherein said determining a first position of the target operational object in the tool coordinate system of the robot (1) based on the positions of the target operational object and the reference marker (32) in the user coordinate system, the global image, and the relative physical positions of the tool center point of the robot (1) and the camera center point of the camera (2) comprises:
-determining the pixel position and pixel size of the reference marker (32) in the global image;
determining a mapping ratio of a physical size and a pixel size based on the pixel size of the reference mark (32) and the physical size of the reference mark (32);
determining a first position of the target operation object in the tool coordinate system based on the positions of the target operation object and the reference marker (32) in the user coordinate system, the pixel positions of the reference marker (32) in the global image, the mapping scale, and the relative physical positions of the tool center point and the camera center point.
9. The method of claim 8, wherein determining the first position of the target object of operation in the tool coordinate system based on the positions of the target object of operation and the reference marker (32) in the user coordinate system, the pixel positions of the reference marker (32) in the global image, the mapping scale, and the relative physical positions of the tool center point and the camera center point comprises:
determining a relative physical location of the camera center point and the reference marker (32) based on an image center point of the global image, a pixel location of the reference marker (32) in the global image, and the mapping scale, wherein the image center point is a mapping point of the camera center point in the global image;
determining a relative physical location of the tool center point and the reference marker (32) based on the relative physical location of the camera center point and the reference marker (32), and the relative physical location of the tool center point and the camera center point;
determining a first position of the target operational object in the tool coordinate system based on the relative physical positions of the tool center point and the reference marker (32), and the positions of the target operational object and the reference marker (32) in the user coordinate system.
10. The method according to any of the claims 1-9, wherein after said determining the first position of the target operational object in the tool coordinate system of the robot (1), the method further comprises:
determining a second shooting position based on a first position of the target operation object in a tool coordinate system of the robot (1);
controlling the camera (2) to move to the second shooting position, and shooting a partial image containing the target operation object at the second shooting position;
based on the local image, a second position of the target operation object in a tool coordinate system of the robot (1) is determined.
11. The method according to claim 10, wherein said determining a second position of the target operational object in a tool coordinate system of the robot (1) based on the local image comprises:
determining the position of the target operation object on the X axis and the Y axis of the tool coordinate system based on the local image;
and acquiring the position of the target operation object measured by a distance measuring device (4) on the robot (1) on the Z axis of the tool coordinate system.
12. The method according to claim 10 or 11, wherein determining a second photographing position based on a first position of the target operation object in a tool coordinate system of the robot (1) comprises:
acquiring size information of the target operation object;
determining the second photographing position based on the size information of the target operation object, the parameter information of the camera (2), and the first position.
13. The method according to any of the claims 10-12, wherein said determining a second position of the target operation object in the tool coordinate system of the robot (1) based on the local image comprises:
determining the relative pixel positions of the central point of the target operation object and the central point of the image in the local image;
and determining a second position of the target operation object in the tool coordinate system based on the relative pixel positions of the central point of the target operation object and the central point of the image and the mapping proportion of the physical size and the pixel size corresponding to the local image.
14. The method of claim 13, wherein the global image further comprises the target operation object, and wherein before determining the second position of the target operation object in the tool coordinate system based on the relative pixel positions of the center point of the target operation object and the image center point and the mapping ratio of the corresponding physical size and pixel size of the local image, the method further comprises:
determining a mapping proportion of a corresponding physical size and a pixel size of the global image based on the pixel size of the reference mark (32) in the global image and the physical size of the reference mark (32);
determining the physical size of the target operation object based on the pixel size of the target operation object in the global image and the mapping proportion of the physical size and the pixel size corresponding to the global image;
and determining the mapping proportion of the physical size and the pixel size corresponding to the local image based on the physical size of the target operation object and the pixel size of the target operation object in the local image.
15. The method according to any one of claims 1-14, wherein the global image further comprises the target operational object, and after the obtaining of the positions of the target operational object and the reference marker (32) in the user coordinate system of the operational subject (3), the method further comprises:
determining the target operation object in the global image based on the positions of the target operation object and the reference mark (32) in a user coordinate system of the operation body (3) and the mapping proportion of the corresponding physical size and pixel size of the global image;
if it is recognized that the target operation object can be operated by the robot (1), the process of determining the first position of the target operation object in the tool coordinate system of the robot (1) is performed.
16. A robot, characterized in that the robot comprises a processor and a memory;
the memory stores one or more programs configured to be executed by the processor for implementing the method of any of claims 1-15.
17. A robot system, characterized in that the robot system comprises a robot (1), a camera (2) and a distance measuring device (4), the robot (1) being a robot according to claim 16;
the camera (2) and the distance measuring device (4) are fixed on a mechanical arm of the robot (1).
18. An automation system, characterized in that it comprises an operational agent (3) and a robot system according to claim 17;
at least one operation object (31) and a reference mark (32) are distributed on the same side of the operation main body (3).
19. The automation system according to claim 18, characterized in that information flags are further distributed on the same side of the operation body (3), in which one or more of an identifier of the operation body (3) and size information of the operation object (31) are stored.
CN202011379352.4A 2020-11-30 2020-11-30 Method for determining the position of an operating object, robot and automation system Pending CN112529856A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011379352.4A CN112529856A (en) 2020-11-30 2020-11-30 Method for determining the position of an operating object, robot and automation system

Publications (1)

Publication Number Publication Date
CN112529856A true CN112529856A (en) 2021-03-19

Family

ID=74995505

Country Status (1)

Country Link
CN (1) CN112529856A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991742A (en) * 2021-04-21 2021-06-18 四川见山科技有限责任公司 Visual simulation method and system for real-time traffic data
CN115127452A (en) * 2022-09-02 2022-09-30 苏州鼎纳自动化技术有限公司 Notebook computer shell size detection method, system and storage medium
CN115127452B (en) * 2022-09-02 2022-12-09 苏州鼎纳自动化技术有限公司 Notebook computer shell size detection method, system and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination