CN113858214B - Positioning method and control system for robot operation - Google Patents


Info

Publication number
CN113858214B
CN113858214B (application CN202111329759.0A; published as CN113858214A)
Authority
CN
China
Prior art keywords
robot
camera
coordinate system
image
pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111329759.0A
Other languages
Chinese (zh)
Other versions
CN113858214A (en)
Inventor
李明洋
许雄
邵威
戚祯祥
王家鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jieka Robot Co ltd
Original Assignee
Jieka Robot Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jieka Robot Co ltd filed Critical Jieka Robot Co ltd
Priority to CN202111329759.0A priority Critical patent/CN113858214B/en
Publication of CN113858214A publication Critical patent/CN113858214A/en
Application granted granted Critical
Publication of CN113858214B publication Critical patent/CN113858214B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/16: Programme controls
    • B25J 13/00: Controls for manipulators
    • B25J 19/00: Accessories fitted to manipulators, e.g. for monitoring, for viewing; safety devices combined with or specially adapted for use in connection with manipulators
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a positioning method and a control system for robot operation. In one embodiment, the method comprises the following steps: controlling a camera at the working end of the robot to photograph an image identifier set at a fixed position, obtaining a photograph containing the identifier; analyzing the photograph to obtain the pose of the image identifier in the robot base coordinate system; constructing a user coordinate system from that pose; and positioning the robot operation according to the user coordinate system. When an operator controls the robot to move, the user coordinate system constructed from the image identifier can be used to position and control the motion, reducing the operation control difficulty of robots that use a camera for accurate positioning.

Description

Positioning method and control system for robot operation
Technical Field
The invention relates to the field of robot control, in particular to a positioning method and a control system for robot operation.
Background
Robots comprising a robotic arm and a movable chassis are often used for machine-tool operations, such as material handling and grasping a wide range of objects.
During robot operation, the object to be grasped must be positioned; the common approach is to use a camera fixed on the robot for auxiliary positioning. Camera-assisted positioning is applied in the robot's base coordinate system, and that coordinate system moves with the robot. Positioning the work site in it therefore requires the operator to perform multiple coordinate-system conversions during actual operation; these conversions are unintuitive and difficult, and hinder the operator.
For the operator, controlling the robot by converting between multiple coordinate systems is time-consuming, labor-intensive, and error-prone; reflected across the whole operation flow, it drags down work progress and reduces overall efficiency.
Disclosure of Invention
An object of the present invention is to provide a positioning method and a control system for robot operation, which reduce the operation control difficulty of accurately positioning a robot with the assistance of a camera.
In a first aspect, an embodiment of the present invention provides a positioning method for a robot job, the method comprising: controlling a camera at the working end of the robot to photograph an image identifier set at a fixed position, obtaining a photograph containing the identifier; analyzing the photograph to obtain the pose of the image identifier in a robot base coordinate system; constructing a user coordinate system from that pose; and positioning the robot job according to the user coordinate system.
In this scheme, the camera is controlled to photograph the image identifier; analyzing the photograph yields the pose of the identifier in the robot base coordinate system, and the robot's user coordinate system is then placed at the identifier. Because the user coordinate system is constructed from the image identifier, an operator can use it to position and control the robot's movements when controlling the robot's operation.
The user coordinate system is understood here as a coordinate system built on the work surface the robot actually operates on. The image identifier used to construct the system has definite coordinates in it; so does the target object the robot is to grasp or operate on, which lies on the same surface, and between the two there is only a simple horizontal and vertical translation. Once the user coordinate system is established from the image identifier, both the identifier and the target workpiece are fixed, intuitive references in one and the same coordinate system for the operator. Commanding the robot to grasp the target workpiece therefore involves no coordinate-system conversion on the operator's part, which greatly reduces the difficulty of controlling the robot's operation.
Further, before the camera at the robot's working end photographs the image identifier, the method further comprises: controlling the robot to move the camera to a preset position located a fixed distance above the identifier. Although a camera's focal length is generally adjustable over some range, it can be fixed at a set value in use, which gives a fixed object distance at which imaging is sharp. A preset position is therefore chosen at which the camera-to-identifier distance equals this object distance; the robot moves the camera directly to the preset position and photographs there, ensuring the photograph is sharp and usable for subsequent analysis.
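As a concrete illustration of the preset-position idea, the sketch below (Python with NumPy; the helper name and the base-frame convention are assumptions for illustration, not part of the patent) places the camera directly above the identifier at the camera's fixed in-focus object distance:

```python
import numpy as np

def preset_camera_position(marker_xyz, object_distance):
    """Hypothetical helper: put the camera's optical axis over the
    marker at the fixed object distance that keeps the image in focus.
    Assumes the base frame's Z axis points up from the table."""
    x, y, z = marker_xyz
    return np.array([x, y, z + object_distance])

# Marker on the table at (0.50, 0.20, 0.00) m, in-focus distance 0.30 m
pos = preset_camera_position((0.50, 0.20, 0.00), 0.30)
```

If the table were tilted, the approach offset would follow the marker's surface normal rather than the base Z axis.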
Further, in the method, the image identifier includes a non-rotationally symmetric mark, and the fixed position includes the surface of the table on which the robot works. A specifically designed non-rotationally symmetric pattern is convenient to analyze, reducing analysis difficulty; placing the identifier on the table surface lets the robot's operator use it as a reference for positioning and for controlling the robot to complete the related operations.
Further, after analyzing the photograph, the method further comprises: judging whether the camera is at the preset position; if so, acquiring and recording the robot's current joint pose as the photographing pose. Analyzing the photograph of the identifier reveals whether the relative position between camera and identifier satisfies the preset condition; if it does, the camera is at the preset position and the joint pose is recorded as the photographing pose. When the robot later needs to photograph the identifier again, it can be moved straight to the recorded joint pose, simplifying the photographing control step.
Further, in the method, analyzing the photograph includes: judging whether the photograph contains the image identifier, yielding a first result; judging whether the non-rotationally symmetric mark of the identifier is deformed in the photograph, yielding a second result; and analyzing the pixels corresponding to the identifier, yielding a third result. Judging whether the camera is at the preset position then includes: from the first result, judging whether the camera captured the identifier; from the second result, judging whether the camera's front face is parallel to the identifier; and from the third result, judging whether the distance between the camera's front face and the identifier equals the preset value. If all three judgments hold, the camera is judged to be at the preset position. Analyzing the photograph thus determines whether the camera is at the preset position, i.e. whether the relative position between camera and identifier satisfies the preset condition.
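The three checks can be sketched on the 2-D pixel corners of a square mark. This is a minimal illustration under stated assumptions, not the patent's actual algorithm: the corner input, tolerance, and expected side length are all invented for the example, and a real pipeline would obtain the corners from a fiducial detector.

```python
import numpy as np

def analyze_marker_corners(corners, expected_side_px, tol=0.05):
    """Return (found, parallel, at_distance) for a square mark.

    corners: four (x, y) pixel corners in order around the square,
    or None if no mark was detected in the photograph."""
    if corners is None:                      # check 1: mark in the photo?
        return False, False, False
    c = np.asarray(corners, dtype=float)
    # side lengths between consecutive corners, wrapping around
    sides = np.linalg.norm(np.roll(c, -1, axis=0) - c, axis=1)
    # check 2: a square images as a square only when the camera's front
    # face is parallel to the mark's plane; unequal sides mean deformation
    parallel = bool(np.ptp(sides) / sides.mean() < tol)
    # check 3: the imaged side length encodes the camera-to-mark distance
    at_distance = bool(abs(sides.mean() - expected_side_px) / expected_side_px < tol)
    return True, parallel, at_distance

square = [(100, 100), (200, 100), (200, 200), (100, 200)]
print(analyze_marker_corners(square, expected_side_px=100))  # (True, True, True)
```

A skewed quadrilateral fails the parallelism check, and a square that images too small or too large fails the distance check.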
Further, in the method, obtaining the pose of the image identifier in the robot base coordinate system includes: acquiring the pose of the camera in the robot base coordinate system and the pose of the identifier in the camera coordinate system, and composing the two to obtain the identifier's pose in the robot base coordinate system. The identifier's pose in the camera coordinate system comes from analyzing the photograph; the camera's pose in the robot base coordinate system is a known quantity. Composing them yields the identifier's pose in the robot base coordinate system, at which the user coordinate system is then placed.
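The composition described above is a chain of homogeneous transforms. A minimal NumPy sketch (matrix names are illustrative; the fixed hand-eye offset of the camera on the flange is the "known quantity", and the marker-in-camera pose comes from photo analysis):

```python
import numpy as np

def pose_in_base(T_base_flange, T_flange_cam, T_cam_marker):
    """Marker pose in the base frame = (flange pose in base) o
    (fixed camera offset on flange) o (marker pose seen by camera).
    All inputs and the result are 4x4 homogeneous matrices."""
    T_base_cam = T_base_flange @ T_flange_cam   # camera pose in base (known)
    return T_base_cam @ T_cam_marker            # marker pose in base

def translation(x, y, z):
    """Pure-translation 4x4 transform, to keep the example readable."""
    T = np.eye(4)
    T[:3, 3] = (x, y, z)
    return T

T = pose_in_base(translation(0.4, 0.0, 0.5),   # flange in base
                 translation(0.0, 0.1, 0.0),   # camera offset on flange
                 translation(0.0, 0.0, 0.3))   # marker in camera frame
```

With pure translations the chain simply sums the offsets; with rotations included, the same matrix products handle orientation as well.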
Further, in the method, positioning the robot job according to the user coordinate system includes: teaching work points to the robot in the user coordinate system; storing the work points; and controlling the robot to move to a work point to perform the job. Because point teaching is done in the user coordinate system set from the image identifier, workers can later control the robot's job directly through the stored work points, reducing operation difficulty and improving work efficiency.
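Executing a taught point involves one conversion that stays hidden from the operator: the point taught in the user frame is mapped into the base frame the controller actually drives. A minimal sketch, assuming the user frame's pose in the base frame is known from the construction step (function and variable names are illustrative):

```python
import numpy as np

def user_point_to_base(T_base_user, p_user):
    """Map a taught work point from the user frame (anchored at the
    image identifier) into the robot base frame. The operator only
    ever sees p_user; this conversion happens inside the system."""
    p = np.append(np.asarray(p_user, dtype=float), 1.0)  # homogeneous point
    return (T_base_user @ p)[:3]

T_base_user = np.eye(4)
T_base_user[:3, 3] = (0.5, 0.2, 0.0)   # user-frame origin at the identifier
p_base = user_point_to_base(T_base_user, (0.1, 0.0, 0.05))
```

The taught point (0.1, 0.0, 0.05) in the user frame becomes a base-frame target the controller can move to directly.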
In a second aspect, an embodiment of the present invention provides an external control system for robot job positioning, comprising an image processing module and a communication module. The image processing module acquires a photograph containing the image identifier, analyzes it, and judges whether the camera is at the preset position; if so, it acquires and records the robot's current joint pose as the photographing pose.
In a preferred embodiment, analyzing the photograph comprises: judging whether the photograph contains the image identifier, yielding a first result; judging whether the identifier's non-rotationally symmetric mark is deformed in the photograph, yielding a second result; and analyzing the pixels corresponding to the identifier, yielding a third result. Judging whether the camera is at the preset position comprises: from the first result, judging whether the camera captured the identifier; from the second result, judging whether the camera's front face is parallel to the identifier; and from the third result, judging whether the distance between the camera's front face and the identifier equals the preset value.
The external control system further comprises a communication module for communicating with the robot control system. The communication module sends the robot a first control instruction, which moves the robot's joints to the photographing pose, and a second control instruction, which directs the robot to set the user coordinate system according to the image identifier.
The external control system is responsible for processing the photograph containing the image identifier, judging whether the camera is at the preset position, and recording the photographing pose; through its communication module it can command the robot's own control system to move to the photographing pose and to set the user coordinate system according to the image identifier. Controlling the robot externally in this way, in cooperation with the robot, completes the work of setting the user coordinate system at the identifier, simplifying the staff's work and reducing the difficulty of photographing the identifier and setting the coordinate system there. Because the original robot control system need not be modified, the scheme can be widely applied to various robots.
Further, in a preferred embodiment, the external control system further comprises a coordinate system construction module for constructing the user coordinate system from the pose of the image identifier in the robot base coordinate system. The module first acquires the camera's pose in the robot base coordinate system and the identifier's pose in the camera coordinate system, composes them to obtain the identifier's pose in the robot base coordinate system, and finally constructs the user coordinate system from that pose.
Further, in a preferred embodiment, the external control system further comprises a positioning unit that positions the robot job according to the user coordinate system by: teaching work points to the robot in the user coordinate system; and sending a work point to the robot control system so that the robot moves to it and performs the job.
Further, in a preferred embodiment, the external control system further comprises a storage module for storing the work points and the photographing pose, so that the positioning unit can later retrieve the point and pose data to control the robot's movements.
All modules of the external control system are connected by data lines and can communicate data with one another.
In a third aspect, an embodiment of the present invention provides a robot control system comprising an external communication module, an instruction execution module, and a storage module. The external communication module communicates with the external control system and receives its instructions, including the first and second control instructions and a third control instruction that moves the robot to a work point.
The instruction execution module executes the control instructions received by the external communication module: the first control instruction moves the robot to the photographing pose, the second sets the user coordinate system, and the third moves the robot to a work point.
The storage module stores the work points obtained by point teaching in the user coordinate system as well as the work-point data received by the external communication module. With a communication module that receives instructions from the external control system, an execution module that carries them out, and a storage module that keeps the taught work points, the robot control system can apply the method of the invention during actual robot operation.
Further, in a preferred embodiment, the robot control system further comprises a coordinate system setting module for setting the user coordinate system at the image identifier. Once the identifier's pose in the robot base coordinate system is obtained, the module sets the user coordinate system according to that pose.
Further, in a preferred embodiment, the robot control system further comprises a pose acquisition module for acquiring the poses of the camera, each robot joint, and the tool in the robot base coordinate system; each pose can be sent to the external control system through the external communication module.
All modules of the robot control system are connected by data lines and can communicate data with one another.
In a fourth aspect, an embodiment of the present invention provides a positioning module for robot positioning, comprising a camera, an external control system as described in the second aspect, a first control button, a second control button, and an indicator light; the positioning module is mounted at the robot's working end. The camera photographs the image identifier and obtains a photograph containing it. The first control button, when operated, triggers the camera to shoot and issues the user-coordinate-system setting instruction; the second control button, when operated, records the robot's photographing pose; the indicator light shows whether the user coordinate system was successfully set from the image identifier. The positioning module provides the interaction between the robot system and the user: with simple button operations a worker can set the user coordinate system at the identifier, keeping operation simple and improving work efficiency.
In a fifth aspect, an embodiment of the present invention provides an electronic device, which may be a robot or a server, the device including a processor and a memory, the memory storing computer readable instructions which, when executed by the processor, perform the steps of the method as provided in the first aspect above.
In a sixth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which when executed by a processor performs steps in the method as provided in the first aspect above.
In the embodiments of the invention, the camera at the robot's working end photographs the image identifier; the resulting photograph is analyzed to obtain the identifier's pose in the robot base coordinate system, from which the user coordinate system is constructed. The operator can then position the robot job intuitively using the image identifier and the user coordinate system placed at it. Moreover, the embodiments are arranged so that a worker controlling the robot can successfully set the user coordinate system at the identifier with simple operations, reducing operating difficulty and improving the operator's work efficiency.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the embodiments of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
To illustrate the technical solutions of the embodiments more clearly, the drawings needed for the embodiments are briefly described below. The following drawings show only some embodiments of the invention and should not be considered limiting of its scope; a person skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a schematic diagram of a robot operation scene provided in an embodiment of the present invention;
fig. 2 is a first flowchart of a positioning method for robot operation according to an embodiment of the present invention;
FIG. 3a is a schematic diagram of a first image identifier according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of a second image identifier according to an embodiment of the present invention;
FIG. 3c is a schematic diagram of a third image identifier according to an embodiment of the present invention;
fig. 4 is a second flowchart of a positioning method for robot operation according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an external control system for positioning a robot operation according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a robot control system according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a positioning module for positioning a robot according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device for performing a positioning method for a robot job according to an embodiment of the present application.
Icon: 10-an external control system, 12-an image processing module, 14-a communication module;
20-a robot control system, 21-an external communication module, 23-an instruction execution module and 25-a storage module;
30-positioning module, 31-indicator lamp, 33-camera, 35-first control button, 37-second control button;
300-application scenes, 301-robot bases, 303-mechanical arms, 305-end effectors, 311-cameras, 313-camera front end faces, 315-a certain distance, 321-a workbench and 323-target workpieces;
400-image identifier, 401-image mark;
410-image identifier, 411-image mark;
420-image identifier, 421-image mark;
500-electronic device, 501-processor, 502-communication interface, 503-memory, 504-communication bus.
Detailed Description
The embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings; the embodiments described are only some, not all, embodiments of the invention. The components of the embodiments, as generally described and illustrated in the figures, may be arranged and designed in a wide variety of configurations. The following detailed description is therefore not intended to limit the claimed scope but merely represents selected embodiments. All other embodiments obtained by those skilled in the art on this basis without inventive effort fall within the scope of protection.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
It should be noted that embodiments or technical features of embodiments in the present application may be combined without conflict.
In the related art, for a composite robot that uses a camera for accurate positioning, the operator must convert between the camera coordinate system, the robot base coordinate system, the robot's working coordinate system, and other coordinate systems to issue the corresponding controls during actual operation, which is very time-consuming and error-prone. It also sets a high bar for robot operators, who must understand mechanical operation, program editing and setup, and vision applications; such personnel are scarce, so labor costs are high.
As described above, the related art suffers from difficult coordinate-system conversion, time-consuming operation, and frequent errors during robot control. To solve this problem, the invention provides a positioning method and control system for robot operation: by setting image identifiers, the setup of the robot's working coordinate system, i.e. the user coordinate system, is turned into a fixed procedure, so that a robot operator needs only the most basic mechanical knowledge to complete the vision-debugging work of setting the user coordinate system.
In some application scenarios, the positioning method may run on a processor, server, or host computer outside the robot, or directly on a processor inside the robot. An application installed on that processor/server/host controls the machine and requests it to realize the corresponding functions according to the user's instructions. The invention is exemplified by application to an external server.
The drawbacks of the above related art are results obtained by the inventor through practice and careful study; therefore, both the discovery of the above problems and the solutions proposed below for them should be regarded as the inventor's contribution to the present invention.
Please refer to fig. 1, which illustrates a scenario in which the method of the invention is applied to a robot operation. As shown in fig. 1, the application scenario 300 includes: robot base 301, robotic arm 303, end effector 305, camera 311, table 321, target workpiece 323, and image identifier 400. The camera 311 is arranged beside the robot's working end, i.e. the end effector 305 of the robotic arm 303; its position relative to the end effector 305 may be fixed by a bracket or by other means (not shown), so that once the camera 311 is mounted at the end of the robot, the positions of the camera front face 313 and the optical-axis center (not shown) relative to the end effector 305 are known fixed values. In a preferred embodiment, the camera may carry its own supplementary light source to keep captured images sharp. The image identifier 400 is set at a fixed position, and a graphic or pattern for recognition is provided on it.
In a first aspect, an embodiment of the present invention provides a positioning method for robotic work.
Fig. 2 is a first flowchart of a positioning method for robot operation according to an embodiment of the present invention, as shown in fig. 2, the method includes steps S101 to S107.
Step S101: and controlling a camera at the working end of the robot to shoot the image mark, and obtaining a photo comprising the image mark, wherein the image mark is arranged at a fixed position.
Before the camera at the working end of the robot shoots the image identifier, the camera needs to be installed at the working end of the robot, and an image identifier needs to be set at a fixed position. After receiving a control instruction, step S101 is executed to control the camera mounted on the working end of the robot to shoot the image identifier and obtain a photograph including the image identifier. The control instruction may be triggered by a button, or sent by an operator by clicking a mouse or the like.
Step S103: and analyzing the photo to obtain the pose of the image mark in a robot base coordinate system.
After obtaining a photo comprising the image identifier, the external server analyzes the photo to obtain the pose of the image identifier in the robot base coordinate system. The pose represents a position and an orientation; any rigid body can accurately and uniquely represent its state by a position and an orientation in a spatial coordinate system (OXYZ). Pose description and coordinate transformation are the basis for the kinematic and dynamic analysis of an industrial robot. Theoretically, a six-axis robot has 6 degrees of freedom, i.e., X, Y, Z, Roll (roll angle), Pitch (pitch angle), and Yaw (yaw angle); the first three represent the position, i.e. the coordinates in the three spatial dimensions, while the last three represent the orientation, i.e. the rotation angles about the X, Y, and Z axes in the current pose. Therefore, after step S103 is performed, the position and orientation of the image identifier in the robot base coordinate system are obtained, and with this pose, the image identifier can be accurately located in the robot base coordinate system.
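By way of illustration only, the six-value pose described above can be expressed as a 4x4 homogeneous transform. The sketch below is a minimal Python example (not part of the claimed embodiment) and assumes the common Z-Y-X (yaw-pitch-roll) rotation order, which may differ from a given robot controller's actual convention:

```python
import math

def pose_to_matrix(x, y, z, roll, pitch, yaw):
    # Homogeneous transform for a 6-DOF pose; rotation applied in
    # Z (yaw) * Y (pitch) * X (roll) order -- an assumed convention.
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr, x],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr, y],
        [-sp,     cp * sr,                cp * cr,                z],
        [0.0,     0.0,                    0.0,                    1.0],
    ]
```

With all three angles zero, the matrix reduces to a pure translation, which is a quick sanity check on the convention.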
Step S105: and constructing a user coordinate system according to the pose of the image mark in the robot base coordinate system.
After obtaining the pose of the image identifier in the robot base coordinate system, the external server can continue to execute step S105, in which it constructs a user coordinate system according to that pose. The user coordinate system is the working coordinate system of the robot, that is, the coordinate system used by the operator when actually operating the robot to perform work, as described above. Because the user coordinate system is constructed based on the pose of the image identifier in the robot base coordinate system, the robot can convert point positions under the user coordinate system into point positions under the base coordinate system.
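As a hypothetical sketch of this point conversion (names and numbers are illustrative, not from the embodiment): once the transform from the user coordinate system to the base coordinate system is known, each point converts by a single matrix-vector product.

```python
def user_point_to_base(T_base_user, p_user):
    # Apply a 4x4 homogeneous transform to a point expressed in the
    # user coordinate system, yielding base-frame coordinates.
    x, y, z = p_user
    col = (x, y, z, 1.0)
    return tuple(sum(T_base_user[i][j] * col[j] for j in range(4))
                 for i in range(3))

# Hypothetical user frame: displaced 0.5 m along base X, 0.2 m along base Z.
T_base_user = [[1.0, 0.0, 0.0, 0.5],
               [0.0, 1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0, 0.2],
               [0.0, 0.0, 0.0, 1.0]]
p_base = user_point_to_base(T_base_user, (0.1, 0.0, 0.0))
```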
Step S107: and positioning the robot operation according to the user coordinate system.
After the external server successfully sets the user coordinate system at the image identifier, it can execute step S107 and position the robot job according to the user coordinate system. The user coordinate system is arranged at the image identifier, and since the position of the image identifier is fixed, the robot can obtain each position under the user coordinate system and locate it accurately. The user coordinate system can be understood here as a coordinate system built on the operating interface actually required by the robot. The image identifier used to construct the user coordinate system has definite coordinates in that system; likewise, the target object to be acquired or operated on by the robot is on the operating interface and also has definite coordinates in the user coordinate system, and the two sets of coordinates are related by only a simple translation in the horizontal and vertical directions. Therefore, after the user coordinate system is built through the image identifier, both the image identifier and the target workpiece are fixed and intuitive for the operator within the same coordinate system, so that controlling the robot to acquire the target workpiece involves no coordinate system conversion. This design greatly reduces the difficulty for the operator of controlling the robot's operation.
The positioning method for robot work provided in the first aspect of the present application as described above may be exemplarily performed by the external server mentioned in the foregoing, but may also be performed by other execution bodies or electronic devices in the art, such as a processor, a personal computer, etc. It will be appreciated by persons skilled in the art that the above description of the related embodiments using an external server is intended to clearly illustrate the related embodiments, not to be limiting.
Compared with the traditional method of positioning directly in the robot base coordinate system, the method of positioning with a user coordinate system arranged at the image identifier overcomes the problem that the robot base coordinate system is itself in motion during operation, which makes quick and accurate positioning difficult. Because a user coordinate system with a fixed position at the image identifier is used for positioning during robot operation, the image identifier can serve as a movable reference for operators. The positioning method is simple and intuitive, reduces the difficulty of operating the robot, and improves the robot's working efficiency.
With continued reference to fig. 2, in some alternative implementations, before step S101, step S100 is further included: and controlling the robot to move the camera to a preset position.
The preset position is located at a certain distance above the image identifier. Referring to fig. 1, the front end face 313 of the camera is spaced from the image identifier 400 by a distance 315, that is, the vertical distance from the image identifier at the preset position. When moving the camera 311 so that the front end face 313 is at distance 315 above the image identifier 400, a measuring tool such as a plumb line of fixed length or a tape measure, or another distance-measuring device, may optionally be used to assist. The camera is provided with an adjustable optical lens; the focal length of the lens, and thus the view angle of the camera, can be adjusted, and once the view angle is determined, the observed range is directly proportional to the object distance. For example, when the clear-imaging object distance at a certain focal length includes 40 cm, the distance value is set to 40 cm; at this distance, i.e., when the camera images clearly, the narrow side of the camera's field of view is about 10 cm long. If the positioning precision of the AGV is ±2 cm and the side length of the image identifier is known to be 2 cm, an allowance of 3 cm is reserved; this guarantees that after the robot base moves into place the camera can certainly capture the image identifier, and the field of view can be configured for the highest positioning precision.
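The allowance arithmetic in the example above can be checked with a short calculation. This is only a sketch; the 3 cm figure in the text is read here as the AGV tolerance plus half the marker's side length, which is one plausible interpretation:

```python
def worst_case_offset_cm(marker_side_cm, agv_tol_cm):
    # Farthest the marker's edge can sit from the optical axis:
    # AGV positioning error plus half the marker's side length.
    return agv_tol_cm + marker_side_cm / 2.0

def marker_in_view(fov_narrow_cm, marker_side_cm, agv_tol_cm):
    # The marker stays fully inside the view if the worst-case edge
    # offset does not exceed half of the narrow field-of-view side.
    return worst_case_offset_cm(marker_side_cm, agv_tol_cm) <= fov_narrow_cm / 2.0

# Values from the example: 10 cm narrow side, 2 cm marker, +/-2 cm AGV error.
offset = worst_case_offset_cm(2.0, 2.0)   # the 3 cm allowance
ok = marker_in_view(10.0, 2.0, 2.0)
```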
Referring to fig. 1 and 3a, 3b, 3c, in some alternative implementations, the image identification includes a non-rotationally symmetrical logo, and the fixed location includes a table surface on which the robot needs to work.
In some application scenarios, an image identifier bearing a non-rotationally symmetric mark may facilitate image recognition. After the image identifier is set, the size of the image identifier and the specific size of the corresponding mark on it can be determined. Three image identifiers are shown in figs. 3a, 3b and 3c. The image identifier 400 shown in fig. 3a includes a mark 401, which is a right angle formed by two mutually perpendicular line segments. The image identifier 410 shown in fig. 3b includes a mark 411, which is a cross formed by two mutually perpendicular line segments of different lengths, giving four right angles. The image identifier 420 shown in fig. 3c includes a mark 421, a right angle formed by two mutually perpendicular line segments, each of which carries a scale. These three image identifiers are only examples; alternatively, the marks may be other patterns or shapes, such as several parallel lines, boxes, and the like. Referring to fig. 1, the image identifier 400 may be fixed to the surface of the table 321; only one fixed position of the image identifier 400 is shown in fig. 1, and alternatively the fixed position may be on a wall surface or elsewhere. The image identifier 400 may be a metal sheet that is easy to machine and can be fixed to various surfaces by screws or the like, or it may be applied to various surfaces by a laser marker, a sticker, or the like.
In some alternative implementations, after analyzing the photograph, the method further includes: judging whether the camera is at a preset position or not; if the camera is at the preset position, acquiring and recording the current joint pose of the robot as a photographing pose.
Fig. 4 is a second flowchart of a positioning method for robot operation according to an embodiment of the present invention, as shown in fig. 4, the method includes steps S201 to S207:
step S201: and controlling a camera at the working end of the robot to shoot the image mark, and obtaining a photo comprising the image mark, wherein the image mark is arranged at a fixed position.
Step S202: and analyzing the photo, and judging whether the camera is at a preset position or not.
If the camera is not at the preset position, returning to step S201; if the camera is at the preset position, the following steps S203 to S207 are continued.
Step S203: and acquiring and recording the current joint pose of the robot as a photographing pose.
Step S204: and obtaining the pose of the image mark in a robot base coordinate system.
Step S205: and constructing a user coordinate system according to the pose of the image mark in the robot base coordinate system.
Step S207: and positioning the robot operation according to the user coordinate system.
In some application scenarios, the external server may determine whether the camera is at the preset position according to the analysis result of the photo. If it is not, the external server controls the camera to photograph the image identifier again and repeats steps S201 and S202 until the camera is determined to be at the preset position. If the camera is at the preset position, step S203 is executed to obtain and record the current joint pose of the robot as the photographing pose. The recorded joint pose may include only the pose of the robot's terminal joint, i.e. the joint carrying the camera: because the camera is fixed relative to that joint, when the joint is in the photographing pose the camera is exactly at the preset distance above the image identifier and can capture it clearly. The other joint poses of the robot can then be adjusted automatically as the camera joint moves into the photographing pose, with no specific requirements, as long as the camera reaches the preset position. Alternatively, the joint pose may include the poses of all joints of the robot, i.e. the entire current state of the robot is acquired and recorded, because in this state the robot can be certain that the camera is controlled to the preset position for shooting.
In some application scenarios, after the photographing pose is recorded, the robot can be directly moved to the photographing pose according to the recorded joint pose, that is, the camera is moved to the preset position. In that case there is no situation where the camera cannot photograph, or cannot clearly photograph, the image identifier, the robot's controller does not need to readjust each joint pose of the robot for re-shooting, and the judgment of step S202 and the re-recording of the joint pose in step S203 can be omitted. Of course, if steps S202 and S203 are retained, the judgment in step S202 serves as a confirmation that the camera is at the preset position, so that subsequent steps are not affected by the camera being out of position due to misoperation or other causes, while step S203 serves as a correction: if the quality of the newly taken photo is higher, the current joint pose is re-recorded as the photographing pose.
In some alternative implementations, wherein the analyzing the photograph includes: judging whether the photo comprises the image identifier or not to obtain a first judging result; judging whether the non-rotationally symmetrical mark of the image mark in the photo is deformed or not to obtain a second judging result; analyzing the corresponding pixel points of the image marks in the photo to obtain a third judging result;
The determining whether the camera is at a preset position includes: judging whether the camera has photographed the image identifier according to the first judgment result; judging whether the front end face of the camera is parallel to the image identifier according to the second judgment result; and judging whether the distance between the front end face of the camera and the image identifier equals a preset value according to the third judgment result. If, according to these three results, the camera has photographed the image identifier, the front end face of the camera is parallel to the image identifier, and the distance between them equals the preset value, the camera is judged to be at the preset position.
In some application scenarios, when analyzing a photo the external server needs to judge whether the photo includes the image identifier, whether the non-rotationally symmetric mark of the image identifier is deformed in the photo, and how many pixels the image identifier occupies in the photo, obtaining the first, second, and third judgment results respectively. From the first judgment result, the external server can determine whether the camera captured the image identifier and thus roughly judge the camera's position. From the second judgment result, it can determine whether the front end face of the camera is parallel to the image identifier: by the imaging principle, the photographed object appears undeformed only when the imaging plane of the camera is parallel to it. For example, if the mark of the image identifier includes mutually perpendicular lines and these lines still intersect perpendicularly in the photo, the front end face of the camera and the image identifier can be judged to be parallel. Optionally, if the image identifier does not include a right angle, the analysis content is adjusted according to the graphic content of the image identifier, and other analysis methods are used to judge whether the front end face of the camera is parallel to the image identifier.
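The perpendicularity test described above can be sketched as follows. This is illustrative Python only; the 1° tolerance is an assumed value, and in practice the line directions would come from an edge or line detector applied to the photo:

```python
import math

def lines_perpendicular(v1, v2, tol_deg=1.0):
    # v1, v2: 2-D direction vectors of the two detected marker lines.
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(v1[0], v1[1])
    n2 = math.hypot(v2[0], v2[1])
    cos_a = max(-1.0, min(1.0, dot / (n1 * n2)))
    angle = math.degrees(math.acos(cos_a))
    # If the right angle survives projection, the camera face is
    # taken to be parallel to the marker plane.
    return abs(angle - 90.0) <= tol_deg
```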
From the third judgment result, it can be determined whether the distance between the front end face of the camera and the image identifier equals the preset value. By measuring the size of the pattern in the photo, i.e. the pixel size of the non-rotationally symmetric mark, and using known fixed values such as the camera's focal length and the distance between the imaging plane and the lens, the actual distance between the front end face of the camera and the image identifier can be computed and compared with the preset distance, thereby judging whether the camera is at the preset position relative to the image identifier. Whether the camera is at the preset position is judged against these three criteria; if the photo's analysis result satisfies all three, the camera is judged to be at the preset position.
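The distance check can be sketched with the pinhole camera model. The sketch below is illustrative only; the focal length in pixels would come from a prior camera calibration, and the 5 mm tolerance is an assumed value:

```python
def estimate_distance_mm(focal_px, marker_side_mm, marker_side_px):
    # Pinhole model: object distance = f * real_size / imaged_size.
    return focal_px * marker_side_mm / marker_side_px

def distance_ok(measured_mm, preset_mm, tol_mm=5.0):
    # Accept the camera position if the measured distance is within
    # tolerance of the preset value.
    return abs(measured_mm - preset_mm) <= tol_mm

# Hypothetical numbers: f = 1000 px, 20 mm marker imaged as 50 px.
d = estimate_distance_mm(1000.0, 20.0, 50.0)
```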
In some optional implementations, if the first judgment result shows that the photo does not include the image identifier, or the second judgment result shows that the non-rotationally symmetric mark of the image identifier is deformed in the photo, or the third judgment result obtained by analyzing the image identifier's pixels does not match the preset value, the remaining judgments are not continued and the process returns to step S201 to take a photograph again.
In some optional implementations, when the analysis of the first, second, and third judgment results shows that the camera is not at the preset position for photographing, the position of the camera is adjusted according to the analysis result so that it can photograph again from the preset position. If the image identifier was captured but its mark does not appear perpendicular in the photo, it suffices to adjust the front end face of the camera to be parallel to the image identifier and re-shoot. If the image identifier was captured and is parallel to the front end face of the camera, but the analysis shows that the distance between them is not the preset value, then, depending on whether the measured distance is larger or smaller than the preset value, the camera is moved the corresponding distance toward or away from the image identifier and re-shoots.
In some optional implementations, the obtaining the pose of the image identification in a robot base coordinate system includes: acquiring the pose of the camera in a robot base coordinate system and the pose of the image mark in the camera coordinate system; and obtaining the pose of the image mark in the robot base coordinate system according to the pose of the camera in the robot base coordinate system and the pose of the image mark in the camera coordinate system.
In some application scenarios, the external server obtains the pose of the image identifier in the robot base coordinate system by first obtaining the pose of the camera in the robot base coordinate system and the pose of the image identifier in the camera coordinate system, and then combining the two. This is a coordinate transformation problem: in this embodiment, the pose of the image identifier in the camera coordinate system is obtained by analyzing the photo, or from the preset position between the camera and the image identifier, and since the pose of the camera in the robot base coordinate system is a known value, the pose of the image identifier in the robot base coordinate system can be obtained by transformation.
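The chained transformation can be sketched as a product of homogeneous transforms, T(base→marker) = T(base→camera) · T(camera→marker). The pure-translation numbers below are hypothetical, chosen only to make the composition visible:

```python
def mat_mul(a, b):
    # Compose two 4x4 homogeneous transforms.
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Hypothetical poses: camera 0.4 m from the base origin along Z,
# marker 0.3 m from the camera along its optical axis.
T_base_cam = [[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.4],
              [0.0, 0.0, 0.0, 1.0]]
T_cam_marker = [[1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.3],
                [0.0, 0.0, 0.0, 1.0]]
T_base_marker = mat_mul(T_base_cam, T_cam_marker)
```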
In some optional implementations, the positioning the robot job according to the user coordinate system includes: performing point position teaching on the robot according to the user coordinate system to obtain a working point under the user coordinate system; storing the working site; and controlling the robot to move to the working site to perform work.
In some application scenarios, the external control system positions the robot task according to the user coordinate system by teaching the working points under the user coordinate system arranged at the image identifier and storing the obtained working points. During an actual task, the robot can then be moved to the taught points according to the stored working points, so the operator does not need to reposition, which improves working efficiency.
In a second aspect, an embodiment of the present invention provides an external control system for robotic work positioning.
Fig. 5 is a schematic structural diagram of an external control system for robot job positioning according to an embodiment of the present invention, and as shown in fig. 5, the external control system 10 includes an image processing module 12 and a communication module 14. The external control system may be a module, program segment, or code on the electronic device. It should be understood that, in the external control system 10 corresponding to the embodiment of the method of fig. 2 and fig. 4 described above, each step involved in the embodiment of the method of fig. 2 and fig. 4 can be performed, and specific functions of the external control system 10 may be referred to the above description, and detailed descriptions are omitted herein as appropriate to avoid redundancy.
Optionally, the image processing module 12 is configured to obtain a photo including an image identifier, analyze the photo, determine whether the camera is located at a preset position, and if the camera is located at the preset position, obtain and record a current joint pose of the robot as a photographing pose. Wherein the analyzing of the photograph comprises: judging whether the photo comprises the image identifier or not to obtain a first judging result; judging whether the non-rotationally symmetrical mark of the image mark in the photo is deformed or not to obtain a second judging result; and analyzing the corresponding pixel points of the image mark in the photo to obtain a third judging result. The judging whether the camera is positioned at the preset position comprises the following steps: judging whether the camera shoots the image identifier according to the first judging result; judging whether the front end face of the camera is parallel to the image mark or not according to the second judging result; and judging whether the distance between the front end face of the camera and the image identifier is a preset value according to the third judging result, and judging whether the camera is at a preset position according to the judging standard to obtain a final judging result.
The communication module 14 is configured to communicate with a robot control system, where the communication module 14 sends a first control instruction to the robot to control the robot to move to a photographing posture, and in addition, the communication module 14 is further configured to send a second control instruction to the robot to control the robot to set a user coordinate system, where the second control instruction controls the robot to set the user coordinate system according to the image identifier.
Optionally, the external control system 10 further includes a storage module (not shown in the figure), where the storage module is configured to store the shooting pose and the working point obtained by teaching.
Optionally, the external control system 10 further comprises a coordinate system construction module (not shown in the figure) for constructing a user coordinate system based on the pose of the image identifier in the robot base coordinate system. The method comprises the steps that a coordinate system construction module firstly obtains the pose of a camera in a robot base coordinate system and the pose of an image mark in the camera coordinate system, then obtains the pose of the image mark in the robot base coordinate system according to the pose of the camera in the robot base coordinate system and the pose of the image mark in the camera coordinate system, and finally completes construction of a user coordinate system according to the pose of the obtained image mark in the robot base coordinate system.
Optionally, the external control system 10 further includes a positioning unit (not shown in the figure), which positions the robot job according to a user coordinate system, including: performing point position teaching on the robot according to a user coordinate system to obtain a working point; the robot is controlled to move to the working site for operation by sending the working site to the robot control system.
All modules of the external control system are connected by data lines, so that data communication can be realized between them.
In a third aspect, an embodiment of the present invention provides a robot control system.
Fig. 6 is a schematic structural diagram of a robot control system according to an embodiment of the present invention. As shown in fig. 6, the robot control system 20 includes an external communication module 21, an instruction execution module 23, and a storage module 25. The robot control system 20 may be a module, program segment, or code on an electronic device, including a robot. It should be understood that the robot control system 20 corresponds in function to the above-mentioned external control system 10 and cooperates with it to perform the method embodiments of fig. 2 and fig. 4 and the steps involved therein; specific functions of the robot control system 20 may be referred to in the above description, and detailed descriptions are omitted herein to avoid repetition.
Optionally, the external communication module 21 is configured to communicate with the external control system 10 and receive its instructions, including the first control instruction, the second control instruction, the third control instruction carrying the working-site data, and the instruction controlling the robot to move to the working site.
The instruction execution module 23 is configured to execute control instructions, including the instructions received by the external communication module 21, namely the first, second, and third control instructions; the first control instruction controls the robot to move to the photographing pose, and the second control instruction controls the robot to set the user coordinate system.
The storage module 25 is configured to store the working site obtained by performing the point teaching on the robot according to the user coordinate system and the working site data received by the external communication module 21.
Optionally, the robot control system 20 further includes a coordinate system setting module (not shown in the figure) for setting a user coordinate system; specifically, the coordinate system setting module sets the user coordinate system at the image identifier. After the coordinate system setting module obtains the pose of the image identifier in the robot base coordinate system, the user coordinate system can be set according to that pose.
Optionally, the robot control system 20 further includes a pose acquisition module (not shown in the figure), where the pose acquisition module is configured to acquire poses of the camera, each joint of the robot, and the tool in a robot base coordinate system, where each pose may be sent to the external control system through the external communication module.
In a fourth aspect, an embodiment of the present invention provides a positioning module for robot positioning.
Fig. 7 is a schematic structural diagram of a positioning module for positioning a robot according to an embodiment of the present invention; as shown in fig. 7, the positioning module 30 includes an indicator lamp 31, a camera 33, an external control system 10, a first control button 35, and a second control button 37. The positioning module 30 is part of a robot control system for robot positioning. It should be understood that the positioning module 30 corresponds to the embodiment of the method of fig. 2 and fig. 4, and is capable of performing the steps involved in the embodiment of the method of fig. 2 and fig. 4, and specific functions of the positioning module 30 may be referred to the above description, and detailed descriptions thereof are omitted herein for avoiding repetition.
Optionally, the positioning module 30 is mounted to the working end of the robot. The camera 33 is configured to shoot the image identifier and obtain a photograph comprising the image identifier. The external control system 10 is the external control system shown in the corresponding embodiment of fig. 5. The first control button 35 is configured to control the camera to take a picture according to an operation on it and to trigger an instruction to set the user coordinate system; the second control button 37 is configured to record the photographing pose of the robot according to an operation on it; the indicator lamp 31 is configured to indicate whether the user coordinate system was successfully set according to the image identifier. Optionally, the indication includes: if the setting succeeds, the indicator lamp flashes green once; if the setting fails, the indicator lamp flashes red once.
In a fifth aspect, embodiments of the present application provide an electronic device. Referring to fig. 8, fig. 8 is a schematic structural diagram of an electronic device for performing a positioning method of a robot job according to an embodiment of the present application, where the electronic device 500 may include: at least one processor 501, such as a CPU, at least one communication interface 502, at least one memory 503, and at least one communication bus 504. Wherein the communication bus 504 is used to enable direct connection communication for these components. The communication interface 502 of the device in the embodiment of the present application is used for performing signaling or data communication with other node devices. The memory 503 may be a high-speed RAM memory or a nonvolatile memory (non-volatile memory), such as at least one magnetic disk memory. The memory 503 may also optionally be at least one storage device located remotely from the aforementioned processor. The memory 503 has stored therein computer readable instructions which, when executed by the processor 501, enable the electronic device to perform the method processes described above with reference to fig. 2 and 4.
It will be appreciated that the configuration shown in fig. 8 is merely illustrative, and that the electronic device may also include more or fewer components than shown in fig. 8, or have a different configuration than shown in fig. 8. The components shown in fig. 8 may be implemented in hardware, software, or a combination thereof.
In a sixth aspect, embodiments of the present application provide a computer-readable storage medium on which a computer program is stored which, when executed by a processor, performs the positioning method for a robot job provided in the first aspect of the present application, and/or the method procedures performed by the electronic device in the method embodiments shown in figs. 2 and 4.
In a seventh aspect, embodiments of the present application provide a computer program product comprising a computer program stored on a computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the method provided by the above-described method embodiments. For example, the method may comprise: controlling a camera at the working end of the robot to photograph an image mark and obtain a photo comprising the image mark, wherein the image mark is arranged at a fixed position; analyzing the photo to obtain the pose of the image mark in a robot base coordinate system; constructing a user coordinate system according to the pose of the image mark in the robot base coordinate system; and positioning the robot operation according to the user coordinate system.
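The steps above can be sketched as a short pipeline, with poses represented as 4x4 homogeneous matrices. This is a hedged illustration only: the callables `capture`, `detect_mark`, and `camera_pose_in_base` are hypothetical stand-ins for the camera driver and robot controller, not APIs named in the patent.

```python
import numpy as np

def locate_user_frame(capture, detect_mark, camera_pose_in_base):
    """Photograph the fixed image mark and return its pose in the robot
    base frame, which the method uses as the user coordinate system."""
    photo = capture()                    # step 1: shoot the image mark
    T_cam_mark = detect_mark(photo)      # step 2: 4x4 pose of mark in camera frame
    if T_cam_mark is None:
        raise RuntimeError("image mark not found in photo")
    T_base_cam = camera_pose_in_base()   # camera pose in the robot base frame
    return T_base_cam @ T_cam_mark      # step 3: user frame = mark pose in base frame

# Toy check with illustrative numbers: camera 0.5 m above the base origin,
# mark 0.2 m in front of the camera along its optical axis.
T_base_cam = np.eye(4); T_base_cam[2, 3] = 0.5
T_cam_mark = np.eye(4); T_cam_mark[2, 3] = 0.2
T_user = locate_user_frame(lambda: "photo",
                           lambda p: T_cam_mark,
                           lambda: T_base_cam)
```

With these toy inputs the user frame origin lands 0.7 m above the base origin, as expected from chaining the two translations.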
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the division of the units is merely a logical functional division, and there may be other divisions in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the couplings, direct couplings, or communication connections shown or discussed between components may be indirect couplings or communication connections through some communication interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Further, units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Furthermore, the functional modules in the various embodiments of the present invention may be integrated to form a single part, each module may exist alone, or two or more modules may be integrated into a single part.
It should be noted that, if implemented in the form of software functional modules and sold or used as a stand-alone product, the functions may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, or the part of it contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
In this document, relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is merely an example of the present invention and is not intended to limit its scope; various modifications and variations will be apparent to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (7)

1. A positioning method for robotic work, the method comprising:
controlling a camera at the working end of the robot to photograph an image mark and obtain a photo comprising the image mark, wherein the image mark is arranged at a fixed position; the image mark comprises a non-rotationally-symmetrical mark, and the fixed position comprises a work surface on which the robot needs to operate;
analyzing the photo to obtain the pose of the image mark in a robot base coordinate system; wherein said analyzing the photo comprises: judging whether the photo comprises the image identifier to obtain a first judgment result; judging whether the non-rotationally-symmetrical mark of the image mark in the photo is deformed to obtain a second judgment result; and analyzing the pixel points corresponding to the image mark in the photo to obtain a third judgment result;
constructing a user coordinate system according to the pose of the image mark in the robot base coordinate system; and
positioning the robot operation according to the user coordinate system;
wherein, after said analyzing the photo, the method further comprises: judging whether the camera is at a preset position; and, if the camera is at the preset position, acquiring and recording the current joint pose of the robot as a photographing pose; wherein said judging whether the camera is at a preset position comprises: judging, according to the first judgment result, whether the camera has photographed the image identifier; judging, according to the second judgment result, whether the front end face of the camera is parallel to the image mark; judging, according to the third judgment result, whether the distance between the front end face of the camera and the image identifier equals a preset value; and, if it is judged according to the first judgment result that the camera has photographed the image identifier, it is judged according to the second judgment result that the front end face of the camera is parallel to the image mark, and it is judged according to the third judgment result that the distance between the front end face of the camera and the image identifier equals the preset value, judging that the camera is at the preset position.
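The three judgments of claim 1 can be illustrated with a small predicate. This sketch assumes a square image mark whose apparent pixel dimensions distort when the camera is not parallel to it and shrink or grow with distance; the thresholds (`shape_tol`, `size_tol`) and the square-mark assumption are hypothetical, not values from the patent.

```python
def camera_at_preset_position(mark_found: bool,
                              mark_width_px: float,
                              mark_height_px: float,
                              expected_side_px: float,
                              shape_tol: float = 0.05,
                              size_tol: float = 0.05) -> bool:
    """Combine the three judgment results of claim 1 into one decision."""
    # First judgment: was the image identifier photographed at all?
    if not mark_found:
        return False
    # Second judgment: a square mark viewed off-parallel appears deformed,
    # so unequal apparent width and height imply the front face is tilted.
    if abs(mark_width_px - mark_height_px) > shape_tol * expected_side_px:
        return False
    # Third judgment: the mark's apparent size in pixels encodes the
    # camera-to-mark distance; compare it against the preset value.
    if abs(mark_width_px - expected_side_px) > size_tol * expected_side_px:
        return False
    return True
```

Only when all three judgments pass is the camera deemed to be at the preset position, matching the conjunctive wording of the claim.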
2. The positioning method according to claim 1, characterized in that, before controlling the camera at the working end of the robot to photograph the image identifier, the method further comprises:
controlling the robot to move the camera to a preset position;
wherein the preset position is located at a certain distance above the image mark.
3. The positioning method of claim 1, wherein obtaining the pose of the image mark in the robot base coordinate system comprises:
acquiring the pose of the camera in a robot base coordinate system and the pose of the image mark in the camera coordinate system; and
obtaining the pose of the image mark in the robot base coordinate system according to the pose of the camera in the robot base coordinate system and the pose of the image mark in the camera coordinate system.
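Claim 3 composes two known poses. As a sketch, with poses written as 4x4 homogeneous matrices, the composition is a single matrix product, T_base_mark = T_base_cam @ T_cam_mark; the numeric values below are illustrative, not taken from the patent.

```python
import numpy as np

def pose_of_mark_in_base(T_base_cam: np.ndarray,
                         T_cam_mark: np.ndarray) -> np.ndarray:
    """Pose of the image mark in the robot base frame, from the camera's
    pose in the base frame and the mark's pose in the camera frame."""
    return T_base_cam @ T_cam_mark

# Toy example: camera at (0.3, 0, 0.6) in the base frame, looking straight
# down (rotated 180 degrees about x), mark 0.2 m along the optical axis.
T_base_cam = np.eye(4)
T_base_cam[:3, :3] = np.diag([1.0, -1.0, -1.0])
T_base_cam[:3, 3] = [0.3, 0.0, 0.6]
T_cam_mark = np.eye(4)
T_cam_mark[2, 3] = 0.2
T_base_mark = pose_of_mark_in_base(T_base_cam, T_cam_mark)
```

Because the camera looks down, a point 0.2 m ahead of it sits 0.2 m *below* the camera in the base frame, so the mark ends up at (0.3, 0, 0.4).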
4. The positioning method according to claim 1, wherein positioning the robot job according to the user coordinate system comprises:
performing point position teaching on the robot according to the user coordinate system to obtain a working point under the user coordinate system;
storing the working site;
and controlling the robot to move to the working site to perform work.
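A point taught under the user coordinate system must be mapped back into the base frame before a move command can be issued. The sketch below shows that mapping under the same 4x4 homogeneous-matrix convention; `T_base_user` and the taught point are illustrative values, not data from the patent.

```python
import numpy as np

def user_point_to_base(T_base_user: np.ndarray,
                       p_user: np.ndarray) -> np.ndarray:
    """Map a 3-vector working point from the user frame to the base frame
    by applying the user frame's pose as a homogeneous transform."""
    return (T_base_user @ np.append(p_user, 1.0))[:3]

# Toy example: user frame origin at (0.4, 0.1, 0) in the base frame,
# axis-aligned; a working point taught 5 cm along the user frame's x axis.
T_base_user = np.eye(4)
T_base_user[:3, 3] = [0.4, 0.1, 0.0]
p_base = user_point_to_base(T_base_user, np.array([0.05, 0.0, 0.0]))
```

Since the mark is fixed to the work surface, re-running the user-frame setup after the workpiece moves updates `T_base_user`, and all taught points follow without re-teaching.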
5. An external control system for robotic work positioning, the external control system comprising:
the image processing module is configured to acquire a photo comprising an image identifier, analyze the photo, judge whether a camera is at a preset position, and, if the camera is at the preset position, acquire and record the current joint pose of the robot as a photographing pose; wherein the image mark is arranged at a fixed position, the image mark comprises a non-rotationally-symmetrical mark, and the fixed position comprises a work surface on which the robot needs to operate; said analyzing the photo comprises: judging whether the photo comprises the image identifier to obtain a first judgment result; judging whether the non-rotationally-symmetrical mark of the image mark in the photo is deformed to obtain a second judgment result; and analyzing the pixel points corresponding to the image mark in the photo to obtain a third judgment result; and
the communication module is configured to communicate with a robot control system and to send to the robot a first control instruction for controlling the robot to move to the photographing pose, the first control instruction controlling the robot to move its joint pose to the photographing pose;
the communication module is further configured to send to the robot a second control instruction for setting a user coordinate system, the second control instruction controlling the robot to set the user coordinate system according to the image identifier;
the image processing module is specifically configured to: judge, according to the first judgment result, whether the camera has photographed the image identifier; judge, according to the second judgment result, whether the front end face of the camera is parallel to the image mark; judge, according to the third judgment result, whether the distance between the front end face of the camera and the image identifier equals a preset value; and, if it is judged according to the first judgment result that the camera has photographed the image identifier, it is judged according to the second judgment result that the front end face of the camera is parallel to the image mark, and it is judged according to the third judgment result that the distance equals the preset value, judge that the camera is at the preset position.
6. A robot control system, wherein the robot control system corresponds in function to the external control system of claim 5, the robot control system comprising:
an external communication module configured to communicate with an external control system and to receive first and second control instructions from the external control system;
an instruction execution module configured to execute the first control instruction and the second control instruction, wherein the first control instruction controls the robot to move to a photographing pose and the second control instruction controls the robot to set a user coordinate system; and
a storage module configured to store a working site obtained by performing point location teaching on the robot according to the user coordinate system.
7. A positioning module for robot positioning, wherein the positioning module comprises a camera, an external control system, a first control button, a second control button, and an indicator lamp; the positioning module is mounted at the working end of the robot;
the camera is configured to photograph the image identifier and obtain a photo comprising the image identifier;
wherein the external control system is the external control system of claim 5;
the first control button is configured to control the camera to photograph in response to an operation on the first control button, and to trigger a user-coordinate-system setting instruction;
the second control button is configured to record a photographing pose of the robot in response to an operation on the second control button;
the indicator lamp is configured to indicate whether the user coordinate system has been successfully set according to the image identifier.
CN202111329759.0A 2021-11-11 2021-11-11 Positioning method and control system for robot operation Active CN113858214B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111329759.0A CN113858214B (en) 2021-11-11 2021-11-11 Positioning method and control system for robot operation


Publications (2)

Publication Number Publication Date
CN113858214A CN113858214A (en) 2021-12-31
CN113858214B true CN113858214B (en) 2023-06-09

Family

ID=78987850

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111329759.0A Active CN113858214B (en) 2021-11-11 2021-11-11 Positioning method and control system for robot operation

Country Status (1)

Country Link
CN (1) CN113858214B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114986522B (en) * 2022-08-01 2022-11-08 季华实验室 Mechanical arm positioning method, mechanical arm grabbing method, electronic equipment and storage medium

Citations (2)

JP2016001124A (en) * 2014-06-11 2016-01-07 キヤノン株式会社 Information processing device, photographing guidance method for target calibration, and computer program
CN113232015A (en) * 2020-05-27 2021-08-10 杭州中为光电技术有限公司 Robot space positioning and grabbing control method based on template matching

Family Cites Families (9)

JPH085021B2 (en) * 1990-02-01 1996-01-24 川崎重工業株式会社 Workpiece positioning method
KR100785784B1 (en) * 2006-07-27 2007-12-13 한국전자통신연구원 System and method for calculating locations by landmark and odometry
CN106625676B (en) * 2016-12-30 2018-05-29 易思维(天津)科技有限公司 Three-dimensional visual accurate guiding and positioning method for automatic feeding in intelligent automobile manufacturing
CN107297399B (en) * 2017-08-08 2018-10-16 南京埃斯顿机器人工程有限公司 A kind of method of robot Automatic-searching bending position
CN109059922B (en) * 2018-06-29 2020-10-16 北京旷视机器人技术有限公司 Mobile robot positioning method, device and system
CN110262507B (en) * 2019-07-04 2022-07-29 杭州蓝芯科技有限公司 Camera array robot positioning method and device based on 5G communication
CN110480642A (en) * 2019-10-16 2019-11-22 遨博(江苏)机器人有限公司 Industrial robot and its method for utilizing vision calibration user coordinate system
CN110774319B (en) * 2019-10-31 2021-07-23 深圳市优必选科技股份有限公司 Robot and positioning method and device thereof
CN110853102B (en) * 2019-11-07 2023-11-03 深圳市微埃智能科技有限公司 Novel robot vision calibration and guide method and device and computer equipment



Similar Documents

Publication Publication Date Title
JP6527178B2 (en) Vision sensor calibration device, method and program
CN107756408B (en) Robot track teaching device and method based on active infrared binocular vision
CN110125926B (en) Automatic workpiece picking and placing method and system
CN111452040B (en) System and method for associating machine vision coordinate space in a pilot assembly environment
WO2018043525A1 (en) Robot system, robot system control device, and robot system control method
CN111369625B (en) Positioning method, positioning device and storage medium
JP2016099257A (en) Information processing device and information processing method
JP2005342832A (en) Robot system
JP2005201824A (en) Measuring device
JP2005300230A (en) Measuring instrument
JP2005074600A (en) Robot and robot moving method
CN113379849A (en) Robot autonomous recognition intelligent grabbing method and system based on depth camera
CN113858214B (en) Positioning method and control system for robot operation
JP6860735B1 (en) Transport system, transport system control method, and transport system control program
JP2019098409A (en) Robot system and calibration method
JPWO2018043524A1 (en) Robot system, robot system control apparatus, and robot system control method
JP2004243215A (en) Robot teaching method for sealer applicator and sealer applicator
JP2014149182A (en) Method of positioning relative to workpiece
JP2024096756A (en) Robot-mounted mobile device and control method thereof
JP7112528B2 (en) Work coordinate creation device
TWI807990B (en) Robot teaching system
JP7366264B2 (en) Robot teaching method and robot working method
JPH09323280A (en) Control method and system of manupulator
CN112454363A (en) Control method of AR auxiliary robot for welding operation
US11054802B2 (en) System and method for performing operations of numerical control machines

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Building 6, 646 Jianchuan Road, Minhang District, Shanghai 201100

Applicant after: Jieka Robot Co.,Ltd.

Address before: Building 6, 646 Jianchuan Road, Minhang District, Shanghai 201100

Applicant before: SHANGHAI JAKA ROBOTICS Ltd.

GR01 Patent grant