CN113858214A - Positioning method and control system for robot operation - Google Patents


Info

Publication number
CN113858214A
Authority
CN
China
Prior art keywords
robot
camera
coordinate system
image identifier
pose
Prior art date
Legal status
Granted
Application number
CN202111329759.0A
Other languages
Chinese (zh)
Other versions
CN113858214B (en)
Inventor
李明洋
许雄
邵威
戚祯祥
王家鹏
Current Assignee
Shanghai Jaka Robotics Ltd
Original Assignee
Shanghai Jaka Robotics Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Jaka Robotics Ltd filed Critical Shanghai Jaka Robotics Ltd
Priority to CN202111329759.0A priority Critical patent/CN113858214B/en
Publication of CN113858214A publication Critical patent/CN113858214A/en
Application granted granted Critical
Publication of CN113858214B publication Critical patent/CN113858214B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a positioning method and a control system for robot work. In one embodiment, the method comprises: controlling a camera at the working end of the robot to photograph an image identifier arranged at a fixed position, obtaining a photo that includes the identifier; analysing the photo to obtain the pose of the image identifier in the robot base coordinate system; constructing a user coordinate system from that pose; and positioning the robot's work according to the user coordinate system. When an operator moves the robot, the user coordinate system constructed from the image identifier can be used directly for positioning and motion control, which reduces the difficulty of operating a robot that relies on a camera for accurate positioning.

Description

Positioning method and control system for robot operation
Technical Field
The invention relates to the field of robot control, in particular to a positioning method and a control system for robot operation.
Background
Robots that combine a robot arm with a movable chassis are widely used for machine-tool work such as material handling and grasping large objects.
During operation the robot must locate the object to be grasped, and the common approach is auxiliary positioning with a camera fixed on the robot. Camera-assisted positioning works in the robot's base coordinate system, which moves with the robot; positioning the robot's motion in this coordinate system therefore forces the operator to perform repeated coordinate-system conversions in actual operation. These conversions are unintuitive and difficult, and hinder the operator's work.
For the operator, controlling the robot requires converting between several coordinate systems, a process that is time-consuming, laborious, and error-prone. Reflected over the whole job, it slows progress and lowers overall work efficiency.
Disclosure of Invention
An object of the embodiments of the present invention is to provide a positioning method and a control system for robot work, so as to reduce the difficulty of controlling a robot that performs accurate positioning with the aid of a camera.
In a first aspect, an embodiment of the present invention provides a positioning method for robot work. The method comprises: controlling a camera at the working end of the robot to photograph an image identifier, obtaining a photo that includes the identifier, where the image identifier is arranged at a fixed position; analysing the photo to obtain the pose of the image identifier in the robot base coordinate system; constructing a user coordinate system from that pose; and positioning the robot's work according to the user coordinate system.
In the technical scheme of this embodiment, the camera is controlled to photograph the image identifier, the pose of the identifier in the robot base coordinate system is obtained by analysing the photo, and the robot's user coordinate system is then set at the identifier. Because the user coordinate system is constructed from the image identifier, an operator moving the robot can position and control it directly in that coordinate system.
Here, the user coordinate system can be understood as a coordinate system constructed on the working surface the robot actually operates on. The image identifier used to construct the system has definite coordinates in it; the target workpiece the robot is to pick up or act on lies on the same surface and likewise has definite coordinates, and the two differ only by a simple horizontal and vertical translation. Once the user coordinate system has been constructed from the image identifier, both the identifier and the target workpiece are fixed and intuitive for the operator, and both are identified in one and the same coordinate system, so commanding the robot to pick up the target workpiece involves no coordinate-system conversion on the operator's part. This design greatly reduces the difficulty of controlling the robot's operation.
Further, before the camera at the working end of the robot photographs the image identifier, the method also comprises controlling the robot to move the camera to a preset position located a certain distance above the identifier. A camera's focal length generally has some adjustment range, but it can be fixed to one value in use, giving a fixed object distance at which imaging is sharp. The preset position is chosen so that the camera-to-identifier distance equals this object distance; the robot moves the camera straight to the preset position and shoots there, so the photo is sharp, usable, and convenient for subsequent analysis.
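The choice of the preset shooting position can be sketched as follows. The concrete object distance and the assumption that the identifier lies flat with its normal along the base frame's +Z axis are illustrative, not taken from the patent.

```python
# Hypothetical fixed clear-imaging object distance (metres) for a camera
# whose focal length has been locked to one value.
CLEAR_OBJECT_DISTANCE = 0.30

def preset_camera_position(marker_xyz, object_distance=CLEAR_OBJECT_DISTANCE):
    """Return the point directly above the image identifier at which the
    camera front face sits exactly one object distance from the marker,
    assuming the marker's normal points along the base frame's +Z axis."""
    x, y, z = marker_xyz
    return (x, y, z + object_distance)
```

Moving the camera to `preset_camera_position(marker_xyz)` before shooting guarantees the fixed object distance and hence a sharp photo.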
Further, in the method, the image identifier includes a non-rotationally-symmetric mark, and the fixed position includes the table surface on which the robot works. A purpose-designed non-rotationally-symmetric pattern is convenient to analyse, reducing analysis difficulty; arranging the identifier on the table surface lets the operator position the robot against the identifier as a reference and operate it to complete the related tasks.
Further, after analysing the photo, the method also comprises judging whether the camera is at the preset position and, if so, acquiring and recording the robot's current joint pose as the photographing pose. The photo of the image identifier is analysed, and the result determines whether the relative position between camera and identifier satisfies a preset condition; if it does, the camera is at the preset position and the joint pose at that moment is recorded as the photographing pose. The next time the robot must photograph the identifier, it can be driven to the recorded joint pose directly, simplifying the photographing control step.
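A minimal sketch of recording and reusing the photographing pose; the class, the joint-space `MOVEJ` command tuple, and all names are illustrative stand-ins, not the patent's API.

```python
class PhotoGestureRecorder:
    """Keeps the joint pose found at the preset position so the robot can
    later be sent straight back to it for another shot."""

    def __init__(self):
        self.photo_gesture = None

    def record_if_at_preset(self, at_preset, joint_angles):
        # Only a pose verified to be at the preset position is kept.
        if at_preset:
            self.photo_gesture = tuple(joint_angles)
        return self.photo_gesture

    def replay_command(self):
        # Joint-space move command a controller could issue later,
        # skipping the search for the preset position entirely.
        if self.photo_gesture is None:
            raise RuntimeError("no photographing gesture recorded yet")
        return ("MOVEJ", self.photo_gesture)
```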
Further, in the method, analysing the photo comprises: judging whether the photo includes the image identifier, giving a first judgment result; judging whether the non-rotationally-symmetric mark of the identifier in the photo is deformed, giving a second judgment result; and analysing the pixels corresponding to the identifier in the photo, giving a third judgment result. Judging whether the camera is at the preset position then comprises: judging from the first result whether the camera captured the identifier; judging from the second result whether the camera's front face is parallel to the identifier; and judging from the third result whether the distance between the camera's front face and the identifier equals the preset value. If all three judgments hold, the camera is judged to be at the preset position. Analysing the photo thus determines whether the camera is at the preset position, i.e. whether the relative position between camera and identifier satisfies the preset condition.
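The three judgments can be sketched as follows for a square non-rotationally-symmetric marker. The corner ordering, the 5 % tolerance, and the use of side-length comparison as the deformation test are illustrative assumptions, not the patent's concrete criteria.

```python
def camera_at_preset(marker_found, corner_px, expected_side_px, tol=0.05):
    """Apply the three checks in order: (1) the marker appears in the
    photo, (2) the marker is undeformed (camera front face parallel to
    the marker), (3) its pixel size matches the size expected at the
    preset object distance."""
    # First judgment: was the marker photographed at all?
    if not marker_found:
        return False
    # Second judgment: a square marker seen head-on projects with
    # (near-)equal side lengths in pixels; large disparity means the
    # camera front face is not parallel to the marker.
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corner_px
    sides = [
        ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5,  # top
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5,  # right
        ((x3 - x2) ** 2 + (y3 - y2) ** 2) ** 0.5,  # bottom
        ((x0 - x3) ** 2 + (y0 - y3) ** 2) ** 0.5,  # left
    ]
    if (max(sides) - min(sides)) / max(sides) > tol:
        return False
    # Third judgment: apparent size scales inversely with distance, so a
    # side length near the expected value puts the camera front face at
    # the preset distance from the identifier.
    mean_side = sum(sides) / 4.0
    return abs(mean_side - expected_side_px) / expected_side_px <= tol
```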
Further, in the method, obtaining the pose of the image identifier in the robot base coordinate system comprises: acquiring the pose of the camera in the base coordinate system and the pose of the identifier in the camera coordinate system, and combining the two. The identifier's pose in the camera coordinate system is obtained by analysing the photo, and the camera's pose in the base coordinate system is a known quantity, so once both are available the identifier's pose in the base coordinate system follows by transformation, allowing the user coordinate system to be set at the identifier.
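The transformation can be sketched by chaining homogeneous transforms; the numeric poses below are invented for illustration, and a Z-Y-X roll/pitch/yaw convention is assumed.

```python
import math

def pose_to_matrix(x, y, z, roll, pitch, yaw):
    """4x4 homogeneous transform from a position (metres) and Z-Y-X
    roll/pitch/yaw orientation (radians): R = Rz(yaw)·Ry(pitch)·Rx(roll)."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr, x],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr, y],
        [-sp,     cp * sr,                cp * cr,                z],
        [0.0,     0.0,                    0.0,                    1.0],
    ]

def matmul(a, b):
    """Multiply two 4x4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Known quantity: camera pose in the robot base frame (here the camera
# looks straight down; values are hypothetical).
T_base_cam = pose_to_matrix(0.4, 0.0, 0.6, 0.0, math.pi, 0.0)
# From photo analysis: identifier pose in the camera frame (hypothetical).
T_cam_marker = pose_to_matrix(0.02, -0.01, 0.30, 0.0, 0.0, 0.0)
# Chain the transforms: identifier pose in the robot base frame.
T_base_marker = matmul(T_base_cam, T_cam_marker)
```

The user coordinate system is then set at the origin and axes encoded by `T_base_marker`.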
Further, in the method, positioning the robot's work according to the user coordinate system comprises: teaching points to the robot in the user coordinate system to obtain work sites; storing the work sites; and controlling the robot to move to a work site to operate. Because the work sites are taught and stored in the user coordinate system set from the image identifier, a worker who later operates the robot can use them directly for control, reducing operating difficulty and improving work efficiency.
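Point teaching in the user frame can be sketched as follows; the user-frame pose and the taught offsets are invented for illustration.

```python
def transform_point(T, p):
    """Apply a 4x4 homogeneous transform to a 3D point."""
    x, y, z = p
    return tuple(T[i][0] * x + T[i][1] * y + T[i][2] * z + T[i][3]
                 for i in range(3))

# Hypothetical user frame set at the identifier, 0.5 m in front of the
# base and 0.1 m up, axes aligned with the base frame.
T_base_user = [[1.0, 0.0, 0.0, 0.5],
               [0.0, 1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0, 0.1],
               [0.0, 0.0, 0.0, 1.0]]

# Work sites taught and stored in the user frame: intuitive offsets
# relative to the image identifier, no conversion needed by the operator.
taught_points = {"pick": (0.10, 0.05, 0.02), "place": (0.30, -0.05, 0.02)}

# At execution time the controller converts each site to the base frame.
base_targets = {name: transform_point(T_base_user, p)
                for name, p in taught_points.items()}
```

The operator only ever sees the marker-relative `taught_points`; the frame conversion happens once, inside the controller.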
In a second aspect, embodiments of the present invention provide an external control system for robot work positioning. The external control system comprises an image processing module and a communication module. The image processing module acquires a photo including the image identifier, analyses it, judges whether the camera is at the preset position, and if so acquires and records the robot's current joint pose as the photographing pose.
In a preferred embodiment, analysing the photo comprises: judging whether the photo includes the image identifier, giving a first judgment result; judging whether the non-rotationally-symmetric mark of the identifier in the photo is deformed, giving a second judgment result; and analysing the pixels corresponding to the identifier, giving a third judgment result. Judging whether the camera is at the preset position comprises: judging from the first result whether the camera captured the identifier; judging from the second result whether the camera's front face is parallel to the identifier; and judging from the third result whether the distance between the camera's front face and the identifier equals the preset value.
The communication module communicates with the robot control system: it sends the robot a first control instruction that moves the robot's joints to the photographing pose, and a second control instruction that makes the robot set the user coordinate system according to the image identifier.
The external control system thus processes the photos of the image identifier, judges whether the camera is at the preset position, and records the photographing pose; through its communication module it can command the robot to move to the photographing pose and to set the user coordinate system from the identifier. Controlling the robot externally, in cooperation with the robot itself, completes the task of setting the user coordinate system at the identifier, simplifying the worker's job and lowering the difficulty of photographing the identifier and setting the coordinate system there. Because the original robot control system needs no modification, the technical scheme of the invention can be applied widely to various robots.
Further, in a preferred embodiment, the external control system also comprises a coordinate-system construction module that builds the user coordinate system from the identifier's pose in the robot base coordinate system: it first acquires the camera's pose in the base coordinate system and the identifier's pose in the camera coordinate system, derives from them the identifier's pose in the base coordinate system, and finally constructs the user coordinate system from that pose.
Further, in a preferred embodiment, the external control system also includes a positioning unit, which positions the robot according to the user coordinate system by teaching points to the robot in that system to obtain work sites, and by sending the work sites to the robot control system so that the robot moves to them to operate.
Further, in a preferred embodiment, the external control system also includes a storage module for storing the work sites and the photographing pose, so that the positioning unit can later retrieve the point and pose data to control the robot's motion.
All modules of the external control system are connected by data lines and can exchange data with one another.
In a third aspect, an embodiment of the present invention provides a robot control system comprising an external communication module, an instruction execution module, and a storage module. The external communication module communicates with the external control system and receives its instructions, including the first and second control instructions, work-site data, and a third control instruction that moves the robot to a work site.
The instruction execution module executes the control instructions received by the external communication module: the first control instruction moves the robot to the photographing pose, the second makes the robot set the user coordinate system, and the third moves the robot to a work site.
The storage module stores the work sites obtained by teaching points to the robot in the user coordinate system, as well as the work-site data received by the external communication module. The robot control system thus has a communication module able to receive instructions from the external control system, an execution module able to carry them out, and a storage module able to keep the work sites taught in the user coordinate system; this arrangement allows the method of the invention to be applied while the robot operates.
Further, in a preferred embodiment, the robot control system also includes a coordinate-system setting module configured to set the user coordinate system at the image identifier: once it has acquired the identifier's pose in the robot base coordinate system, it sets the user coordinate system according to that pose.
Further, in a preferred embodiment, the robot control system also comprises a pose acquisition module that acquires the poses of the camera, of each robot joint, and of the tool in the robot base coordinate system; each pose can be sent to the external control system through the external communication module.
All modules of the robot control system are connected by data lines and can exchange data with one another.
In a fourth aspect, an embodiment of the present invention provides a positioning module for robot positioning, comprising a camera, an external control system, a first control button, a second control button, and an indicator light; the positioning module is arranged at the working end of the robot, and the external control system is that of the second aspect. The camera photographs the image identifier to obtain a photo including it. Operating the first control button triggers the camera to shoot and issues the instruction to set the user coordinate system; operating the second control button records the robot's photographing pose; and the indicator light shows whether setting the user coordinate system from the image identifier succeeded. The positioning module realises the interaction between the robot system and the user: a simple operation lets a worker have the robot set the user coordinate system at the identifier, making operation easy and improving work efficiency.
In a fifth aspect, an embodiment of the present invention provides an electronic device, which may be a robot or a server, comprising a processor and a memory storing computer-readable instructions that, when executed by the processor, perform the steps of the method provided in the first aspect.
In a sixth aspect, embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps in the method as provided in the first aspect.
In summary, a camera at the robot's working end is controlled to photograph the image identifier, the resulting photo is analysed to obtain the identifier's pose in the robot base coordinate system, and a user coordinate system is constructed from that pose; an operator controlling the robot can then position it intuitively against the image identifier and the user coordinate system set at it. In addition, with reasonable arrangement, the embodiments of the invention let the operator successfully set the user coordinate system at the identifier through a simple operation. The difficulty of operating the robot is reduced and the operator's work efficiency is improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the invention and therefore should not be considered limiting of its scope; those skilled in the art can obtain other related drawings from them without inventive effort.
Fig. 1 is a schematic view of a robot working scene according to an embodiment of the present invention;
fig. 2 is a first flowchart of a positioning method for robot work according to an embodiment of the present invention;
FIG. 3a is a schematic diagram of a first image identifier according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of a second image identifier according to an embodiment of the present invention;
FIG. 3c is a schematic diagram of a third image identifier according to an embodiment of the present invention;
fig. 4 is a second flowchart of a positioning method for robot work according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an external control system for positioning a robot in operation according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a robot control system according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a positioning module for positioning a robot according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device for performing a positioning method for robot work according to an embodiment of the present disclosure.
Icon: 10-external control system, 12-image processing module, 14-communication module;
20-a robot control system, 21-an external communication module, 23-an instruction execution module, 25-a storage module;
30-positioning module, 31-indicator light, 33-camera, 35-first control button, 37-second control button;
300-application scene, 301-robot base, 303-mechanical arm, 305-end effector, 311-camera, 313-camera front end face, 315-certain distance, 321-workbench, 323-target workpiece;
400-image identification, 401-image logo;
410-image identification, 411-image identification;
420-image identification, 421-image logo;
500-electronic device, 501-processor, 502-communication interface, 503-memory, 504-communication bus.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
It should be noted that the embodiments or technical features of the embodiments in the present application may be combined without conflict.
In the related art, for a compound robot that uses a camera for accurate positioning, the operator must in practice carry out transformations among several coordinate systems, such as the camera coordinate system, the robot base coordinate system, and the robot's working coordinate system, in order to issue the corresponding control, which is time-consuming and error-prone. The demands on robot operators are correspondingly high: they must understand not only mechanical operation but also program editing and configuration and vision applications, which makes qualified operators scarce and labour costs high.
As described above, in the related art the robot control process suffers from difficult coordinate-system conversion, time-consuming operation, and frequent errors. To solve this, the invention provides a positioning method and control system for robot work that programs the setting of the robot's working coordinate system, i.e. the user coordinate system, by means of an image identifier, so that a robot operator can complete the visual commissioning of the user coordinate system with only the most basic mechanical knowledge.
In some application scenarios, the positioning method for robot work may run on a processor, server, or host computer outside the robot, or directly on a processor arranged inside it. In either case the processor may host an application used to control the robot, and the application can request the processor to realise the corresponding function according to the user's instructions. By way of example, the invention is described below as applied to an external server.
The above analysis of the related art is the result of the inventors' practice and careful study; the discovery of these problems and the solutions proposed in the following embodiments should therefore be regarded as the inventors' contribution to the invention.
Referring to fig. 1, a scenario in which the method of the invention is applied to robot work is shown. The application scenario 300 comprises: a robot base 301, a robot arm 303, an end effector 305, a camera 311, a table 321, a target workpiece 323, and an image identifier 400. The camera 311 is arranged at the robot's working end, beside the end effector 305 of the arm 303, and its position relative to the end effector 305 can be fixed with a bracket or other means (not shown), so that when the camera is mounted at the end of the robot, the camera front face 313 and the optical-axis centre (not shown) are at fixed positions relative to the end effector 305. In a preferred embodiment the camera carries a supplementary light source to guarantee a sharp image when shooting. The image identifier 400 is arranged at a fixed position and bears a figure or pattern for recognition.
In a first aspect, an embodiment of the present invention provides a positioning method for robot work.
Fig. 2 is a first flowchart of a positioning method for robot work according to an embodiment of the present invention, and as shown in fig. 2, the method includes steps S101 to S107.
Step S101: and controlling a camera at the operation end of the robot to shoot an image identifier to obtain a picture comprising the image identifier, wherein the image identifier is arranged at a fixed position.
Before a camera controlling the working end of the robot shoots an image identifier, the camera is mounted at the working end of the robot, and an image identifier is set at a fixed position. After receiving the control instruction, step S101 is executed to control the camera mounted at the working end of the robot to capture the image identifier and obtain a photo including the image identifier. The control instruction can be triggered by a button, and can also be sent by an operator in a mouse click mode and the like.
Step S103: and analyzing the picture to obtain the pose of the image identifier in the robot base coordinate system.
After the external server obtains the photo including the image identifier, it analyzes the photo to obtain the pose of the image identifier in the robot base coordinate system. A pose comprises a position and an orientation; any rigid body can be uniquely and accurately located in a spatial coordinate system (OXYZ) by its position and orientation. Pose description and coordinate transformation are the basis of the kinematic and dynamic analysis of industrial robots. Theoretically, a six-axis robot has 6 degrees of freedom: X, Y, Z, Roll (roll angle), Pitch (pitch angle), and Yaw (yaw angle). The first three represent the position, i.e. the coordinates in the three dimensions of space, while the last three represent the orientation, i.e. the rotation angles about the X, Y, and Z axes in the current attitude. Therefore, after step S103 is executed, the position and orientation of the image identifier in the robot base coordinate system are obtained, and with this pose the image identifier can be accurately located in the robot base coordinate system.
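As a sketch of the 6-DOF pose described above, the three angles can be expanded into a rotation matrix. The Z-Y-X (yaw, then pitch, then roll) order used here is one common industrial convention; the document itself does not fix a convention, and the numeric values are hypothetical:

```python
import math

def rpy_to_matrix(roll, pitch, yaw):
    # Z-Y-X (yaw-pitch-roll) rotation matrix; a common industrial
    # convention, assumed here since the document does not specify one.
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

# A pose is then (X, Y, Z) plus the three angles (hypothetical values):
x, y, z, roll, pitch, yaw = 0.5, 0.2, 0.3, 0.0, 0.0, math.pi / 2
R = rpy_to_matrix(roll, pitch, yaw)  # a 90-degree yaw maps the X axis onto Y
```

The position part stays a plain vector; only the orientation needs the matrix form when poses are chained later.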
Step S105: and constructing a user coordinate system according to the pose of the image identification in the robot base coordinate system.
After obtaining the pose of the image identifier in the robot base coordinate system, the external server can proceed to step S105, in which it constructs a user coordinate system according to that pose. The user coordinate system is the working coordinate system of the robot, that is, the coordinate system an operator uses when actually operating the robot to perform a task. Because the user coordinate system is constructed from the pose of the image identifier in the robot base coordinate system, the robot can convert points in the user coordinate system into points in the base coordinate system.
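The point-conversion property mentioned above can be sketched minimally: a user frame, defined by the marker's pose in the base frame, maps user-frame points into base-frame points. Pure-Python matrices are used rather than a robotics library, and all numeric values are hypothetical:

```python
def make_frame(origin, rot):
    # 4x4 homogeneous transform describing the user frame in the base frame.
    T = [[rot[i][0], rot[i][1], rot[i][2], origin[i]] for i in range(3)]
    T.append([0.0, 0.0, 0.0, 1.0])
    return T

def to_base(T, p):
    # Map a point expressed in the user frame into base-frame coordinates.
    return [sum(T[i][j] * p[j] for j in range(3)) + T[i][3] for i in range(3)]

# User frame at the marker, 0.8 m / 0.1 m from the base origin, with axes
# aligned to the base axes (identity rotation); values are hypothetical.
T_base_user = make_frame([0.8, 0.1, 0.0], [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
p_base = to_base(T_base_user, [0.05, 0.02, 0.0])  # a user-frame point
```

With a non-identity rotation the same two functions still apply, which is why the robot never needs a special case for tilted worktables.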
Step S107: and positioning the robot operation according to the user coordinate system.
After the user coordinate system has been successfully set at the image identifier, the external server can execute step S107 and position the robot operation according to the user coordinate system. Because the image identifier occupies a fixed position, the robot can obtain position points under the user coordinate system set at the identifier, so each position point can be located accurately. Here, the user coordinate system can be understood as a coordinate system constructed on the operation interface the robot actually works on. The image identifier used to construct the user coordinate system has definite coordinates in that system; the target workpiece the robot is to acquire or operate on lies on the same operation interface and likewise has definite coordinates in the user coordinate system, and the two sets of coordinates differ only by a simple translation in the horizontal and vertical directions. Therefore, once the user coordinate system is constructed from the image identifier, both the identifier and the target workpiece are fixed and intuitive in that system for the operator, and both are expressed in the same coordinate system. Acquiring the target workpiece therefore involves no coordinate-system conversion on the operator's side, and this design greatly reduces the difficulty of controlling the robot's operation.
The positioning method for robot work provided by the first aspect of the present application as described above may be performed by the aforementioned external server, or may be performed by other execution subjects or electronic devices in the art, such as a processor, a personal computer, and the like. It will be appreciated by those skilled in the art that the foregoing description of the use of an external server is intended to clearly illustrate the relevant art and is not intended to be limiting.
Compared with the traditional method of positioning directly in the robot base coordinate system, positioning in a user coordinate system set at the image identifier overcomes the problem that the base coordinate system is itself in motion while the robot moves during operation, which makes quick and accurate positioning difficult.
With continued reference to fig. 2, in some alternative implementations the method further includes, before step S101, step S100: control the robot to move the camera to a preset position.
The preset position is located a certain distance above the image identifier. Referring to fig. 1, the camera front end face 313 is spaced from the image identifier 400 by a distance 315, i.e. the vertical distance from the image identifier to the preset position. When moving the camera 311 so that the front end face 313 is at the distance 315 above the image identifier 400, a measuring tool such as a plumb line of fixed length or a tape measure, or other distance-measuring equipment, may optionally be used to assist. The camera is provided with an adjustable optical lens: adjusting the focal length of the lens adjusts the camera's angle of view, and once the angle of view is determined, the observable range is proportional to the object distance. For example, if a lens of a given focal length images an object clearly at a distance of 40 cm, the preset distance value is set to 40 cm; at this distance, i.e. when the camera images clearly, the narrow side of the camera field of view is about 10 cm long. If the positioning accuracy of the AGV carrying the robot is ±2 cm and the side length of the image identifier is known to be 2 cm, a margin of 3 cm is reserved; this ensures that after the robot base moves into place the camera can still capture the image identifier, giving the field-of-view configuration with the highest positioning accuracy.
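The worst-case visibility check implied by the example above can be sketched as follows. The margin is left as a parameter because the text does not state exactly how the 3 cm reserve is apportioned; the numbers are the example's, not a prescribed API:

```python
def marker_always_visible(fov_narrow_cm, marker_side_cm, agv_tol_cm, margin_cm):
    # The marker must fit inside the narrow field-of-view side even when the
    # AGV stops at its worst-case positioning error, plus a per-side margin.
    needed = marker_side_cm + 2 * agv_tol_cm + 2 * margin_cm
    return needed <= fov_narrow_cm

# Example values: 10 cm narrow FOV at the 40 cm working distance,
# a 2 cm marker, AGV accuracy of +/-2 cm, 1 cm safety margin per side.
print(marker_always_visible(10.0, 2.0, 2.0, 1.0))  # True: 2 + 4 + 2 = 8 <= 10
```

Tightening the margin trades robustness against AGV error for a smaller field of view and therefore higher pixel-per-millimeter accuracy.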
Referring to fig. 1 and figs. 3a, 3b, 3c, in some alternative implementations the image identifier includes a non-rotationally-symmetric mark, and the fixed position includes the surface of the table on which the robot needs to work.
In some application scenarios, having the image identifier include a non-rotationally-symmetric mark facilitates image recognition. Once the image identifier is set, the size of the identifier and the specific size of the mark on it can be determined. Figs. 3a, 3b and 3c show schematic diagrams of three image identifiers. The image identifier 400 shown in fig. 3a includes a mark 401, a right angle formed by two mutually perpendicular line segments. The image identifier 410 shown in fig. 3b includes a mark 411, a cross formed by two mutually perpendicular line segments of different lengths, which has four right angles. The image identifier 420 shown in fig. 3c includes a mark 421, a right angle formed by two mutually perpendicular line segments, each of which carries a scale. These three image identifiers are only examples; alternatively, the marks on the image identifiers can be other patterns or shapes, such as several parallel lines or a box. Referring to fig. 1, the image identifier 400 can be fixed on the surface of the workbench 321; only this one fixed position is shown in fig. 1, but optionally the fixed position can also be a wall or other surface. Alternatively, the image identifier 400 may be an easily machined metal sheet, readily fixed to various surfaces by screws, or it may be applied to surfaces as a laser marking, a sticker, or the like.
In some optional implementations, after analyzing the photo, the method further comprises: judging whether the camera is at the preset position; and, if it is, acquiring and recording the current joint pose of the robot as the photographing pose.
Fig. 4 is a second flowchart of a positioning method for robot work according to an embodiment of the present invention, and as shown in fig. 4, the method includes steps S201 to S207:
step S201: and controlling a camera at the operation end of the robot to shoot an image identifier to obtain a picture comprising the image identifier, wherein the image identifier is arranged at a fixed position.
Step S202: and analyzing the picture and judging whether the camera is at a preset position.
If the camera is not at the preset position, returning to the step S201; if the camera is at the preset position, the following steps S203 to S207 are continued.
Step S203: and acquiring and recording the current joint pose of the robot as a photographing gesture.
Step S204: and acquiring the pose of the image identifier in a robot base coordinate system.
Step S205 is to construct a user coordinate system according to the pose of the image identifier in the robot base coordinate system.
Step S207: and positioning the robot operation according to the user coordinate system.
In some application scenarios, the external server determines whether the camera is at the preset position according to the analysis result of the photo. If it is not, the external server controls the camera to photograph the image identifier again, repeating steps S201 and S202 until the camera is determined to be at the preset position. If the camera is at the preset position, step S203 is executed: the current joint pose of the robot is acquired and recorded as the photographing pose. The joint pose may comprise only the pose of the robot's end joint, i.e. the joint controlling the camera; because the camera is fixed relative to this joint, when that joint is in the photographing pose the camera is exactly the set distance above the image identifier and can photograph it clearly. The other joints of the robot can then adjust automatically as the camera's joint moves into the photographing pose, with no specific requirements, as long as the camera reaches the preset position. Alternatively, the joint pose may include the poses of all joints of the robot, i.e. the entire current state of the robot is acquired and recorded, because in this state the robot is certain the camera can be controlled to the preset position for shooting.
In some application scenarios, after the photographing pose has been recorded, the robot may be directly driven to the photographing pose according to the recorded joint pose, i.e. the camera is moved to the preset position. In that case the camera never fails to capture, or to capture clearly, the image identifier, so the robot's controller need not readjust each joint pose of the robot for a re-shoot, and the judgment in step S202 and the re-recording of the joint pose as the photographing pose in step S203 may be omitted. Of course, if steps S202 and S203 are retained, the judgment in step S202 can be regarded as a confirmation that the camera is indeed at the preset position and that an operation error or other cause will not affect the next step, and step S203 can be regarded as a correction: if the quality of the newly captured photo is higher, the current joint pose is recorded again as the photographing pose.
In some optional implementations, wherein the analyzing the photograph comprises: judging whether the picture comprises the image identifier or not to obtain a first judgment result; judging whether the non-rotational symmetric mark of the image identifier in the picture is deformed or not to obtain a second judgment result; analyzing corresponding pixel points of the image identification in the picture to obtain a third judgment result;
the judging whether the camera is at the preset position comprises: judging, according to the first determination result, whether the camera has captured the image identifier; judging, according to the second determination result, whether the front end face of the camera is parallel to the image identifier; and judging, according to the third determination result, whether the distance between the front end face of the camera and the image identifier equals a preset value. If the first determination result shows that the camera has captured the image identifier, the second shows that the front end face of the camera is parallel to the image identifier, and the third shows that the distance between the front end face of the camera and the image identifier equals the preset value, the camera is judged to be at the preset position.
In some application scenarios, when the external server analyzes the photo, it must judge whether the photo includes the image identifier, whether the non-rotationally-symmetric mark of the image identifier is deformed in the photo, and the corresponding pixel points of the image identifier in the photo, obtaining the first, second, and third determination results respectively. From the first determination result the external server can judge whether the camera has captured the image identifier, giving a rough judgment of the camera's position. From the second determination result it can judge whether the front end face of the camera is parallel to the image identifier: by the imaging principle, the photographed object appears undeformed only when the camera's imaging plane is parallel to it. For example, if the mark of the image identifier is recognized to include mutually perpendicular lines, and those lines still intersect perpendicularly in the photo, the front end face of the camera and the image identifier can be judged to be parallel. Optionally, if the image identifier does not include a right angle, the analysis is adapted to the graphic content on the image identifier and other analysis methods are used to determine whether the front end face of the camera is parallel to it.
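The right-angle test described above can be sketched as an angle check between the two detected line directions in pixel space. This is a simplified illustration: a real implementation would first extract the segments from the photo, for example with a Hough transform, which is not shown here:

```python
import math

def is_right_angle(v1, v2, tol_deg=2.0):
    # Angle between two detected line-direction vectors in pixel coordinates;
    # if it stays near 90 degrees, the camera face is parallel to the marker.
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return abs(angle - 90.0) <= tol_deg

print(is_right_angle((100, 0), (0, 80)))   # exactly perpendicular -> True
print(is_right_angle((100, 0), (60, 80)))  # projectively skewed   -> False
```

The tolerance absorbs pixel-quantization noise; tightening it demands a more exactly parallel camera face.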
From the third determination result it can be judged whether the distance between the front end face of the camera and the image identifier equals the preset value: from the size of the pattern in the photo, i.e. the pixel size of the non-rotationally-symmetric mark, together with known fixed values such as the camera's focal length and the distance between the imaging plane and the lens, the actual distance between the camera front end face and the image identifier can be computed. It can then be judged whether the actual distance matches the preset distance, and hence whether the camera is at the preset position relative to the image identifier. Whether the camera is at the preset position is judged against these three criteria; if the analysis of the photo satisfies all three, the camera is judged to be at the preset position.
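A minimal pinhole-camera sketch of that distance check; the focal length in pixels and the tolerance are hypothetical calibration values, not figures from the document:

```python
def estimate_distance_mm(marker_side_mm, marker_side_px, focal_px):
    # Pinhole model: object distance = focal length * real size / image size.
    return focal_px * marker_side_mm / marker_side_px

def at_preset(measured_mm, preset_mm, tol_mm=5.0):
    # Compare the computed distance against the preset working distance.
    return abs(measured_mm - preset_mm) <= tol_mm

# A 20 mm marker imaged as 100 px by a lens with a 2000 px focal length:
d = estimate_distance_mm(20.0, 100.0, 2000.0)  # 400.0 mm
print(at_preset(d, 400.0))  # True: the camera is at the preset distance
```

Because the marker's real side length is known in advance, this single measurement suffices; no stereo camera or depth sensor is needed.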
In some optional implementations, if the first determination result (whether the photo includes the image identifier) is that it does not, or the second determination result (whether the non-rotationally-symmetric mark of the image identifier in the photo is deformed) is that it is deformed, or the third determination result (from analyzing the corresponding pixel points of the image identifier in the photo) is that the distance between the front end face of the camera and the image identifier does not match the preset value, no further judgment is made and the method returns to step S201 to photograph again.
In some optional implementations, when the first, second, and third determination results show that the camera is not at the preset position for photographing, the position of the camera is adjusted according to the analysis result so that it can photograph again from the preset position. If the image identifier was captured but the mark in it does not appear perpendicular, the camera can photograph again once its front end face has been adjusted to be parallel to the image identifier. If the image identifier was captured and is parallel to the camera's front end face, but the analysis shows their distance is not the preset value, the camera is moved away from or toward the image identifier in parallel by the corresponding amount, according to whether the distance is greater or less than the preset value, and then photographs again.
In some optional implementations, obtaining the pose of the image identifier in the robot base coordinate system includes: acquiring the pose of the camera in the robot base coordinate system and the pose of the image identifier in the camera coordinate system; and obtaining the pose of the image identifier in the robot base coordinate system from those two poses.
In some application scenarios, the external server obtains the pose of the image identifier in the robot base coordinate system by first acquiring the pose of the camera in the robot base coordinate system and the pose of the image identifier in the camera coordinate system, and then combining the two. This is a coordinate-transformation problem: in this embodiment, two poses are required, namely the pose of the camera in the robot base coordinate system and the pose of the image identifier in the camera coordinate system; the latter is obtained by analyzing the photo or from the preset position between the camera and the image identifier.
In some optional implementations, positioning the robot operation according to the user coordinate system includes: performing point-location teaching on the robot under the user coordinate system to obtain a work site in that system; storing the work site; and controlling the robot to move to the work site for operation.
In some application scenarios, the external control system positions the robot operation according to the user coordinate system by teaching work sites under the user coordinate system set at the image identifier and storing them. In actual operation the robot can then be driven to the taught sites to work according to the stored data, without the operator repositioning anything, which improves working efficiency.
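The teach-store-replay cycle described above reduces to a small lookup structure. A sketch, with hypothetical names and values; a real system would persist this to the storage module rather than keep it in memory:

```python
taught_points = {}  # name -> (x, y, z, roll, pitch, yaw) in the user frame

def teach(name, pose):
    # Store a work site taught under the user coordinate system.
    taught_points[name] = pose

def replay(name):
    # Retrieve the stored site so the robot can move back to it without
    # the operator repositioning anything.
    return taught_points[name]

teach("pick", (0.05, 0.02, 0.10, 0.0, 0.0, 0.0))
print(replay("pick")[:3])  # position part of the taught work site
```

Because every stored pose is expressed in the user frame anchored at the marker, the points stay valid even if the robot base has moved between teaching and replay.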
In a second aspect, embodiments of the present invention provide an external control system for robotic work positioning.
Fig. 5 is a schematic structural diagram of an external control system for positioning a robot work according to an embodiment of the present invention, and as shown in fig. 5, the external control system 10 includes an image processing module 12 and a communication module 14. The external control system may be a module, a program segment, or code on the electronic device. It should be understood that the external control system 10 corresponds to the method embodiment of fig. 2 and 4, and can perform the steps related to the method embodiment of fig. 2 and 4, and the specific functions of the external control system 10 can be referred to the description above, and the detailed description is omitted here to avoid repetition.
Optionally, the image processing module 12 is configured to obtain a photo including the image identifier, analyze the photo, judge whether the camera is at the preset position, and, if it is, acquire and record the current joint pose of the robot as the photographing pose. Analyzing the photo comprises: judging whether the photo includes the image identifier to obtain a first determination result; judging whether the non-rotationally-symmetric mark of the image identifier in the photo is deformed to obtain a second determination result; and analyzing the corresponding pixel points of the image identifier in the photo to obtain a third determination result. Judging whether the camera is at the preset position comprises: judging, according to the first determination result, whether the camera has captured the image identifier; judging, according to the second determination result, whether the front end face of the camera is parallel to the image identifier; and judging, according to the third determination result, whether the distance between the front end face of the camera and the image identifier equals the preset value; the final determination is obtained by judging against these criteria.
The communication module 14 is configured to communicate with a robot control system. It sends the robot a first control instruction for moving to the photographing pose, which controls the robot to move its joint pose into the photographing pose. In addition, the communication module 14 is configured to send the robot a second control instruction for setting the user coordinate system, which controls the robot to set the user coordinate system according to the image identifier.
Optionally, the external control system 10 further includes a storage module (not shown in the figure), and the storage module is configured to store the photographing posture and the taught working point location.
Optionally, the external control system 10 further comprises a coordinate system construction module (not shown in the figure) for constructing the user coordinate system according to the pose of the image identifier in the robot base coordinate system. The module first acquires the pose of the camera in the robot base coordinate system and the pose of the image identifier in the camera coordinate system, then obtains from them the pose of the image identifier in the robot base coordinate system, and finally constructs the user coordinate system from that pose.
Optionally, the external control system 10 further comprises a positioning unit (not shown in the figure) for positioning the robot according to the user coordinate system, comprising: point location teaching is carried out on the robot according to a user coordinate system to obtain a working point; and controlling the robot to move to the working site to carry out operation by sending the working site to the robot control system.
All modules of the external system are provided with data connecting lines, and data communication can be realized among the modules.
In a third aspect, an embodiment of the present invention provides a robot control system.
Fig. 6 is a schematic structural diagram of a robot control system according to an embodiment of the present invention, and as shown in fig. 6, the robot control system 20 includes an external communication module 21, an instruction execution module 23, and a storage module 25. The robot control system 20 may be a module, program segment or code on an electronic device that includes a robot. It should be understood that the robot control system 20 corresponds to the external control system 10 in function, and can cooperate with the external control system 10 to complete the method embodiments of fig. 2 and 4, and cooperate with the external control system 10 to perform the steps related to the method embodiments of fig. 2 and 4, and the specific functions of the robot control system 20 can be referred to the above description, and the detailed description is omitted here to avoid repetition.
Optionally, the external communication module 21 is configured to communicate with the external control system and receive instructions from the external control system 10, including the first and second control instructions, the work site data, and a third control instruction for controlling the robot to move to the work site.
The instruction execution module 23 is configured to execute control instructions, which include the instructions received by the external communication module 21, namely the first, second, and third control instructions: the first control instruction controls the robot to move to the photographing pose, and the second control instruction controls the robot to set the user coordinate system.
The storage module 25 is configured to store a work site obtained by performing point location teaching on the robot according to the user coordinate system and work site data received by the external communication module 21.
Optionally, the robot control system 20 further includes a coordinate system setting module (not shown in the figure) configured to set the user coordinate system; in particular, it sets the user coordinate system at the image identifier. After acquiring the pose of the image identifier in the robot base coordinate system, the coordinate system setting module can set the user coordinate system according to that pose.
Optionally, the robot control system 20 further includes a pose acquisition module (not shown in the figure) for acquiring poses of the camera, the joints of the robot and the tool in the robot base coordinate system, and the poses can be sent to the external control system through the external communication module.
In a fourth aspect, embodiments of the present invention provide a positioning module for robot positioning.
Fig. 7 is a schematic structural diagram of a positioning module for positioning a robot according to an embodiment of the present invention; as shown in fig. 7, the positioning module 30 includes an indicator lamp 31, a camera 33, the external control system 10, a first control button 35, and a second control button 37. The positioning module 30 is part of a robot control system for robot positioning. It should be understood that the positioning module 30 corresponds to the method embodiment of fig. 2 and 4 described above, and is capable of executing the steps related to the method embodiment of fig. 2 and 4, and the specific functions of the positioning module 30 can be referred to the description above, and the detailed description is appropriately omitted here to avoid repetition.
Optionally, the positioning module 30 is mounted at the working end of the robot. The camera 33 is configured to photograph the image identifier and obtain a photo including it. The external control system 10 is the external control system shown in the embodiment corresponding to fig. 5. The first control button 35 is configured to control the camera to shoot when operated and to trigger the instruction for setting the user coordinate system; the second control button 37 is configured to record the photographing pose of the robot when operated; the indicator lamp 31 is configured to indicate whether setting the user coordinate system according to the image identifier succeeded. Optionally, the indication includes: if the setting succeeds, the indicator lamp flashes green once; if it fails, the indicator lamp flashes red once.
In a fifth aspect, an embodiment of the present application provides an electronic device. Referring to fig. 8, fig. 8 is a schematic structural diagram of an electronic device for performing a positioning method of a robot job according to an embodiment of the present disclosure, where the electronic device 500 may include: at least one processor 501, such as a CPU, at least one communication interface 502, at least one memory 503, and at least one communication bus 504. Wherein the communication bus 504 is used to enable direct connection communication of these components. The communication interface 502 of the device in the embodiment of the present application is used for performing signaling or data communication with other node devices. The memory 503 may be a high-speed RAM memory, or may be a non-volatile memory (non-volatile memory), such as at least one disk memory. The memory 503 may optionally be at least one storage device located remotely from the aforementioned processor. The memory 503 stores computer readable instructions, and when the computer readable instructions are executed by the processor 501, the electronic device can execute the method processes shown in fig. 2 and 4.
It will be appreciated that the configuration shown in fig. 8 is merely illustrative and that the electronic device may include more or fewer components than shown in fig. 8 or have a different configuration than shown in fig. 8. The components shown in fig. 8 may be implemented in hardware, software, or a combination thereof.
In a sixth aspect, embodiments of the present application provide a computer-readable storage medium on which a computer program is stored. The computer program, when executed by a processor, performs the positioning method for robot work provided in the first aspect of the present application, and/or the method processes performed by the electronic device in the method embodiments shown in fig. 2 and 4.
In a seventh aspect, the present application provides a computer program product. The computer program product includes a computer program stored on a computer-readable storage medium; the computer program includes program instructions which, when executed by a computer, cause the computer to execute the method provided by the above method embodiments. For example, the method may include: controlling a camera at the working end of the robot to photograph an image identifier, which is arranged at a fixed position, to obtain a picture including the image identifier; analyzing the picture to obtain the pose of the image identifier in the robot base coordinate system; constructing a user coordinate system according to the pose of the image identifier in the robot base coordinate system; and positioning the robot operation according to the user coordinate system.
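The chain of steps above hinges on one pose composition: the camera's pose in the base frame (from the robot's forward kinematics) combined with the image identifier's pose in the camera frame (from analyzing the picture) yields the identifier's pose in the base frame. A minimal sketch with homogeneous transforms follows; the numeric poses are hypothetical placeholders, not values from the patent.

```python
import numpy as np

def pose_to_matrix(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical inputs: camera pose in the robot base frame, and
# image-identifier pose in the camera frame.
T_base_cam = pose_to_matrix(np.eye(3), [0.4, 0.0, 0.5])
T_cam_marker = pose_to_matrix(np.eye(3), [0.0, 0.0, 0.3])

# Chain the transforms to get the identifier's pose in the base frame;
# the user coordinate system is then anchored at this pose.
T_base_marker = T_base_cam @ T_cam_marker
```

In practice the rotation parts would come from the robot controller and from marker detection rather than being identity matrices; the composition itself is unchanged.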
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
It should be noted that the functions, if implemented in the form of software functional modules and sold or used as independent products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present invention, and is not intended to limit the scope of the present invention, and it will be apparent to those skilled in the art that various modifications and variations can be made in the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A positioning method for robotic work, the method comprising:
controlling a camera at the operation end of the robot to shoot an image identifier to obtain a picture comprising the image identifier, wherein the image identifier is arranged at a fixed position;
analyzing the picture to obtain the pose of the image identifier in a robot base coordinate system;
constructing a user coordinate system according to the pose of the image identifier in the robot base coordinate system; and
positioning the robot operation according to the user coordinate system.
2. The positioning method according to claim 1, wherein before the camera at the working end of the robot is controlled to photograph the image identifier, the method further comprises:
controlling the robot to move the camera to a preset position;
wherein the preset position is located at a certain distance above the image identifier.
3. The positioning method according to claim 1, wherein the image identifier includes a non-rotationally symmetric mark, and the fixed position includes a surface of the table on which the robot is to operate.
4. The positioning method according to claim 3, wherein after the analyzing of the picture, the method further comprises: judging whether the camera is at a preset position; and
if the camera is at the preset position, acquiring and recording the current joint pose of the robot as a photographing pose.
5. The positioning method according to claim 4,
wherein the analyzing of the picture comprises:
judging whether the picture includes the image identifier, to obtain a first judgment result;
judging whether the non-rotationally symmetric mark of the image identifier in the picture is deformed, to obtain a second judgment result; and
analyzing the pixel points corresponding to the image identifier in the picture, to obtain a third judgment result;
and the judging whether the camera is at the preset position comprises:
judging, according to the first judgment result, whether the camera has photographed the image identifier;
judging, according to the second judgment result, whether the front end face of the camera is parallel to the image identifier; and
judging, according to the third judgment result, whether the distance between the front end face of the camera and the image identifier is a preset value;
wherein, if it is judged according to the first judgment result that the camera has photographed the image identifier, it is judged according to the second judgment result that the front end face of the camera is parallel to the image identifier, and it is judged according to the third judgment result that the distance between the front end face of the camera and the image identifier is the preset value, the camera is judged to be at the preset position.
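Outside the claim language, the three judgments above can be sketched as a single predicate. The inputs (detected edge lengths in pixels, marker pixel area) and the 5% tolerance are illustrative assumptions; the patent does not fix concrete thresholds.

```python
def camera_at_preset(marker_found, edge_lengths_px, area_px,
                     expected_area_px, tol=0.05):
    """Combine the three judgments of claim 5 into one preset-position check."""
    # First judgment: the image identifier must appear in the picture at all.
    if not marker_found:
        return False
    # Second judgment: a square mark projects with (roughly) equal edge
    # lengths only when the camera's front face is parallel to the marker
    # plane; unequal edges indicate perspective deformation.
    mean_len = sum(edge_lengths_px) / len(edge_lengths_px)
    if any(abs(l - mean_len) / mean_len > tol for l in edge_lengths_px):
        return False
    # Third judgment: the marker's pixel area shrinks with distance, so it
    # matches a preset value only at the preset camera-to-marker distance.
    return abs(area_px - expected_area_px) / expected_area_px <= tol

# Hypothetical usage: a square mark seen head-on at the right distance
# passes; a tilted view (unequal edges) fails.
ok = camera_at_preset(True, [100, 101, 99, 100], 10000, 10050)
tilted = camera_at_preset(True, [100, 100, 70, 70], 10000, 10050)
```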
6. The positioning method according to claim 1, wherein the obtaining of the pose of the image identifier in the robot base coordinate system comprises:
acquiring the pose of the camera in the robot base coordinate system and the pose of the image identifier in the camera coordinate system; and
obtaining the pose of the image identifier in the robot base coordinate system from the pose of the camera in the robot base coordinate system and the pose of the image identifier in the camera coordinate system.
7. The positioning method according to claim 1, wherein the positioning of the robot operation according to the user coordinate system comprises:
performing point position teaching on the robot according to the user coordinate system, to obtain a working point in the user coordinate system;
storing the working point; and
controlling the robot to move to the working point to perform the operation.
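One reason the taught working point is stored in the user coordinate system: if the image identifier (and thus the user frame) shifts, the stored user-frame coordinates stay valid and only the user-to-base transform changes. A minimal sketch of mapping a stored point back to base coordinates for motion, with a hypothetical user-frame pose:

```python
import numpy as np

# Hypothetical user coordinate system in the base frame: pure translation.
T_base_user = np.eye(4)
T_base_user[:3, 3] = [0.4, 0.0, 0.8]

# Working point taught and stored in user-frame coordinates (homogeneous).
p_user = np.array([0.10, 0.05, 0.0, 1.0])

# To move the robot there, map the stored point into base coordinates.
p_base = T_base_user @ p_user
```

If the identifier is later re-detected at a new pose, only `T_base_user` is rebuilt; every stored `p_user` is reused unchanged.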
8. An external control system for robotic work positioning, the external control system comprising:
the image processing module is configured to acquire a picture including an image identifier, analyze the picture, judge whether a camera is at a preset position, and, if the camera is at the preset position, acquire and record the current joint pose of the robot as a photographing pose; and
the communication module is configured to communicate with a robot control system and send to the robot a first control instruction for controlling the robot to move to the photographing pose, the first control instruction controlling the robot to move its joints to the photographing pose;
the communication module is further configured to send to the robot a second control instruction for setting a user coordinate system, the second control instruction controlling the robot to set the user coordinate system according to the image identifier.
9. A robot control system, characterized in that the robot control system comprises:
the external communication module is configured to communicate with an external control system and receive first and second control instructions from the external control system;
the instruction receiving module is configured to execute the first control instruction and the second control instruction, wherein the first control instruction controls the robot to move to a photographing pose and the second control instruction controls the robot to set a user coordinate system; and
the storage module is configured to store the working points obtained by point position teaching of the robot according to the user coordinate system.
10. A positioning module for robot positioning, characterized in that the positioning module comprises a camera, an external control system, a first control button, a second control button and an indicator light; the positioning module is arranged at the operation end of the robot;
the camera is configured to photograph an image identifier and obtain a picture including the image identifier;
wherein the external control system is the external control system of claim 8;
the first control button is configured to, when operated, control the camera to shoot and trigger an instruction to set the user coordinate system;
the second control button is configured to record the photographing pose of the robot according to the operation on the second control button;
the indicator light is configured to indicate whether the setting of the user coordinate system according to the image identification is successful.
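The first-button flow of claim 10 (shoot, set the user coordinate system, indicate success or failure) can be sketched as a small handler. The classes and function names below are illustrative stand-ins, not part of the claimed module.

```python
class StubCamera:
    """Illustrative stand-in for the camera of claim 10."""
    def shoot(self):
        return "picture-with-identifier"

class StubControlSystem:
    """Illustrative stand-in for the external control system of claim 8."""
    def set_user_frame(self, picture):
        # Succeeds only if the picture actually contains the image identifier.
        return picture == "picture-with-identifier"

class StubIndicator:
    """Illustrative stand-in for the indicator light."""
    def __init__(self):
        self.flashes = []
    def flash(self, color):
        self.flashes.append(color)

def on_first_button(camera, control_system, indicator):
    """First-button handler: shoot, set the user coordinate system, indicate."""
    picture = camera.shoot()
    ok = control_system.set_user_frame(picture)
    indicator.flash("green" if ok else "red")  # green on success, red on failure
    return ok

indicator = StubIndicator()
ok = on_first_button(StubCamera(), StubControlSystem(), indicator)
```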
CN202111329759.0A 2021-11-11 2021-11-11 Positioning method and control system for robot operation Active CN113858214B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111329759.0A CN113858214B (en) 2021-11-11 2021-11-11 Positioning method and control system for robot operation


Publications (2)

Publication Number Publication Date
CN113858214A true CN113858214A (en) 2021-12-31
CN113858214B CN113858214B (en) 2023-06-09

Family

ID=78987850

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111329759.0A Active CN113858214B (en) 2021-11-11 2021-11-11 Positioning method and control system for robot operation

Country Status (1)

Country Link
CN (1) CN113858214B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114986522A * 2022-08-01 2022-09-02 Ji Hua Laboratory Mechanical arm positioning method, mechanical arm grabbing method, electronic equipment and storage medium
CN114986522B * 2022-08-01 2022-11-08 Ji Hua Laboratory Mechanical arm positioning method, mechanical arm grabbing method, electronic equipment and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03228589A (en) * 1990-02-01 1991-10-09 Kawasaki Heavy Ind Ltd Positioning method for work
KR100785784B1 (en) * 2006-07-27 2007-12-13 한국전자통신연구원 System and method for calculating locations by landmark and odometry
JP2016001124A (en) * 2014-06-11 2016-01-07 キヤノン株式会社 Information processing device, photographing guidance method for target calibration, and computer program
CN106625676A (en) * 2016-12-30 2017-05-10 易思维(天津)科技有限公司 Three-dimensional visual accurate guiding and positioning method for automatic feeding in intelligent automobile manufacturing
CN109059922A (en) * 2018-06-29 2018-12-21 北京艾瑞思机器人技术有限公司 Method for positioning mobile robot, device and system
US20190240838A1 (en) * 2017-08-08 2019-08-08 Nanjing Estun Robotics Co., Ltd Method for robot to automatically find bending position
CN110262507A (en) * 2019-07-04 2019-09-20 杭州蓝芯科技有限公司 A kind of camera array robot localization method and device based on 5G communication
CN110480642A (en) * 2019-10-16 2019-11-22 遨博(江苏)机器人有限公司 Industrial robot and its method for utilizing vision calibration user coordinate system
CN110774319A (en) * 2019-10-31 2020-02-11 深圳市优必选科技股份有限公司 Robot and positioning method and device thereof
CN110853102A (en) * 2019-11-07 2020-02-28 深圳市微埃智能科技有限公司 Novel robot vision calibration and guide method, device and computer equipment
CN113232015A (en) * 2020-05-27 2021-08-10 杭州中为光电技术有限公司 Robot space positioning and grabbing control method based on template matching




Similar Documents

Publication Publication Date Title
US11207781B2 (en) Method for industrial robot commissioning, industrial robot system and control system using the same
JP6527178B2 (en) Vision sensor calibration device, method and program
TWI670153B (en) Robot and robot system
US8095237B2 (en) Method and apparatus for single image 3D vision guided robotics
JP6855492B2 (en) Robot system, robot system control device, and robot system control method
JP2016000442A (en) Robot, robotic system, and control device
JP2012254518A (en) Robot control system, robot system and program
JP2016099257A (en) Information processing device and information processing method
JP2005074600A (en) Robot and robot moving method
JP6885856B2 (en) Robot system and calibration method
CN111225143B (en) Image processing apparatus, control method thereof, and program storage medium
CN109715307A (en) Bending machine with workspace image capture device and the method for indicating workspace
JPWO2018043524A1 (en) Robot system, robot system control apparatus, and robot system control method
CN112529856A (en) Method for determining the position of an operating object, robot and automation system
CN111993420A (en) Fixed binocular vision 3D guide piece feeding system
JP7112528B2 (en) Work coordinate creation device
TWI807990B (en) Robot teaching system
JP7366264B2 (en) Robot teaching method and robot working method
CN112643718B (en) Image processing apparatus, control method therefor, and storage medium storing control program therefor
JP2015058488A (en) Robot control system, robot, robot control method, and program
JPH09323280A (en) Control method and system of manupulator
CN112184819A (en) Robot guiding method and device, computer equipment and storage medium
WO2019176450A1 (en) Information processing device, information processing method, and program
CN113297952B (en) Measuring method and system for rope-driven flexible robot in complex environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Building 6, 646 Jianchuan Road, Minhang District, Shanghai 201100

Applicant after: Jieka Robot Co.,Ltd.

Address before: Building 6, 646 Jianchuan Road, Minhang District, Shanghai 201100

Applicant before: SHANGHAI JAKA ROBOTICS Ltd.

GR01 Patent grant