CN113409394A - Intelligent forking method and system - Google Patents

Intelligent forking method and system

Info

Publication number
CN113409394A
Authority
CN
China
Prior art keywords
coordinate
coordinates
camera
image
path
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110944132.XA
Other languages
Chinese (zh)
Inventor
蒋世奇
张林帅
李浩麟
李以澄
顾硕鑫
张雪原
叶茂
肖地波
王婷婷
王林
严嘉嘉
王裕鑫
李飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu New Meteorological Technology Industry Co ltd
Chengdu University of Information Technology
Original Assignee
Chengdu New Meteorological Technology Industry Co ltd
Chengdu University of Information Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu New Meteorological Technology Industry Co ltd, Chengdu University of Information Technology filed Critical Chengdu New Meteorological Technology Industry Co ltd
Priority to CN202110944132.XA priority Critical patent/CN113409394A/en
Publication of CN113409394A publication Critical patent/CN113409394A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q10/047Optimisation of routes or paths, e.g. travelling salesman problem
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/40Business processes related to the transportation industry
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20061Hough transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20076Probabilistic image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose


Abstract

The invention provides an intelligent forking method and an intelligent forking system. The method comprises the following steps: acquiring a working-area picture in real time with a camera and extracting features, obtaining a first image coordinate of the object to be conveyed and a third image coordinate of an obstacle in the working area, and obtaining a second coordinate of the conveying terminal point; acquiring the initial position coordinate of the fork claw, converting the first and third image coordinates into a first and a third coordinate through the transformation from image coordinates to actual three-dimensional coordinates, acquiring the real-time position coordinate of the fork claw, and acquiring the transformation relation between the fork claw and the camera's flange coordinate system so as to plan a first path from the fork claw to the object to be transported; the fork claw reaches the object to be carried according to the first path updated in real time and forks it; a second path to the conveying terminal point is then planned in real time according to the second and third coordinates, and the conveying is completed along the second path. The method can fully automatically fork an object to be transported and transport it to the destination point.

Description

Intelligent forking method and system
Technical Field
The invention belongs to the field of intelligent transportation, and particularly relates to an intelligent forking method and an intelligent forking system.
Background
Material cargo handling in large enterprise plants is an important part of industrial production. The loading, unloading and carrying of materials are auxiliary links in the production process of manufacturing enterprises, but the loading, unloading and carrying of materials are indispensable links for mutual connection among processes, workshops and factories.
The conventional industry often uses manual forklifts to carry goods, which brings low labor efficiency and large potential safety hazards. In recent years, with the establishment of intelligent production factories, the research direction of material handling has developed towards computers and automatic-identification robots, bringing the design and manufacturing level of material handling and transportation to a new level.
Forking is a common method of goods transportation. The intelligent forklift has great advantages in sorting speed, management software platform, warehouse entry and exit speed, management efficiency and user experience, and is being introduced into more and more intelligent transportation scenes. An intelligent forklift needs no human driver: it combines bar-code technology, wireless local area network technology and data acquisition technology, and uses navigation modes such as electromagnetic induction, machine vision and laser radar to assist RFID identification, so that it can run on complex paths, track multiple stations reliably, and is convenient to operate. The intelligent forklift is more intelligent and flexible, with the basic characteristics of low cost, high efficiency and safe operation; it can meet the customized requirements of enterprises and replace manual carrying or manual forklifts, with obvious advantages.
In view of this, the intelligent robot forking method is designed with an economical, practical and moderately advanced positioning, based on artificial intelligence, computer control, intelligent mobile robots, visual servoing and related advanced technologies; through positioning, navigation and image recognition it realizes the automated process of stably clamping, loading, carrying, unloading and warehousing products, thereby improving production efficiency and reducing the probability of safety accidents.
Disclosure of Invention
In view of the above, an objective of the present invention is to provide an intelligent forking method, which can fully automatically fork an object to be transported and transport the object to a destination.
In order to achieve the purpose, the technical scheme of the invention is as follows: an intelligent forking method comprises the following steps:
acquiring a working area picture by using a camera, extracting features, obtaining a first image initial coordinate of an object to be conveyed and a third initial image coordinate of an obstacle in a working area, and obtaining a second coordinate of a conveying terminal point;
acquiring initial position coordinates of the fork claw, converting the first image initial coordinates and the third initial image coordinates into first initial coordinates and third initial coordinates through conversion from image coordinates to actual three-dimensional coordinates, and planning a first path from the fork claw to an object to be conveyed;
the fork claw runs according to the first path, and a second picture of the working area and the real-time position coordinates of the fork claw are collected in real time. Because the robot carries the camera as it moves, the relative relation between the camera coordinate system and the robot's world coordinate system changes constantly, while the rigid mounting of the camera keeps the relative position between the camera and the robot's fork claw unchanged. Therefore the installation position of the camera at the end of the robot, i.e. the transformation relation between the camera coordinate system and the robot fork-claw flange coordinate system, is obtained by a hand-eye calibration method; the relative position relation between the camera coordinates and the world coordinate system is thereby determined, and the pose of the fork claw in the robot world coordinate system is determined through the transformation;
Extracting features according to the second picture acquired in real time to acquire a first image coordinate of the object to be conveyed, a second image coordinate of a conveying terminal point and a third image coordinate of an obstacle in a working area;
respectively converting the first image coordinate, the second image coordinate and the third image coordinate into a first coordinate, a second coordinate and a third coordinate, and updating the first path in real time;
the fork claw reaches the object to be carried according to the first path updated in real time and forks it;
and planning a second path to the conveying terminal point in real time according to the second coordinate and the third coordinate, and finishing conveying according to the second path.
Further, the conversion of the image coordinates to actual three-dimensional coordinates specifically includes the steps of:
converting the image coordinates to cartesian coordinates:
$$x=(u-u_0)\,dx,\qquad y=(v-v_0)\,dy$$

where dx and dy denote the physical size of one pixel along the two axes.
establishing a camera coordinate system by using a video camera, and converting the Cartesian coordinates into camera coordinates:
$$z_C\begin{bmatrix}x\\ y\\ 1\end{bmatrix}=\begin{bmatrix}f&0&0&0\\ 0&f&0&0\\ 0&0&1&0\end{bmatrix}\begin{bmatrix}x_C\\ y_C\\ z_C\\ 1\end{bmatrix}$$
converting the camera coordinates to actual three-dimensional coordinates:
$$\begin{bmatrix}x_w\\ y_w\\ z_w\\ 1\end{bmatrix}=\begin{bmatrix}R & T\\ 0^{\mathsf T} & 1\end{bmatrix}^{-1}\begin{bmatrix}x_C\\ y_C\\ z_C\\ 1\end{bmatrix}$$
where (u, v) are image coordinates, (x, y) two-dimensional Cartesian coordinates, (x, y, z) three-dimensional Cartesian coordinates, (u_0, v_0) the initial coordinates of the image coordinate system, (x_C, y_C, z_C) the camera coordinates, and (x_w, y_w, z_w) the coordinates of the point W in the actual three-dimensional coordinate system; the R matrix is a rotation matrix, the T matrix is a translation matrix, and f is the focal length of the camera.
Further, path planning is carried out through a SLAM optimization method based on a probability model, wherein the probability model comprises the following steps:
$$p(A_k \mid B)=\frac{p(B \mid A_k)\,p(A_k)}{\sum_{j=1}^{m} p(B \mid A_j)\,p(A_j)},\qquad k=1,\dots,m$$
where p(A_k) is the probability of event A_k, p(B|A_k) is the probability of event B given that event A_k has occurred, p(A_k|B) is the probability of event A_k given that event B has occurred, and m is the number of all possible events.
Further, the video camera is an industrial camera with a CCD and/or CMOS photosensitive chip.
Further, the camera is mounted on the prongs.
The invention also aims to provide an intelligent forking system which can be used for automatically transporting cargoes in a factory.
In order to achieve the purpose, the technical scheme of the invention is as follows: an intelligent forking system comprising:
the data acquisition module is used for acquiring a working area picture in real time by using the camera, extracting features, acquiring a first image coordinate of an object to be conveyed and a third image coordinate of an obstacle in the working area in real time, acquiring position coordinates of the fork claw and the camera in real time, acquiring a transformation relation of a flange coordinate system of the fork claw and the camera, and acquiring a second coordinate of a conveying terminal point;
the coordinate conversion module is connected with the data acquisition module and used for converting the first image coordinate and the third image coordinate into a first coordinate and a third coordinate through conversion from the image coordinate to an actual three-dimensional coordinate;
and the path planning module is connected with the coordinate conversion module and the data acquisition module, and is used for planning a first path from the fork claw to the object to be transported according to the first coordinate and the third coordinate which are updated in real time and the transformation relation between the fork claw and the flange coordinate system of the camera, and planning a second path from the object to be transported to the transport destination according to the third coordinate, the second coordinate and the transformation relation between the fork claw and the flange coordinate system of the camera after the fork claw is forked according to the point to be transported.
Further, the conversion of the image coordinates to the actual three-dimensional coordinates in the coordinate conversion module specifically includes the following steps:
converting the image coordinates to cartesian coordinates:
$$x=(u-u_0)\,dx,\qquad y=(v-v_0)\,dy$$

where dx and dy denote the physical size of one pixel along the two axes.
establishing a camera coordinate system by using a video camera, and converting the Cartesian coordinates into camera coordinates:
$$z_C\begin{bmatrix}x\\ y\\ 1\end{bmatrix}=\begin{bmatrix}f&0&0&0\\ 0&f&0&0\\ 0&0&1&0\end{bmatrix}\begin{bmatrix}x_C\\ y_C\\ z_C\\ 1\end{bmatrix}$$
converting the camera coordinates to actual three-dimensional coordinates:
$$\begin{bmatrix}x_w\\ y_w\\ z_w\\ 1\end{bmatrix}=\begin{bmatrix}R & T\\ 0^{\mathsf T} & 1\end{bmatrix}^{-1}\begin{bmatrix}x_C\\ y_C\\ z_C\\ 1\end{bmatrix}$$
where (u, v) are image coordinates, (x, y) two-dimensional Cartesian coordinates, (x, y, z) three-dimensional Cartesian coordinates, (u_0, v_0) the initial coordinates of the image coordinate system, (x_C, y_C, z_C) the camera coordinates, and (x_w, y_w, z_w) the coordinates of the point W in the actual three-dimensional coordinate system; the R matrix is a rotation matrix, the T matrix is a translation matrix, and f is the focal length of the camera.
Further, the path planning module plans the path by using a SLAM optimization method based on a probabilistic model, where the probabilistic model is:
$$p(A_k \mid B)=\frac{p(B \mid A_k)\,p(A_k)}{\sum_{j=1}^{m} p(B \mid A_j)\,p(A_j)},\qquad k=1,\dots,m$$
where p(A_k) is the probability of event A_k, p(B|A_k) is the probability of event B given that event A_k has occurred, p(A_k|B) is the probability of event A_k given that event B has occurred, and m is the number of all possible events.
Further, the video camera is an industrial camera with a CCD and/or CMOS photosensitive chip.
Further, the camera is mounted on the prongs.
Compared with the prior art, the invention has the following advantages:
the invention provides an intelligent forking method and an intelligent forking system, which can freely move in a working area, can avoid obstacles in real time, can run on a complex path and reliably track multiple stations, are convenient to operate, and simultaneously realize the requirements of automatically grabbing objects to be transported, safely transferring the objects to a temporary storage point, placing the objects to an appointed position and the like.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. The drawings in the following description are examples of the invention and it will be clear to a person skilled in the art that other drawings can be derived from them without inventive exercise.
FIG. 1 is a block diagram of an intelligent forking system of the present invention;
FIG. 2 is a schematic diagram of the transformation of image coordinates to actual three-dimensional coordinates according to the present invention;
FIG. 3 is a process diagram of the transformation relationship between the fork and the flange coordinate system of the camera according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the scope of protection of the present invention.
The examples are given for the purpose of better illustration of the invention and are not intended to limit the invention to the examples. Therefore, those skilled in the art should make insubstantial modifications and adaptations to the embodiments of the present invention in light of the above teachings and remain within the scope of the invention.
Example 1
Referring to fig. 1, a diagram of an intelligent forking system according to the present invention is shown, the system including: the data acquisition module 1 is used for acquiring a working area picture in real time by using a camera, extracting features, acquiring a first image coordinate of an object to be conveyed and a third image coordinate of an obstacle in the working area in real time, acquiring position coordinates of a fork claw and the camera in real time, acquiring a transformation relation of the fork claw and a flange coordinate system of the camera, and acquiring a second coordinate of a conveying terminal point;
in this embodiment, the camera is an industrial camera with a CCD and/or CMOS photosensitive chip, and the camera is mounted on the fork claw.
The coordinate conversion module 2 is connected with the data acquisition module 1 and is used for converting the first image coordinate and the third image coordinate into a first coordinate and a third coordinate through the conversion from the image coordinate to the actual three-dimensional coordinate;
in this embodiment, the conversion from the image coordinate to the actual three-dimensional coordinate specifically includes the following steps:
converting the image coordinates to cartesian coordinates:
$$x=(u-u_0)\,dx,\qquad y=(v-v_0)\,dy$$

where dx and dy denote the physical size of one pixel along the two axes.
establishing a camera coordinate system by using a video camera, and converting the Cartesian coordinates into camera coordinates:
$$z_C\begin{bmatrix}x\\ y\\ 1\end{bmatrix}=\begin{bmatrix}f&0&0&0\\ 0&f&0&0\\ 0&0&1&0\end{bmatrix}\begin{bmatrix}x_C\\ y_C\\ z_C\\ 1\end{bmatrix}$$
converting the camera coordinates to actual three-dimensional coordinates:
$$\begin{bmatrix}x_w\\ y_w\\ z_w\\ 1\end{bmatrix}=\begin{bmatrix}R & T\\ 0^{\mathsf T} & 1\end{bmatrix}^{-1}\begin{bmatrix}x_C\\ y_C\\ z_C\\ 1\end{bmatrix}$$
where (u, v) are image coordinates, (x, y) two-dimensional Cartesian coordinates, (x, y, z) three-dimensional Cartesian coordinates, (u_0, v_0) the initial coordinates of the image coordinate system, (x_C, y_C, z_C) the camera coordinates, and (x_w, y_w, z_w) the coordinates of the point W in the actual three-dimensional coordinate system; the R matrix is a rotation matrix, the T matrix is a translation matrix, and f is the focal length of the camera.
And the path planning module 3 is connected with the coordinate conversion module 2 and the data acquisition module 1, and is used for planning a first path from the fork claw to the object to be transported according to the first coordinate and the third coordinate updated in real time and the transformation relation between the fork claw and the flange coordinate system of the camera, and planning a second path from the object to be transported to a transport destination according to the third coordinate, the second coordinate and the transformation relation between the fork claw and the flange coordinate system of the camera after the fork claw is forked according to the point to be transported.
Further, the path planning module plans the path by using a SLAM optimization method based on a probability model, wherein the probability model is as follows:
$$p(A_k \mid B)=\frac{p(B \mid A_k)\,p(A_k)}{\sum_{j=1}^{m} p(B \mid A_j)\,p(A_j)},\qquad k=1,\dots,m$$
where p(A_k) is the probability of event A_k, p(B|A_k) is the probability of event B given that event A_k has occurred, p(A_k|B) is the probability of event A_k given that event B has occurred, and m is the number of all possible events.
Example 2
Based on the system of embodiment 1, the present embodiment provides an intelligent forking method, including the following steps:
s1: acquiring a working area picture by using a camera, extracting features, obtaining a first image initial coordinate of an object to be conveyed and a third initial image coordinate of an obstacle in a working area, and obtaining a second coordinate of a conveying terminal point;
in this embodiment, the camera is an industrial camera with a CCD and/or CMOS photosensitive chip, and the camera is mounted on the fork claw, which may be a mechanical fork claw commonly used at present.
S2: acquiring initial position coordinates of the fork claw, converting the first image initial coordinates and the third initial image coordinates into first initial coordinates and third initial coordinates through conversion from image coordinates to actual three-dimensional coordinates, and planning a first path from the fork claw to an object to be conveyed;
in this embodiment, a camera is used to collect a scene image of a field work, feature extraction is performed on the image, a deviation of a workpiece coordinate system is calculated through an internal algorithm, and then data is transmitted to a robot, the data guides the robot to establish a new workpiece coordinate system, and a specific process principle can refer to fig. 2:
The image coordinate system (u, v) is a two-dimensional plane coordinate system defined on the image. In image description it is mainly measured in pixels, but it can also be measured in actual physical length, i.e. in Cartesian coordinates. As shown in fig. 2, the image coordinate system (u, v) has its initial coordinate at (u_0, v_0), with the coordinate axis directions shown in the figure. The origin of the actual physical coordinate system (x, y) lies at the centre point O of the physical image, which in pixel units corresponds to the midpoint (u_0, v_0) of the two axes' maximum values; its axis directions coincide with the pixel axes, and its Cartesian coordinates may take negative values. The transformation relationship between the two coordinates of any point in the image is then:
$$u=\frac{x}{dx}+u_0,\qquad v=\frac{y}{dy}+v_0$$

where dx and dy denote the physical width and height of one pixel.
the above formula can be expressed as a homogeneous coordinate matrix:
$$\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}1/dx & 0 & u_0\\ 0 & 1/dy & v_0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}x\\ y\\ 1\end{bmatrix}$$
then, a camera coordinate system (c) can be establishedX C ,Y C ,Z C ) The camera coordinate system is the optical center point of the optical lensO C Is the origin of coordinates of a system of coordinatesZ C The axis (lens axis) is perpendicular to the image plane and passes through the central O point of the image coordinate system, of the camera coordinate systemX C Y C Two axes are respectively flatTravelling in image coordinate systemxyThe coordinate value of the axis, the outside point W in the camera coordinate system is (X C ,Y C ,Z C ) Projected points in the image coordinate systemmThe coordinate value of (A) isu m ,v m ) Or (a)x m ,y m ) And converting the points in the Cartesian coordinate system into points in the camera coordinate system:
the coordinate value of the point W in the camera coordinate system shown in FIG. 2 is: (X C ,Y C ,Z C ) Which maps points in a Cartesian coordinate system of the imagemHas the coordinates of (x,y,z) And obtaining a formula according to the geometrical relation:
$$x=\frac{f\,X_C}{Z_C},\qquad y=\frac{f\,Y_C}{Z_C}$$
where f is the focal length of the industrial camera; by the similar-triangle principle the coordinate value z = f. Expressed as a homogeneous matrix equation, the formula above becomes:
$$Z_C\begin{bmatrix}x\\ y\\ 1\end{bmatrix}=\begin{bmatrix}f&0&0&0\\ 0&f&0&0\\ 0&0&1&0\end{bmatrix}\begin{bmatrix}X_C\\ Y_C\\ Z_C\\ 1\end{bmatrix}$$
The actual three-dimensional coordinate system (X_W, Y_W, Z_W) is a reference coordinate system set arbitrarily by the user, generally in millimetres and placed where the object position and the calculation are convenient to describe. To describe the pose parameters of the object to be transported, this embodiment selects the robot coordinate system as the actual three-dimensional coordinate system, which also reduces the transformation calculations between the two coordinate systems. As shown in fig. 2, the point W has coordinate values (x_w, y_w, z_w) in the actual three-dimensional coordinate system (X_W, Y_W, Z_W); converting these to its coordinate values in the camera coordinate system (X_C, Y_C, Z_C), the coordinate transformation is described by the homogeneous equation:
$$\begin{bmatrix}X_C\\ Y_C\\ Z_C\\ 1\end{bmatrix}=\begin{bmatrix}R & T\\ 0^{\mathsf T} & 1\end{bmatrix}\begin{bmatrix}x_w\\ y_w\\ z_w\\ 1\end{bmatrix}$$
where the R matrix is a rotation matrix and the T matrix is a translation matrix, and [R|T] is a 3 × 4 matrix. The pixel coordinates of any known point in the image can then be converted through the equations above into the corresponding actual three-dimensional coordinate values:
$$z_C\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}1/dx & 0 & u_0\\ 0 & 1/dy & v_0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}f&0&0&0\\ 0&f&0&0\\ 0&0&1&0\end{bmatrix}\begin{bmatrix}R & T\\ 0^{\mathsf T} & 1\end{bmatrix}\begin{bmatrix}x_w\\ y_w\\ z_w\\ 1\end{bmatrix}$$
where z_C is a constant, namely the Z_C-axis coordinate value of the point W in the camera coordinate system, and the product of the two leftmost matrices is the camera's intrinsic matrix

$$K=\begin{bmatrix} f/dx & 0 & u_0\\ 0 & f/dy & v_0\\ 0 & 0 & 1\end{bmatrix}$$
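The pixel-to-world chain above can be sketched numerically. In the snippet below the intrinsic matrix K, the extrinsics R and T, and the depth z_C are illustrative values, not parameters from this patent; the function simply inverts K, scales by the known depth, and undoes the rigid world-to-camera transform.

```python
import numpy as np

# Illustrative camera parameters (assumed values, not from the patent):
# K combines f/dx, f/dy and the principal point (u0, v0).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                      # rotation matrix (world -> camera)
T = np.array([0.0, 0.0, 500.0])    # translation vector

def pixel_to_world(u, v, z_c):
    """Back-project pixel (u, v) with known camera depth z_c to world coordinates."""
    p_cam = z_c * np.linalg.inv(K) @ np.array([u, v, 1.0])  # camera coordinates
    return R.T @ (p_cam - T)                                # invert the rigid transform

print(pixel_to_world(320, 240, 500.0))   # the principal point maps to the world origin here
```

Note that a single pixel only fixes a viewing ray; the depth z_C must come from elsewhere (here it is simply assumed known), which is why the patent treats z_C as a constant.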
in this embodiment, the coordinate values of any point in the space described by the two different coordinate systems are different, and the mapping relationship described by converting the coordinate value of the point from one coordinate system to the other coordinate system becomes coordinate transformation.
Further, in this embodiment the image processing algorithm uses the Canny algorithm to obtain an edge image of the object to be transported, the Hough transform algorithm to detect the posture of the object, and the Hu-moment algorithm to detect the coordinates of its centre of gravity. Because the captured image is affected by the external environment, the traditional Canny algorithm is improved with adaptive edge detection, so that the acquired image can be processed into a good edge image in real time in a changing environment. Finally, the image position and posture parameters of the object to be transported are obtained by processing the image, and the actual position and posture parameters of the container in the robot coordinate system are obtained through the coordinate transformation from the image coordinate system to the actual three-dimensional coordinate system, providing feedback for the intelligent transportation of the carrying robot; an initial obstacle-avoiding first path is then planned from the first and third coordinates in the same coordinate system;
further, the SLAM technology is synchronous positioning and map building, in this embodiment, path planning is performed by a SLAM optimization method based on a probabilistic model, and the probabilistic model is:
$$p(A_k \mid B)=\frac{p(B \mid A_k)\,p(A_k)}{\sum_{j=1}^{m} p(B \mid A_j)\,p(A_j)},\qquad k=1,\dots,m$$
where p(A_k) is the probability of event A_k, p(B|A_k) is the probability of event B given that event A_k has occurred, p(A_k|B) is the probability of event A_k given that event B has occurred, and m is the number of all possible events.
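The model above is Bayes' rule over m hypotheses. A minimal sketch of one such update follows, with made-up numbers (a robot in one of four map cells, and a sensor reading that favours cell 0):

```python
def bayes_update(prior, likelihood):
    """Posterior p(A_k | B) from priors p(A_k) and likelihoods p(B | A_k)."""
    joint = [p * l for p, l in zip(prior, likelihood)]   # p(B | A_k) * p(A_k)
    total = sum(joint)                                   # sum over all m events
    return [j / total for j in joint]

prior = [0.25, 0.25, 0.25, 0.25]        # m = 4 equally likely robot cells
likelihood = [0.9, 0.1, 0.1, 0.1]       # the observation strongly favours cell 0
posterior = bayes_update(prior, likelihood)
print(posterior[0])                      # cell 0 now carries 0.75 of the probability
```

In a SLAM filter this update is applied repeatedly, alternating with a motion-prediction step, so the posterior tracks the robot pose as observations arrive.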
S3: the fork claw operates according to the first path, a second picture of the working area and real-time position coordinates of the fork claw are collected in real time, and a transformation relation between the fork claw and a flange coordinate system of the camera is obtained;
in practical use, since the robot fork carries the camera to move, the relative relation between the camera coordinate system and the actual three-dimensional coordinate system of the robot is always changed, and the relative position relation between the camera and the robot actuator is kept unchanged due to the rigid connection of the camera. Therefore, the purpose of this step is to obtain the installation position of the camera at the end of the robot, i.e. the transformation relationship between the camera coordinate system and the robot end flange coordinate system, and the different pose relationships between the camera coordinate system and the actual three-dimensional coordinates of the robot can be obtained by the current pose state of the robot end flange coordinate system and the above-mentioned phenotype result, and the calibration method generally adopted is as follows: and adjusting the robot to enable the camera to shoot the same target in different poses, and obtaining transformation parameters of the camera relative to the tail end of the robot according to the pose of the robot and external parameters of the camera relative to the target.
Specifically, four coordinate systems are referenced in this step, namely a base coordinate system, a fork-claw coordinate system, a camera coordinate system, and a calibration object coordinate system, as shown in fig. 3.
Here baseHcal denotes the transformation from the base coordinate system to the calibration-object coordinate system, comprising a rotation matrix and a translation vector; camHtool denotes the transformation from the camera coordinate system to the fork-claw coordinate system. These two transformations are invariant during the movement of the fork claw. camHcal can be obtained by camera calibration, and baseHtool can be read from the robot system.
The fork claw is then controlled to move from position 1 to position 2. At position 1:
base = baseHtool(1) * tool(1)
tool(1) = inv(camHtool) * cam(1)
cam(1) = camHcal(1) * obj
Combining the above three formulas:
base = baseHtool(1) * inv(camHtool) * camHcal(1) * obj
After the fork claw moves to position 2:
base = baseHtool (2)* inv(camHtool)* camHcal(2)*obj
Since base and obj are fixed:
baseHtool (1)* inv(camHtool)* camHcal(1)=baseHtool (2)* inv(camHtool)* camHcal(2)
the camHcal can be obtained by obtaining external parameters through camera calibration, the baseHtool is known and can be read out from a common robot, the camHtool is unknown, multiple groups of data of different camera positions can be taught through hands and eyes, the cvsolve of opencv can be called to solve multiple groups of linear over-definite equation sets, and a camHtool matrix is solved.
S4: extracting features according to the second picture acquired in real time to acquire a first image coordinate of the object to be conveyed, a second image coordinate of a conveying terminal point and a third image coordinate of an obstacle in a working area;
In the actual carrying process, the positions of the fork claw and the camera change constantly, so the picture information obtained also changes; the picture is therefore updated in real time, and the obstacle image coordinates and the coordinates of the object to be carried are re-extracted;
S5: respectively converting the first image coordinate, the second image coordinate and the third image coordinate into a first coordinate, a second coordinate and a third coordinate, and updating the first path in real time;
In this step, following the procedures of steps S2 and S3, the different coordinate systems are converted into the same coordinate system, and the first path from the fork claw to the object to be transported is re-planned from the first coordinate, the third coordinate and the transformation relation between the fork claw and the camera flange coordinate system, all updated in real time;
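The image-to-world conversion used in this step follows the standard pinhole camera model. The numpy sketch below shows the forward direction (world point to pixel) with hypothetical intrinsic and extrinsic parameters; all numbers are made up for illustration and are not calibration values from this embodiment:

```python
import numpy as np

# Hypothetical camera parameters, for illustration only.
f, dx, dy = 0.008, 1e-5, 1e-5           # focal length and physical pixel sizes (metres)
u0, v0 = 320.0, 240.0                    # principal point (pixels)
K = np.array([[f / dx, 0.0, u0],         # intrinsic matrix
              [0.0, f / dy, v0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                            # world -> camera rotation (extrinsic)
T = np.array([0.0, 0.0, 2.0])            # world -> camera translation (extrinsic)

def world_to_pixel(pw):
    """Project a 3-D world point to pixel coordinates via the pinhole model."""
    pc = R @ pw + T                      # world coordinates -> camera coordinates
    u, v, w = K @ pc                     # perspective projection (homogeneous)
    return np.array([u / w, v / w])      # normalize by depth

print(world_to_pixel(np.array([0.1, 0.05, 0.0])))  # → [360. 260.]
```

Going the other way (pixel to world), the depth z_C must be supplied from elsewhere, e.g. the known working plane; the back-projection then inverts the same chain.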
S6: the fork claw reaches the object to be carried along the first path updated in real time and forks it;
S7: planning a second path to the conveying terminal point in real time according to the second coordinate and the third coordinate, and completing the conveying along the second path.
Since the object must be carried to the destination after the forking in step S6 is completed, steps S2 and S3 are repeated, and a second path from the object to the conveying terminal point is planned from the third coordinate, the second coordinate, and the transformation relation between the fork claw and the camera flange coordinate system, thereby completing the conveyance.
Preferably, after the conveyance is completed, the fork claw can also be moved to its assigned idle position by the same procedure.
While the present invention has been described with reference to the embodiments shown in the drawings, these embodiments are illustrative rather than restrictive, and those skilled in the art may make various modifications without departing from the spirit and scope of the present invention.

Claims (10)

1. An intelligent forking method is characterized by comprising the following steps:
acquiring a working area picture by using a camera, extracting features, obtaining a first image initial coordinate of an object to be conveyed and a third initial image coordinate of an obstacle in a working area, and obtaining a second coordinate of a conveying terminal point;
acquiring initial position coordinates of the fork claw, converting the first image initial coordinates and the third initial image coordinates into first initial coordinates and third initial coordinates through conversion from image coordinates to actual three-dimensional coordinates, and planning a first path from the fork claw to an object to be conveyed;
the fork claw runs according to the first path, a second picture of a working area and real-time position coordinates of the fork claw are collected in real time, the installation position of the camera at the tail end of the robot is obtained by adopting a hand-eye calibration method, namely, the transformation relation between a camera coordinate system and a robot fork claw flange coordinate system is obtained, the relative position relation between the camera coordinate system and a world coordinate system is determined, and the pose of the fork claw under the robot world coordinate system is determined through transformation;
extracting features according to the second picture acquired in real time to acquire a first image coordinate of the object to be conveyed, a second image coordinate of a conveying terminal point and a third image coordinate of an obstacle in a working area;
respectively converting the first image coordinate, the second image coordinate and the third image coordinate into a first coordinate, a second coordinate and a third coordinate, and updating the first path in real time;
the fork claw reaches an object point to be carried according to the first path updated in real time to fork;
and planning a second path to the conveying terminal point in real time according to the second coordinate and the third coordinate, and finishing conveying according to the second path.
2. The method according to claim 1, characterized in that said conversion of image coordinates into actual three-dimensional coordinates comprises in particular the steps of:
converting the image coordinates to Cartesian coordinates:

x = (u - u0)·dx, y = (v - v0)·dy

establishing a camera coordinate system with the video camera, and converting the Cartesian coordinates into camera coordinates:

x = f·xC/zC, y = f·yC/zC

converting the camera coordinates into actual three-dimensional coordinates:

(xC, yC, zC)ᵀ = R·(xw, yw, zw)ᵀ + T

wherein (u, v) are image coordinates, (x, y) are two-dimensional Cartesian coordinates in the image plane, (u0, v0) are the initial (principal-point) coordinates in the image, dx and dy are the physical pixel sizes, (xC, yC, zC) are camera coordinates, (xw, yw, zw) are the coordinates of a point w in the actual three-dimensional coordinate system, R is a rotation matrix, T is a translation matrix, and f is the focal length of the camera.
3. The method of claim 1, wherein path planning is performed by a SLAM optimization method based on a probabilistic model, the probabilistic model being:
p(A_k|B) = p(B|A_k)·p(A_k) / Σ_{i=1}^{m} p(B|A_i)·p(A_i)

wherein p(A_k) is the probability of event A_k, p(B|A_k) is the probability of event B given that event A_k has occurred, p(A_k|B) is the probability of event A_k given that event B has occurred, and m is the number of all possible events.
4. The method according to claim 1, wherein the video camera is an industrial camera with a CCD and/or CMOS light sensing chip.
5. The method of claim 1, wherein the camera is mounted on the fork claw.
6. An intelligent forking system, comprising:
the data acquisition module is used for acquiring a working area picture in real time by using the camera, extracting features, acquiring a first image coordinate of an object to be conveyed and a third image coordinate of an obstacle in the working area in real time, acquiring position coordinates of the fork claw and the camera in real time, acquiring a transformation relation of a flange coordinate system of the fork claw and the camera, and acquiring a second coordinate of a conveying terminal point;
the coordinate conversion module is connected with the data acquisition module and used for converting the first image coordinate and the third image coordinate into a first coordinate and a third coordinate through conversion from the image coordinate to an actual three-dimensional coordinate;
and the path planning module is connected with the coordinate conversion module and the data acquisition module, and is used for planning a first path from the fork claw to the object to be transported according to the first coordinate and the third coordinate which are updated in real time and the transformation relation between the fork claw and the flange coordinate system of the camera, and planning a second path from the object to be transported to the transport destination according to the third coordinate, the second coordinate and the transformation relation between the fork claw and the flange coordinate system of the camera after the fork claw is forked according to the point to be transported.
7. The system according to claim 6, wherein the conversion of the image coordinates into actual three-dimensional coordinates in the coordinate conversion module comprises in particular the steps of:
converting the image coordinates to Cartesian coordinates:

x = (u - u0)·dx, y = (v - v0)·dy

establishing a camera coordinate system with the video camera, and converting the Cartesian coordinates into camera coordinates:

x = f·xC/zC, y = f·yC/zC

converting the camera coordinates into actual three-dimensional coordinates:

(xC, yC, zC)ᵀ = R·(xw, yw, zw)ᵀ + T

wherein (u, v) are image coordinates, (x, y) are two-dimensional Cartesian coordinates in the image plane, (u0, v0) are the initial (principal-point) coordinates in the image, dx and dy are the physical pixel sizes, (xC, yC, zC) are camera coordinates, (xw, yw, zw) are the coordinates of a point w in the actual three-dimensional coordinate system, R is a rotation matrix, T is a translation matrix, and f is the focal length of the camera.
8. The system of claim 6, wherein the path planning module performs path planning by a SLAM optimization method based on a probabilistic model, the probabilistic model being:
p(A_k|B) = p(B|A_k)·p(A_k) / Σ_{i=1}^{m} p(B|A_i)·p(A_i)

wherein p(A_k) is the probability of event A_k, p(B|A_k) is the probability of event B given that event A_k has occurred, p(A_k|B) is the probability of event A_k given that event B has occurred, and m is the number of all possible events.
9. The system according to claim 6, wherein the camera is an industrial camera with a CCD and/or CMOS light sensing chip.
10. The system of claim 6, wherein the camera is mounted on the fork claw.
CN202110944132.XA 2021-08-17 2021-08-17 Intelligent forking method and system Pending CN113409394A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110944132.XA CN113409394A (en) 2021-08-17 2021-08-17 Intelligent forking method and system


Publications (1)

Publication Number Publication Date
CN113409394A true CN113409394A (en) 2021-09-17

Family

ID=77688593

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110944132.XA Pending CN113409394A (en) 2021-08-17 2021-08-17 Intelligent forking method and system

Country Status (1)

Country Link
CN (1) CN113409394A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104476549A (en) * 2014-11-20 2015-04-01 北京卫星环境工程研究所 Method for compensating motion path of mechanical arm based on vision measurement
CN108466268A (en) * 2018-03-27 2018-08-31 苏州大学 A kind of freight classification method for carrying, system and mobile robot and storage medium
CN108499054A (en) * 2018-04-04 2018-09-07 清华大学深圳研究生院 A kind of vehicle-mounted mechanical arm based on SLAM picks up ball system and its ball picking method
US20180283017A1 (en) * 2017-03-31 2018-10-04 Canvas Construction, Inc. Automated drywall planning system and method
CN110605711A (en) * 2018-06-14 2019-12-24 中瑞福宁机器人(沈阳)有限公司 Method, device and system for controlling cooperative robot to grab object
CN111496770A (en) * 2020-04-09 2020-08-07 上海电机学院 Intelligent carrying mechanical arm system based on 3D vision and deep learning and use method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LUOWEI_MEMORY: "Detailed Explanation of Camera Models and Calibration Principles", published online: https://blog.csdn.net/qq_30567891/article/details/79970492 *
WANG FEITAO: "Research and Implementation of Autonomous Navigation of a Transport Robot Based on Laser SLAM", China Master's Theses Full-text Database (Master), Information Science and Technology *

Similar Documents

Publication Publication Date Title
CN111496770B (en) Intelligent carrying mechanical arm system based on 3D vision and deep learning and use method
CN111730603B (en) Control device and control method for robot system
US12002007B2 (en) Robotic system with automated package scan and registration mechanism and methods of operating the same
US10518410B2 (en) Object pickup strategies for a robotic device
US11701777B2 (en) Adaptive grasp planning for bin picking
WO2019036929A1 (en) Method for stacking goods by means of robot, system for controlling robot to stack goods, and robot
Chiaravalli et al. Integration of a multi-camera vision system and admittance control for robotic industrial depalletizing
Bogh et al. Integration and assessment of multiple mobile manipulators in a real-world industrial production facility
CN109641706B (en) Goods picking method and system, and holding and placing system and robot applied to goods picking method and system
JP7241374B2 (en) Robotic object placement system and method
CN113409394A (en) Intelligent forking method and system
CN111061228B (en) Automatic container transfer control method based on target tracking
Irawan et al. Vision-based alignment control for mini forklift system in confine area operation
CN114888768A (en) Mobile duplex robot cooperative grabbing system and method based on multi-sensor fusion
Martin et al. An autonomous transport vehicle in an existing manufacturing facility with focus on the docking maneuver task
Rauer et al. An autonomous mobile handling robot using object recognition
Sun et al. A medical garbage bin recycling system based on AGV
KR20210026567A (en) Vision recognition based object placing robot and logistics system using the same
Yeh et al. 3D Cameras and Algorithms for Multi-Angle Gripping and Control of Robotic Arm
JP7492694B1 (en) Robot system transport unit cell and its operating method
WO2023073780A1 (en) Device for generating learning data, method for generating learning data, and machine learning device and machine learning method using learning data
Yesudasu et al. Depalletisation humanoid torso: Real-time cardboard package detection based on deep learning and pose estimation algorithm
Nakao et al. Object position/pose estimation using CAD models for navigation of manipulator with a single CCD camera
Wang et al. Restricted Spatial Perception-Based Robotic Unloading System Using Dynamic Grasping Strategy
KR20240014795A (en) Logistics processing system and method using an object placement robot in a logistics warehouse

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210917