CN109993763B - Detector positioning method and system based on image recognition and force feedback fusion - Google Patents

Detector positioning method and system based on image recognition and force feedback fusion

Info

Publication number
CN109993763B
CN109993763B
Authority
CN
China
Prior art keywords
detector
mechanical arm
grabbing
image data
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910243779.2A
Other languages
Chinese (zh)
Other versions
CN109993763A (en)
Inventor
段星光
田焕玉
靳励行
潘月
温浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN201910243779.2A priority Critical patent/CN109993763B/en
Publication of CN109993763A publication Critical patent/CN109993763A/en
Application granted granted Critical
Publication of CN109993763B publication Critical patent/CN109993763B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G21 NUCLEAR PHYSICS; NUCLEAR ENGINEERING
    • G21C NUCLEAR REACTORS
    • G21C17/00 Monitoring; Testing; Maintaining
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02E REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E30/00 Energy generation of nuclear origin
    • Y02E30/30 Nuclear fission reactors

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Plasma & Fusion (AREA)
  • General Engineering & Computer Science (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a detector positioning method and system based on the fusion of image recognition and force feedback. The method comprises the following steps: performing image preprocessing on collected image data to obtain contour information of a detector in the image data; performing a primary position fit on the detector based on the contour information to locate the detector's position information; controlling a mechanical arm to grab the detector according to that position information; and, while the mechanical arm grabs the detector, adjusting the grabbing state using feedback from a force sensor. With the invention, the detector's position is first determined through image recognition, and the grabbing state is then adjusted according to the force feedback during grabbing, so that the detector can be replaced accurately and automatically.

Description

Detector positioning method and system based on image recognition and force feedback fusion
Technical Field
The invention relates to the technical field of automatic control, in particular to a detector positioning method and system based on image recognition and force feedback fusion.
Background
In the field of nuclear power robots, detectors are usually replaced by manual alignment with semi-automatic tools. Working this way, operators are exposed to low-dose radiation, and the dose accumulated in the body day after day and month after month seriously harms their health. Unmanned operation on the biological shielding plate is therefore a research focus in the current nuclear power robot field.
Disclosure of Invention
The embodiment of the invention provides a detector positioning method and system based on image recognition and force feedback fusion.
The first aspect of the embodiments of the present invention provides a detector positioning method based on image recognition and force feedback fusion, which may include:
carrying out image preprocessing on the acquired image data to acquire contour information of a detector in the image data;
performing primary position fitting on the detector based on the contour information, and positioning the position information of the detector;
and controlling the mechanical arm to grab the detector according to the position information.
In one possible design, the image data includes at least four detectors, and the positioning method further includes:
sequencing and numbering the detectors according to the distance between the positioned positions of the detectors and the mechanical arm;
and controlling the mechanical arm to grab the detectors at the corresponding numbers according to the sequencing numbers.
In one possible design, the positioning method further includes:
performing the primary position fitting by adopting a circle or ellipse detection algorithm;
and determining that the primary position fitting process has ended when four circles or ellipses have been fitted in the image data.
In one possible design, the positioning method further includes:
acquiring grabbing parameters when the mechanical arm grabs the detector based on the force sensor, and judging whether the mechanical arm enters a blocking state or not according to the grabbing parameters, wherein the grabbing parameters comprise the grabbing force and the grabbing direction;
when the mechanical arm enters a blocking state, performing position fitting on the detector again to obtain the relative position and the relative angle between the detector and the mechanical arm;
and adjusting the mechanical arm to exit the blocking state according to the relative position and the relative angle.
In one possible design, the positioning method further includes:
and filtering information noise points in the contour information by adopting a noise point filtering algorithm.
A second aspect of the embodiments of the present invention provides a detector positioning system based on image recognition and force feedback fusion, which may include:
the contour information acquisition module is used for carrying out image preprocessing on the acquired image data to acquire contour information of the detector in the image data;
the position fitting module is used for performing primary position fitting on the detector based on the contour information and positioning the position information of the detector;
and the grabbing control module is used for controlling the mechanical arm to grab the detector according to the position information.
In one possible design, the image data includes at least four detectors, and the positioning system further includes:
the sequencing numbering module is used for sequencing and numbering the detectors according to the distance between the positioned positions of the detectors and the mechanical arm;
and the grabbing control module is also used for controlling the mechanical arm to grab the detector at the corresponding number according to the sequencing number.
In one possible design, the positioning system further includes:
the position fitting module is also used for performing the primary position fitting by adopting a circle or ellipse detection algorithm;
and the fitting end determining module is used for determining that the primary position fitting process has ended when four circles or ellipses have been fitted in the image data.
In one possible design, the positioning system further includes:
the state judgment module is used for acquiring a grabbing parameter when the mechanical arm grabs the detector based on the force sensor, and judging whether the mechanical arm enters a blocking state or not according to the grabbing parameter, wherein the grabbing parameter comprises the grabbing force and the grabbing direction;
the position fitting module is also used for performing position fitting on the detector again when the mechanical arm enters a blocking state to obtain the relative position and the relative angle between the detector and the mechanical arm;
and the state adjusting module is used for adjusting the mechanical arm to exit the blocking state according to the relative position and the relative angle.
In one possible design, the positioning system further includes:
and the noise filtering module is used for filtering the information noise in the contour information by adopting a noise filtering algorithm.
In the embodiment of the invention, the position of the detector is initially determined through image recognition, and then the grabbing state is adjusted along with the force feedback when the detector is grabbed, so that the aim of accurately and automatically replacing the detector is fulfilled.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a schematic flow chart of a detector positioning method based on image recognition and force feedback fusion according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a position fitting effect provided by an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a detector positioning system based on image recognition and force feedback fusion according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
The following describes in detail a method for positioning a detector based on image recognition and force feedback fusion according to an embodiment of the present invention with reference to fig. 1 and 2.
Referring to fig. 1, a schematic flow chart of a detector positioning method based on image recognition and force feedback fusion is provided for an embodiment of the present invention. As shown in fig. 1, the method of the embodiment of the present invention may include the following steps S101 to S103.
S101, image preprocessing is carried out on the acquired image data, and outline information of the detector in the image data is obtained.
It should be noted that the positioning system may employ a monocular vision camera that scans the robot's working area as the robot moves, acquiring image data that contains the detectors; the image data may be a color picture.
Specifically, the positioning system may perform image preprocessing on the acquired image data to obtain contour information of the detector in the image data. The image preprocessing may be a process that trains filtering parameters using a convolutional neural network, for example a three-layer network of convolution, pooling, and fully connected layers; the trained filtering parameters can then be applied in the subsequent step of fitting the detector position with a circle or an ellipse. Further, strong edge-response points along the detector's edge can be obtained through edge detection, and the detector's contour information can be obtained by binarizing those points.
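The edge-detection and binarization steps above can be sketched as follows. This is a minimal illustration only: the patent's CNN-trained filtering is omitted, a simple finite-difference gradient stands in for the edge detector, and the function names, threshold, and synthetic image are assumptions.

```python
import numpy as np

def edge_strength(img):
    """Approximate edge response with forward differences (a Sobel-like sketch)."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, :-1] = np.diff(img.astype(float), axis=1)  # horizontal gradient
    gy[:-1, :] = np.diff(img.astype(float), axis=0)  # vertical gradient
    return np.hypot(gx, gy)

def binarize(edges, thresh):
    """Keep only strong edge responses as contour candidates (0/1 mask)."""
    return (edges >= thresh).astype(np.uint8)

# A tiny synthetic image: dark background with a bright square "detector".
img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 255
mask = binarize(edge_strength(img), 100)  # binary contour mask of the square
```

In a real pipeline the mask would then be handed to the contour extraction and circle-fitting stages described below.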
In an alternative embodiment, the positioning system may use a noise filtering algorithm to filter information noise out of the contour information. Preferably, a morphological opening followed by a closing can be applied to filter noise from the binarized result.
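The open-then-close filtering can be sketched with plain binary morphology; the 3x3 structuring element, function names, and test pattern here are assumptions for illustration, not details from the patent.

```python
import numpy as np

def dilate(b):
    """Binary dilation with a 3x3 structuring element."""
    p = np.pad(b, 1)
    out = np.zeros_like(b)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + b.shape[0], 1 + dx:1 + dx + b.shape[1]]
    return out

def erode(b):
    """Binary erosion via duality: erode(b) = complement of dilate(complement)."""
    return 1 - dilate(1 - b)

def open_then_close(b):
    opened = dilate(erode(b))       # opening removes isolated noise pixels
    return erode(dilate(opened))    # closing fills small gaps in the contour

noisy = np.zeros((7, 7), dtype=np.uint8)
noisy[2:5, 2:5] = 1   # true contour blob
noisy[0, 6] = 1       # isolated noise pixel
clean = open_then_close(noisy)  # noise pixel removed, blob preserved
```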
And S102, performing primary position fitting on the detector based on the contour information, and positioning the position information of the detector.
Specifically, the positioning system may perform the primary position fitting on the detector using the contour information and locate the detector's position information; it will be appreciated that this position information indicates the detector's position in the current working area.
In alternative embodiments, the positioning system may use a circle or ellipse detection algorithm to perform the primary position fitting. The image data may include four detectors, or another number determined by the on-site working environment or the robot's operating mode; four is the preferred number here. After circle or ellipse fitting yields each circle's position and radius, circles of appropriate size can be screened out and counted. In a specific implementation, when four circles or ellipses have been fitted in the image data, the detectors in the image data can be considered successfully identified (see, for example, the recognition result shown in fig. 2), and the primary position fitting process can be ended.
In a specific implementation of the embodiment of the invention, the positioning system can adopt a Hough circle detection algorithm to fit a circle or an ellipse, mainly using the algorithm's HoughCircles function; in particular, the circle detection range can be 15 mm to 40 mm.
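The Hough transform itself is typically taken from a vision library (e.g. OpenCV's HoughCircles); as a self-contained stand-in for the fitting step, the sketch below fits a circle to contour points by algebraic least squares (the Kasa fit). Function names and the synthetic data are illustrative assumptions.

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic (Kasa) least-squares circle fit: solve
    x^2 + y^2 = c*x + d*y + e, then recover center (c/2, d/2) and radius."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = xs**2 + ys**2
    c, d, e = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = c / 2, d / 2
    r = np.sqrt(e + cx**2 + cy**2)
    return cx, cy, r

# Contour points sampled from a circle of radius 3 centered at (10, 20).
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
xs = 10 + 3 * np.cos(theta)
ys = 20 + 3 * np.sin(theta)
cx, cy, r = fit_circle(xs, ys)
```

Circles recovered this way would then be screened by size (for instance against the 15 mm to 40 mm detection range mentioned above, after converting pixel radii to millimetres) and counted until four are found.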
And S103, controlling the mechanical arm to grab the detector according to the position information.
Specifically, the positioning system can control the mechanical arm to grab the detector according to the position information.
In an alternative embodiment, when there are four detectors in the image data, the positioning system may, before performing grab control, sort and number the detectors by the distance between each located detector position and the mechanical arm, for example from nearest to farthest. The mechanical arm can then be controlled to grab the detectors in that numbered order.
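The nearest-first numbering can be sketched as follows; the coordinates, the arm position, and the function name are illustrative assumptions.

```python
import math

def number_by_distance(detectors, arm_xy):
    """Sort detected centers nearest-first relative to the arm base and
    assign 1-based grab-order numbers."""
    ranked = sorted(detectors, key=lambda p: math.dist(p, arm_xy))
    return {i + 1: p for i, p in enumerate(ranked)}

# Four fitted detector centers (e.g. in mm) and the arm base at the origin.
centers = [(120.0, 40.0), (30.0, 10.0), (200.0, 90.0), (60.0, 55.0)]
order = number_by_distance(centers, arm_xy=(0.0, 0.0))
# order[1] is the nearest detector, order[4] the farthest.
```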
In an optional embodiment, the positioning system may use the force sensor to obtain grabbing parameters while the mechanical arm grabs the detector and judge from them whether the mechanical arm has entered a jamming state. The grabbing parameters may include the grabbing force and the grabbing direction, and the jamming state may be a state in which the mechanical arm's gripper has not seated onto the detector. Further, when the mechanical arm enters the jamming state, position fitting can be performed on the detector again to obtain the relative position and relative angle between the detector and the mechanical arm, and the mechanical arm can be adjusted out of the jamming state accordingly.
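A minimal sketch of the jamming-state check, assuming the two grabbing parameters are a scalar force magnitude and the angle between the force and the approach axis; the thresholds and names are invented for illustration and are not from the patent.

```python
from dataclasses import dataclass

@dataclass
class GraspReading:
    force_n: float        # magnitude of the reaction force (N)
    direction_deg: float  # angle between force and approach axis (degrees)

def is_jammed(reading, force_limit=20.0, lateral_deg=25.0):
    """Heuristic jam check: a large reaction force that is not aligned with
    the approach axis suggests the gripper has not seated on the detector."""
    return reading.force_n > force_limit and reading.direction_deg > lateral_deg

# A strong, sideways force reads as a jam; a light or axial force does not.
jammed = is_jammed(GraspReading(force_n=35.0, direction_deg=40.0))
```

On a jam, the system would re-run the position fitting and command a corrective motion before retrying the grab.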
In the embodiment of the invention, the position of the detector is initially determined through image recognition, and then the grabbing state is adjusted along with the force feedback when the detector is grabbed, so that the aim of accurately and automatically replacing the detector is fulfilled.
In a specific implementation of the embodiment of the invention, the attitude and position of the detector can be solved through circle image recognition (the original equations appear as images in the published patent). A rotation matrix maps the world coordinate system {W} to the camera coordinate system {I}. In the image coordinate system {I}, one vector corresponds to the maximum distance from the origin of the identified ellipse to the ellipse locus, and another to the minimum such distance. The detector pose then corresponds to a transfer matrix, and a coordinate transformation converts this transfer-matrix estimate into the corresponding quaternion. A governing equation in Cartesian coordinates can then be introduced in which the ideal attitude quaternion of the sensor in the {W} coordinate system, whose actual value is the unit quaternion in the vertical direction, is compared against the corresponding actual quaternion; the command quaternion is the increment that guides the motor's motion to realize the control. Optionally, the motor may be a servo motor controlled over a CANopen bus.
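Since the patent's governing equation is shown only as images, the sketch below is a standard stand-in: assuming unit quaternions in (w, x, y, z) order, it forms a command increment from the ideal and actual attitude quaternions via the error quaternion. The gain and all names are assumptions.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def command_increment(q_desired, q_actual, gain=0.5):
    """Error quaternion q_d * conj(q_a); its vector part, scaled by a gain,
    serves as an orientation correction increment (illustrative gain)."""
    q_err = quat_mul(q_desired, quat_conj(q_actual))
    return gain * q_err[1:]

q_d = np.array([1.0, 0.0, 0.0, 0.0])                    # vertical unit quaternion
q_a = np.array([np.cos(0.1), 0.0, 0.0, np.sin(0.1)])    # small yaw offset
delta = command_increment(q_d, q_a)                     # correction about z
```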
The following describes in detail a detector positioning system based on image recognition and force feedback fusion according to an embodiment of the present invention with reference to fig. 3. It should be noted that the positioning system shown in fig. 3 executes the method of the embodiment shown in fig. 1 and fig. 2; for convenience of description, only the portions relevant to the embodiment of the present invention are shown, and for details not repeated here, please refer to the embodiment shown in fig. 1 and fig. 2.
Referring to fig. 3, a schematic structural diagram of a detector positioning system based on image recognition and force feedback fusion is provided for an embodiment of the present invention. As shown in fig. 3, the positioning system 10 of the embodiment of the present invention may include: the system comprises a contour information acquisition module 101, a position fitting module 102, a grabbing control module 103, a sequencing numbering module 104, a fitting end determination module 105, a state judgment module 106, a state adjustment module 107 and a noise filtering module 108.
The contour information acquiring module 101 is configured to perform image preprocessing on the acquired image data to acquire contour information of the detector in the image data.
It should be noted that the positioning system 10 can scan the working area of the robot with the monocular vision camera as the robot moves, and obtain the image data including the detector, which may be a color picture.
In a specific implementation, the contour information acquiring module 101 may perform image preprocessing on the acquired image data to obtain contour information of the detector in the image data. The image preprocessing may be a process that trains filtering parameters using a convolutional neural network, for example a three-layer network of convolution, pooling, and fully connected layers; the trained filtering parameters can then be applied in the subsequent step of fitting the detector position with a circle or an ellipse. Further, strong edge-response points along the detector's edge can be obtained through edge detection, and the detector's contour information can be obtained by binarizing those points.
In an alternative embodiment, the noise filtering module 108 can use a noise filtering algorithm to filter the information noise in the contour information. Preferably, a morphological opening followed by a closing can be applied to filter noise from the binarized result.
And the position fitting module 102 is configured to perform primary position fitting on the detector based on the contour information, and locate position information of the detector.
In a specific implementation, the position fitting module 102 may perform primary position fitting on the detector by using the contour information, and position the position information of the detector, where it is understood that the position information may indicate the position of the detector in the current working area.
In alternative embodiments, the position fitting module 102 may use a circle or ellipse detection algorithm to perform the primary position fitting. The image data may include four detectors, or another number determined by the on-site working environment or the robot's operating mode; four is the preferred number here. After circle or ellipse fitting yields each circle's position and radius, the positioning system 10 may screen out circles of appropriate size and count them. In a specific implementation, when four circles or ellipses have been fitted in the image data, the fitting end determination module 105 may consider the detectors in the image data successfully identified (see, for example, the recognition result shown in fig. 2) and determine that the primary position fitting process has ended.
In a specific implementation of the embodiment of the invention, the positioning system can adopt a Hough circle detection algorithm to fit a circle or an ellipse, mainly using the algorithm's HoughCircles function; in particular, the circle detection range can be 15 mm to 40 mm.
And the grabbing control module 103 is used for controlling the mechanical arm to grab the detector according to the position information.
In a specific implementation, the grabbing control module 103 may control the mechanical arm to grab the detector according to the position information.
In an alternative embodiment, when there are four detectors in the image data, the sequencing numbering module 104 may, before grab control is performed, sort and number the detectors by the distance between each located detector position and the mechanical arm, for example from nearest to farthest. Further, the grabbing control module 103 may control the mechanical arm to grab the detectors in that numbered order.
In an alternative embodiment, the state judgment module 106 may use the force sensor to obtain grabbing parameters while the mechanical arm grabs the detector and judge from them whether the mechanical arm has entered a jamming state; the grabbing parameters may include the grabbing force and the grabbing direction, and the jamming state may be a state in which the mechanical arm's gripper has not seated onto the detector. Further, when the mechanical arm enters the jamming state, the position fitting module 102 may perform position fitting on the detector again to obtain the relative position and relative angle between the detector and the mechanical arm, and the state adjusting module 107 may adjust the mechanical arm out of the jamming state according to that relative position and angle.
In the embodiment of the invention, the position of the detector is initially determined through image recognition, and then the grabbing state is adjusted along with the force feedback when the detector is grabbed, so that the aim of accurately and automatically replacing the detector is fulfilled.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure presents only preferred embodiments of the present invention; it is not intended to limit the scope of the invention, which is defined by the appended claims.

Claims (8)

1. A detector positioning method based on image recognition and force feedback fusion is characterized by comprising the following steps:
carrying out image preprocessing on the acquired image data to acquire contour information of a detector in the image data;
performing primary position fitting on the detector based on the contour information, and positioning position information of the detector;
controlling a mechanical arm to grab the detector according to the position information;
the method further comprises the following steps:
acquiring grabbing parameters when the mechanical arm grabs the detector based on a force sensor, and judging whether the mechanical arm enters a blocking state or not according to the grabbing parameters, wherein the grabbing parameters comprise the grabbing force and the grabbing direction;
when the mechanical arm enters a blocking state, performing position fitting on the detector again to obtain the relative position and the relative angle between the detector and the mechanical arm;
and adjusting the mechanical arm to exit the blocking state according to the relative position and the relative angle.
2. The method of claim 1, wherein the image data includes at least four detectors, the method further comprising:
sequencing and numbering the detectors according to the distance between the positioned positions of the detectors and the mechanical arm;
and controlling the mechanical arm to grab the detector at the corresponding number according to the sequencing number.
3. The method of claim 2, further comprising:
performing the primary position fitting by using a circle or ellipse detection algorithm;
and determining that the primary position fitting process has ended when four circles or ellipses have been fitted in the image data.
4. The method of claim 1, further comprising:
and filtering information noise points in the contour information by adopting a noise point filtering algorithm.
5. A detector positioning system based on image recognition and force feedback fusion, comprising:
the contour information acquisition module is used for carrying out image preprocessing on the acquired image data to acquire contour information of the detector in the image data;
the position fitting module is used for performing primary position fitting on the detector based on the contour information and positioning the position information of the detector;
the grabbing control module is used for controlling the mechanical arm to grab the detector according to the position information;
the system further comprises:
the state judgment module is used for acquiring a grabbing parameter when the mechanical arm grabs the detector based on the force sensor, and judging whether the mechanical arm enters a blocking state or not according to the grabbing parameter, wherein the grabbing parameter comprises grabbing force and grabbing direction;
the position fitting module is further used for performing position fitting on the detector again when the mechanical arm enters a blocking state to obtain the relative position and the relative angle between the detector and the mechanical arm;
and the state adjusting module is used for adjusting the mechanical arm to exit the blocking state according to the relative position and the relative angle.
6. The system of claim 5, wherein the image data includes at least four detectors, the system further comprising:
the sequencing numbering module is used for sequencing and numbering the detectors according to the distance between the positioned positions of the detectors and the mechanical arm;
and the grabbing control module is also used for controlling the mechanical arm to grab the detectors at the corresponding numbers according to the sequencing numbers.
7. The system of claim 6, further comprising:
the position fitting module is also used for performing the primary position fitting by adopting a circle or ellipse detection algorithm;
and the fitting end determining module is used for determining that the primary position fitting process has ended when four circles or ellipses have been fitted in the image data.
8. The system of claim 5, further comprising:
and the noise filtering module is used for filtering the information noise in the contour information by adopting a noise filtering algorithm.
CN201910243779.2A 2019-03-28 2019-03-28 Detector positioning method and system based on image recognition and force feedback fusion Active CN109993763B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910243779.2A CN109993763B (en) 2019-03-28 2019-03-28 Detector positioning method and system based on image recognition and force feedback fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910243779.2A CN109993763B (en) 2019-03-28 2019-03-28 Detector positioning method and system based on image recognition and force feedback fusion

Publications (2)

Publication Number Publication Date
CN109993763A CN109993763A (en) 2019-07-09
CN109993763B true CN109993763B (en) 2021-10-08

Family

ID=67131144

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910243779.2A Active CN109993763B (en) 2019-03-28 2019-03-28 Detector positioning method and system based on image recognition and force feedback fusion

Country Status (1)

Country Link
CN (1) CN109993763B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112428263B (en) * 2020-10-16 2022-06-10 北京理工大学 Mechanical arm control method and device and cluster model training method
CN112466489A (en) * 2020-11-17 2021-03-09 中广核工程有限公司 Automatic positioning and defect detecting system and method for spent fuel storage grillwork of nuclear power station

Citations (6)

Publication number Priority date Publication date Assignee Title
CN104057290A (en) * 2014-06-24 2014-09-24 Institute of Automation, Chinese Academy of Sciences Method and system for robot assembly based on vision and force feedback control
CN105157563A (en) * 2015-04-28 2015-12-16 Hunan University Beer bottleneck positioning method based on machine vision
CN106272416A (en) * 2016-08-29 2017-01-04 Shanghai Jiao Tong University Robot slender-shaft precision boring system and method based on force sensing and vision
CN106272424A (en) * 2016-09-07 2017-01-04 Huazhong University of Science and Technology Industrial robot grasping method based on monocular camera and three-dimensional force sensor
CN106737665A (en) * 2016-11-30 2017-05-31 Tianjin University Mechanical arm control system and implementation method based on binocular vision and SIFT feature matching
CN107804708A (en) * 2017-09-21 2018-03-16 South China University of Technology Rotation center localization method for the feeding rotary shaft of a placement machine

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109129474B (en) * 2018-08-10 2020-07-14 Shanghai Jiao Tong University Active manipulator grabbing device and method based on multi-mode fusion
CN109164829B (en) * 2018-10-23 2021-08-27 Harbin Institute of Technology (Shenzhen) Flying mechanical arm system based on force feedback device and VR sensing, and control method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104057290A (en) * 2014-06-24 2014-09-24 Institute of Automation, Chinese Academy of Sciences Method and system for robot assembly based on vision and force feedback control
CN105157563A (en) * 2015-04-28 2015-12-16 Hunan University Beer bottleneck positioning method based on machine vision
CN106272416A (en) * 2016-08-29 2017-01-04 Shanghai Jiao Tong University Robot slender-shaft precision boring system and method based on force sensing and vision
CN106272424A (en) * 2016-09-07 2017-01-04 Huazhong University of Science and Technology Industrial robot grasping method based on monocular camera and three-dimensional force sensor
CN106737665A (en) * 2016-11-30 2017-05-31 Tianjin University Mechanical arm control system and implementation method based on binocular vision and SIFT feature matching
CN107804708A (en) * 2017-09-21 2018-03-16 South China University of Technology Rotation center localization method for the feeding rotary shaft of a placement machine

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Research on Compliant Control System of a Teleoperated Robot for Rapid Replacement of EAST First-Wall Graphite Tiles"; Pan Hongtao; China Doctoral Dissertations Full-text Database, Engineering Science and Technology II; 2017-09-15; see Abstract and Chapters 5-6 *

Also Published As

Publication number Publication date
CN109993763A (en) 2019-07-09

Similar Documents

Publication Publication Date Title
CN110314854B (en) Workpiece detection and sorting device and method based on a vision robot
CN108637435B (en) Three-dimensional weld tracking system and method based on vision and arc pressure sensing
CN106000904A (en) Automatic sorting system for household refuse
CN109993763B (en) Detector positioning method and system based on image recognition and force feedback fusion
CN111428731B (en) Multi-category identification positioning method, device and equipment based on machine vision
CN113146172B (en) Multi-vision-based detection and assembly system and method
EP3812105B1 (en) Artificial intelligence architecture for industrial welding
CN106934813A (en) Vision-based positioning implementation method for industrial robot workpiece grabbing
JPH01196501A (en) Object recognizing apparatus for robot
CN110866903A (en) Ping-pong ball recognition method based on the Hough circle transform
CN107442900A (en) Laser vision weld seam tracking method
CN111308987B (en) Automatic uncoiling control system of uncoiler based on image processing and detection method
CN115171097B (en) Processing control method and system based on three-dimensional point cloud and related equipment
CN116673962B (en) Intelligent mechanical arm grabbing method and system based on Faster R-CNN and GRCNN
CN109459984A (en) Positioning and grasping system based on three-dimensional point cloud and application method thereof
CN113369761A (en) Method and system for guiding robot welding seam positioning based on vision
CN114132745A (en) Automatic workpiece loading and unloading system and method based on AGV and machine vision
CN113878576A (en) Robot vision sorting process programming method
CN113012228B (en) Workpiece positioning system and workpiece positioning method based on deep learning
CN114193446A (en) Closed loop capture detection method based on morphological image processing
CN113034526B (en) Grabbing method, grabbing device and robot
CN115661271B (en) Robot nucleic acid sampling guiding method based on vision
CN114842335A (en) Slotting target identification method and system for construction robot
CN114926531A (en) Binocular vision based method and system for autonomously positioning welding line of workpiece under large visual field
CN113763400A (en) Robot vision guiding method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant