CN113146625A - Binocular vision material three-dimensional space detection method - Google Patents
- Publication number
- CN113146625A (application CN202110329345.1A)
- Authority
- CN
- China
- Prior art keywords
- binocular vision
- detected
- robot
- dimensional space
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1628—Programme controls characterised by the control loop
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Manipulator (AREA)
Abstract
The invention discloses a binocular vision material three-dimensional space detection method comprising the following steps: first, image acquisition, in which the robot locates the positioning mechanism through a binocular vision device and captures binocular vision images of the material and its reference marks; next, correction and fitting, in which the binocular vision images of the article to be detected are corrected and fitted to obtain a depth image of the article in the world coordinate system, and a point cloud data image of the article is determined from the depth image; then, position determination, in which the upper-surface point cloud data and the upper-surface center position are determined as the position data of the article to be detected. The invention uses binocular vision to acquire the material image, performs three-dimensional space detection of the material with a three-dimensional fitting algorithm, and feeds the result back to the robot, so that the robot can better control the material pose and the travel of its mechanical arm, achieving accurate positioning of the material pose.
Description
Technical Field
The invention relates to the technical field of fuel cell stack assembly equipment, and in particular to a binocular vision material three-dimensional space detection method.
Background
The hydrogen fuel cell is a new type of energy source with huge market demand and development prospects, and as such it requires new production technology and production facilities. Few manufacturers on the current domestic market can produce such facilities; the equipment comes mainly from non-standard automation companies. This kind of production equipment uses a robot carrying monocular vision to grab material from a jig and can only acquire the two-dimensional position of the material relative to the robot. Alternatively, the robot first grabs the material and then obtains its two-dimensional position relative to the robot through a fixed vision device. Neither approach can acquire the height and size of the material, and the precision cannot meet the requirements of high-end industrial fields.
At present, photoelectric distance measurement technology divides into active ranging and passive ranging. Active ranging emits artificial light at the measured target and obtains its distance by analyzing the texture deformation of the reflected light or by directly measuring the propagation time of the light; its drawbacks include expensive equipment, complex operation, and easy exposure. Passive ranging determines the distance to a target by detecting and analyzing the natural light radiated from the object. Traditional passive ranging suffers from low precision; in particular, the low experimental calibration precision of the CCD camera degrades the ranging precision. The present binocular vision material three-dimensional space detection method is proposed to solve this problem.
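For context, the rectified binocular geometry underlying the passive approach recovers depth by triangulation rather than by emitting light: Z = f * B / d, where f is the focal length in pixels, B is the camera baseline, and d is the disparity. The sketch below is illustrative only; the function name and all numbers are ours, not the patent's.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point seen by a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        # Zero disparity means the point is at infinity (or unmatched).
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

With an assumed 700 px focal length and 0.12 m baseline, a 42 px disparity corresponds to a depth of 2 m; since depth varies inversely with disparity, any calibration error in f or B directly limits ranging precision, which is the low-precision problem described above.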
Disclosure of Invention
The invention aims to provide a binocular vision material three-dimensional space detection method that solves the low-precision problem of the existing passive distance measurement technology used in fuel cell stack assembly equipment.
In order to achieve this purpose, the invention provides the following technical scheme: a binocular vision material three-dimensional space detection method comprising the following steps:
Step 1: the robot locates the positioning mechanism through the binocular vision device and captures binocular vision images of the material and its reference marks;
Step 2: correct and fit the binocular vision images of the article to be detected to obtain a depth image of the article in the world coordinate system, and determine a point cloud data image of the article from the depth image;
Step 3: determine the upper-surface point cloud data and the upper-surface center position as the position data of the article to be detected;
Step 4: from the surface fitted to the upper-surface point cloud of the article and the distances between the upper-surface center position and the boundary of that point cloud, determine the shape data of the article to be detected, including the position of the positioning mechanism and the degree-of-freedom pose of the material, and feed these data back to the robot; the robot then controls the material pose and the travel of its arm and stacks the material, thereby achieving accurate stacking.
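The depth-image and point-cloud construction of step 2 can be sketched as follows. This is a reconstruction under assumptions, not the patent's exact algorithm: it takes an already corrected (rectified) disparity map plus known intrinsics (focal length and principal point), and yields points in the camera frame; mapping into the world coordinate system would further apply the extrinsic calibration.

```python
import numpy as np

def disparity_to_point_cloud(disparity, focal_px, baseline_m, cx, cy):
    """Reproject a rectified disparity map to an N x 3 array of 3-D points.

    Pixels with non-positive disparity carry no depth and are dropped.
    """
    h, w = disparity.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))  # pixel grid (u = col, v = row)
    valid = disparity > 0
    z = focal_px * baseline_m / disparity[valid]      # depth image: Z = f * B / d
    x = (us[valid] - cx) * z / focal_px               # back-project u through the pinhole model
    y = (vs[valid] - cy) * z / focal_px               # back-project v
    return np.column_stack([x, y, z])                 # the point cloud data
```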
Preferably, in step 1, a servo-controlled lifting mechanism keeps the pick-up height consistent at each station during material pick-up.
Preferably, in step 1, after the material is picked up, its position is located a second time by binocular vision, so that the bipolar plate and the membrane electrode are picked up accurately by the manipulator and mistaken pick-up of multiple ultrathin sheets is avoided.
Preferably, in step 2, when the binocular vision images of the article to be detected are fitted, a three-dimensional fitting algorithm performs the three-dimensional space detection of the material.
Preferably, in step 2, the binocular vision images are fed back to the robot after correction and fitting, achieving accurate positioning of the material pose.
Preferably, in step 4, the stacking jig used by the robot is positioned and placed on six guide pillars to ensure stacking without misalignment, and an outer cylinder provides a limit stop for accurate overlap.
Preferably, in step 4, the stacking tool in the robot's stacking jig meets the requirement of stacking 20 and 40 sections of the fuel cell stack; it is a finish-machined assembly fitted with multi-directional distance sensors to confirm the stacking accuracy.
Preferably, in step 4, the multi-directional distance sensors are arranged appropriately and are used mainly to repeatedly confirm the stacking accuracy.
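The text does not spell out the three-dimensional fitting algorithm of step 2. A common choice, shown here purely as an assumption, is a least-squares plane fit to the upper-surface point cloud; the fitted plane's normal encodes the tilt component of the material's degree-of-freedom pose.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an N x 3 point cloud.

    Returns (centroid, unit_normal); the normal is the right singular
    vector with the smallest singular value of the centred points.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]
```

For real sensor data, a RANSAC wrapper around this fit would reject outlier points before the final least-squares step; the bare SVD version above is the minimal form.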
Compared with the prior art, the invention has the beneficial effects that:
according to the invention, binocular vision is adopted to obtain the material image, the three-dimensional space detection of the material is carried out through a three-dimensional fitting algorithm, and the material image is fed back to the robot, so that the robot can better control the material posture and the movement distance of a mechanical arm of the robot, and the accurate positioning of the material posture is realized.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The present invention will now be described in more detail by way of examples, which are given by way of illustration only and are not intended to limit the scope of the present invention in any way.
The invention provides the following technical scheme: a binocular vision material three-dimensional space detection method comprising the following steps:
Step 1: the robot locates the positioning mechanism through the binocular vision device and captures binocular vision images of the material and its reference marks;
Step 2: correct and fit the binocular vision images of the article to be detected to obtain a depth image of the article in the world coordinate system, and determine a point cloud data image of the article from the depth image;
Step 3: determine the upper-surface point cloud data and the upper-surface center position as the position data of the article to be detected;
Step 4: from the surface fitted to the upper-surface point cloud of the article and the distances between the upper-surface center position and the boundary of that point cloud, determine the shape data of the article to be detected, including the position of the positioning mechanism and the degree-of-freedom pose of the material, and feed these data back to the robot; the robot then controls the material pose and the travel of its arm and stacks the material, thereby achieving accurate stacking.
Embodiment one:
firstly, acquiring an available image, acquiring a positioning mechanism by a robot through a binocular vision device, absorbing a binocular vision image of a material and a standard mark thereof, then carrying out correction fitting, carrying out correction fitting on the binocular vision image of the object to be detected to obtain a depth image of the object to be detected under a world coordinate system, determining a point cloud data image of the object to be detected according to the depth image, then determining position data, determining upper surface point cloud data and an upper surface center position as position data of the object to be detected, finally carrying out material stacking, determining shape data of the object to be detected, including the position of the positioning mechanism and the free degree attitude of the material, according to the fitting surface of the upper surface point cloud data of the object to be detected and the distance between the upper surface center position of the object to be detected and the upper surface point cloud data boundary position of the object to be detected, and feeding back to the robot, after the robot is fed back, the material posture and the movement distance of the robot arm are controlled, and materials are stacked, so that accurate stacking is realized.
Embodiment two:
Building on embodiment one, the following features are added:
In step 1, a servo-controlled lifting mechanism keeps the pick-up height consistent at each station during material pick-up; after the material is picked up, its position is located a second time by binocular vision, so that the bipolar plate and the membrane electrode are picked up accurately by the manipulator and mistaken pick-up of multiple ultrathin sheets is avoided.
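The servo-controlled lifting mechanism can be read as a closed loop that drives the measured pick-up height toward the station's target; the proportional correction below is an illustrative assumption (the gain, units, and function name are ours, not the patent's).

```python
def lift_correction_mm(target_mm, measured_mm, kp=0.8):
    """Proportional servo correction that nudges the lift toward the target height.

    A positive return value commands the lift upward; repeated application
    keeps the pick-up height consistent across stations.
    """
    return kp * (target_mm - measured_mm)
```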
Embodiment three:
Building on embodiment two, the following features are added:
In step 2, when the binocular vision images of the article to be detected are fitted, a three-dimensional fitting algorithm performs the three-dimensional space detection of the material, and the corrected and fitted result is fed back to the robot, achieving accurate positioning of the material pose.
Embodiment four:
Building on embodiment three, the following features are added:
In step 4, the stacking jig used by the robot is positioned and placed on six guide pillars to ensure stacking without misalignment, and an outer cylinder provides a limit stop for accurate overlap. The stacking tool in the stacking jig meets the requirement of stacking 20 and 40 sections of the fuel cell stack; it is a finish-machined assembly fitted with multi-directional distance sensors, which are arranged appropriately and used mainly to repeatedly confirm the stacking accuracy.
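Repeated confirmation of stacking accuracy by the multi-directional distance sensors amounts to checking every reading against the nominal gap; a minimal sketch with an assumed tolerance (the 0.05 mm figure is illustrative, not from the patent):

```python
def stack_within_tolerance(readings_mm, nominal_mm, tol_mm=0.05):
    """True when every distance-sensor reading agrees with the nominal gap.

    Each element of readings_mm is one sensor's measured distance; a single
    out-of-tolerance reading fails the whole stacking check.
    """
    return all(abs(r - nominal_mm) <= tol_mm for r in readings_mm)
```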
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (8)
1. A binocular vision material three-dimensional space detection method, characterized by comprising the following steps:
Step 1: the robot locates the positioning mechanism through the binocular vision device and captures binocular vision images of the material and its reference marks;
Step 2: correct and fit the binocular vision images of the article to be detected to obtain a depth image of the article in the world coordinate system, and determine a point cloud data image of the article from the depth image;
Step 3: determine the upper-surface point cloud data and the upper-surface center position as the position data of the article to be detected;
Step 4: from the surface fitted to the upper-surface point cloud of the article and the distances between the upper-surface center position and the boundary of that point cloud, determine the shape data of the article to be detected, including the position of the positioning mechanism and the degree-of-freedom pose of the material, and feed these data back to the robot; the robot then controls the material pose and the travel of its arm and stacks the material, thereby achieving accurate stacking.
2. The binocular vision material three-dimensional space detection method according to claim 1, wherein in step 1 a servo-controlled lifting mechanism keeps the pick-up height consistent at each station during material pick-up.
3. The binocular vision material three-dimensional space detection method according to claim 1, wherein in step 1, after the material is picked up, its position is located a second time by binocular vision, so that the bipolar plate and the membrane electrode are picked up accurately by the manipulator and mistaken pick-up of multiple ultrathin sheets is avoided.
4. The binocular vision material three-dimensional space detection method according to claim 1, wherein in step 2, when the binocular vision images of the article to be detected are fitted, a three-dimensional fitting algorithm performs the three-dimensional space detection of the material.
5. The binocular vision material three-dimensional space detection method according to claim 1, wherein in step 2 the binocular vision images are fed back to the robot after correction and fitting, achieving accurate positioning of the material pose.
6. The binocular vision material three-dimensional space detection method according to claim 1, wherein in step 4 the stacking jig used by the robot is positioned and placed on six guide pillars to ensure stacking without misalignment, and an outer cylinder provides a limit stop for accurate overlap.
7. The binocular vision material three-dimensional space detection method according to claim 1, wherein in step 4 the stacking tool in the robot's stacking jig meets the requirement of stacking 20 and 40 sections of the fuel cell stack, is a finish-machined assembly, and is fitted with multi-directional distance sensors to confirm the stacking accuracy.
8. The binocular vision material three-dimensional space detection method according to claim 1, wherein in step 4 the multi-directional distance sensors are arranged appropriately and are used mainly to repeatedly confirm the stacking accuracy.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110329345.1A CN113146625A (en) | 2021-03-28 | 2021-03-28 | Binocular vision material three-dimensional space detection method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113146625A true CN113146625A (en) | 2021-07-23 |
Family
ID=76885154
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110329345.1A Pending CN113146625A (en) | 2021-03-28 | 2021-03-28 | Binocular vision material three-dimensional space detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113146625A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040028517A1 (en) * | 2002-07-18 | 2004-02-12 | Lindquist David Allen | Automatic down-stacking technology |
CN109273750A (en) * | 2018-09-20 | 2019-01-25 | 北京氢璞创能科技有限公司 | A kind of automated fuel cell dress stack device |
CN110555878A (en) * | 2018-05-31 | 2019-12-10 | 上海微电子装备(集团)股份有限公司 | Method and device for determining object space position form, storage medium and robot |
CN110957515A (en) * | 2019-11-29 | 2020-04-03 | 山东魔方新能源科技有限公司 | Automatic fuel cell stacking system |
CN111129562A (en) * | 2020-01-15 | 2020-05-08 | 无锡先导智能装备股份有限公司 | Fuel cell stack production line |
CN111360879A (en) * | 2020-02-19 | 2020-07-03 | 哈尔滨工业大学 | Visual servo automatic positioning device based on distance measuring sensor and visual sensor |
CN111785997A (en) * | 2020-06-12 | 2020-10-16 | 东风汽车集团有限公司 | Automatic stacking device for fuel cell stack |
CN112366331A (en) * | 2020-12-07 | 2021-02-12 | 无锡先导自动化设备股份有限公司 | Fuel cell graphite bipolar plate production system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106625676B (en) | Three-dimensional visual accurate guiding and positioning method for automatic feeding in intelligent automobile manufacturing | |
CN106839979B (en) | The hand and eye calibrating method of line structured laser sensor | |
CN106197262B (en) | A kind of rectangular piece position and angle measurement method | |
US8346392B2 (en) | Method and system for the high-precision positioning of at least one object in a final location in space | |
CN106780623B (en) | Rapid calibration method for robot vision system | |
CN110370286A (en) | Dead axle motion rigid body spatial position recognition methods based on industrial robot and monocular camera | |
CN109676243A (en) | Weld distinguishing and tracking system and method based on dual laser structure light | |
CN201653373U (en) | Triaxial non-contact image measuring system | |
CN102735166A (en) | Three-dimensional scanner and robot system | |
CN110480615B (en) | Robot unstacking positioning correction method | |
CN107607064A (en) | LED fluorescent powder glue coating planeness detection system and method based on a cloud information | |
CN111421226B (en) | Pipe identification method and device based on laser pipe cutting equipment | |
CN113465513B (en) | Laser sensor inclination angle error measurement compensation method and system based on cylindrical angle square | |
CN109269422A (en) | A kind of experimental method and device of the check and correction of dot laser displacement sensor error | |
CN108305848A (en) | A kind of wafer automatic station-keeping system and the loading machine including it | |
CN103600353A (en) | Material edge detecting method of tooling | |
CN111609847A (en) | Automatic planning method of robot photographing measurement system for sheet parts | |
CN113198691B (en) | High-precision large-view-field dispensing method and device based on 3D line laser and CCD | |
CN113146625A (en) | Binocular vision material three-dimensional space detection method | |
CN106276285B (en) | Group material buttress position automatic testing method | |
CN109751987A (en) | A kind of vision laser locating apparatus and localization method for mechanical actuating mechanism | |
CN108917595A (en) | Glass on-line measuring device based on machine vision | |
CN209342062U (en) | 3D vision guide de-stacking measuring system | |
CN114918723A (en) | Workpiece positioning control system and method based on surface detection | |
JP2017007026A (en) | Position correcting system |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2021-07-23 |