CN111383321A - Three-dimensional modeling method and system based on 3D vision sensor

Three-dimensional modeling method and system based on 3D vision sensor

Info

Publication number
CN111383321A
CN111383321A
Authority
CN
China
Prior art keywords
coordinate system
modeled
user coordinate
mechanical arm
tail end
Prior art date
Legal status
Pending
Application number
CN201811621657.4A
Other languages
Chinese (zh)
Inventor
徐方
陈亮
王晓东
姜楠
潘鑫
宋健
Current Assignee
Shenyang Siasun Robot and Automation Co Ltd
Original Assignee
Shenyang Siasun Robot and Automation Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenyang Siasun Robot and Automation Co Ltd
Priority to CN201811621657.4A
Publication of CN111383321A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/1605 Simulation of manipulator lay-out, design, modelling of manipulator
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/003 Navigation within 3D models or images
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2004 Aligning objects, relative positioning of parts

Abstract

The application relates to the technical field of robots, and particularly discloses a three-dimensional modeling method and system based on a 3D vision sensor. The three-dimensional modeling method comprises the following steps: selecting the positions of the end of the mechanical arm and of the object to be modeled within the field of view of the 3D vision sensor, establishing a first user coordinate system, and acquiring depth data of the object to be modeled; determining three intersection points of the object to be modeled with the first user coordinate system; controlling the end of the mechanical arm to move toward the three intersection points, and recording the pose of the end of the mechanical arm whenever the output value of the force sensor changes; establishing a second user coordinate system; and mapping the object to be modeled into the base coordinate system of the mechanical arm through the second user coordinate system for three-dimensional modeling. By re-calibrating the user coordinate system on the object to be modeled, the method exploits the robot's high repeat positioning accuracy: when the position of the object to be modeled changes greatly, the error caused by the robot's lower absolute positioning accuracy is reduced, and the robot's three-dimensional modeling accuracy is improved.

Description

Three-dimensional modeling method and system based on 3D vision sensor
Technical Field
The application relates to the technical field of robots, in particular to a three-dimensional modeling method and a three-dimensional modeling system based on a 3D vision sensor.
Background
When a robot builds a three-dimensional model of an object in its base coordinate system, a large change in the object's position introduces significant error, because the origin of the user coordinate system does not lie on the object and the robot's absolute positioning accuracy is lower than its repeat positioning accuracy. This error degrades the three-dimensional model and the subsequent positioning and grasping operations.
Disclosure of Invention
In view of this, embodiments of the present application provide a three-dimensional modeling method and system based on a 3D vision sensor, to solve the problem that, in the prior art, positioning an object with a previously built model becomes inaccurate once the object's position changes.
In a first aspect, an embodiment of the present application provides a three-dimensional modeling method based on a 3D vision sensor, the method comprising:
selecting the positions of the end of the mechanical arm and of an object to be modeled within the field of view of the 3D vision sensor, establishing a first user coordinate system, and determining the positional relationship between the first user coordinate system and the 3D vision sensor;
if the object to be modeled is present at its position, acquiring depth data of the object to be modeled;
determining three intersection points of the object to be modeled with the first user coordinate system according to the depth data, and recording the coordinates of the three intersection points in the first user coordinate system;
controlling the end of the mechanical arm to move toward the three intersection points, and recording the pose of the end of the mechanical arm whenever the output value of a force sensor mounted on the mechanical arm changes;
establishing a second user coordinate system according to the poses of the end of the mechanical arm and the three intersection points;
and mapping the object to be modeled into the base coordinate system of the mechanical arm through the second user coordinate system to complete three-dimensional modeling of the object to be modeled.
Optionally, determining the positional relationship between the first user coordinate system and the 3D vision sensor comprises:
placing a calibration plate at the position of the object to be modeled, and establishing the first user coordinate system through the calibration plate;
determining, by the 3D vision sensor, the positional relationship between the first user coordinate system and the 3D vision sensor.
Optionally, the calibration plate is a checkerboard calibration plate;
accordingly, establishing the first user coordinate system through the calibration plate comprises:
adjusting the checkerboard calibration plate to the center point of the position of the object to be modeled;
and locating the checkerboard within the field of view through the 3D vision sensor, so as to establish the first user coordinate system on the checkerboard calibration plate.
Optionally, determining three intersection points of the object to be modeled with the first user coordinate system according to the depth data comprises:
converting the depth data into point cloud data, and triangulating the point cloud data;
establishing a triangular mesh data model of the object to be modeled from the triangulation result;
and determining the three intersection points of the object to be modeled with the first user coordinate system through the triangular mesh data model.
Optionally, controlling the end of the mechanical arm to move toward the three intersection points and recording the pose of the end of the mechanical arm when the output value of the force sensor mounted on the mechanical arm changes comprises:
controlling the end of the mechanical arm to move toward the origin from preset positions on the x, y and z axes of the first user coordinate system, respectively;
monitoring whether the output value of the force sensor changes while the end of the mechanical arm moves;
and if so, recording the pose of the end of the mechanical arm at that moment.
A second aspect of the embodiments of the present application provides a three-dimensional modeling system based on a 3D vision sensor, the system comprising:
a positional relationship determining module, configured to select the positions of the end of the mechanical arm and of an object to be modeled within the field of view of the 3D vision sensor, establish a first user coordinate system, and determine the positional relationship between the first user coordinate system and the 3D vision sensor;
a depth data acquisition module, configured to acquire the depth data of the object to be modeled when the object to be modeled is present at its position;
an intersection point coordinate acquisition module, configured to determine three intersection points of the object to be modeled with the first user coordinate system according to the depth data, and record the coordinates of the three intersection points in the first user coordinate system;
a pose acquisition module, configured to control the end of the mechanical arm to move toward the three intersection points, and record the pose of the end of the mechanical arm when the output value of the force sensor mounted on the mechanical arm changes;
a user coordinate system establishing module, configured to establish a second user coordinate system according to the poses of the end of the mechanical arm and the three intersection points;
and a mapping module, configured to map the object to be modeled into the base coordinate system of the mechanical arm through the second user coordinate system, to complete three-dimensional modeling of the object to be modeled.
Optionally, the positional relationship determining module is specifically configured to:
place a calibration plate at the position of the object to be modeled, and establish the first user coordinate system through the calibration plate;
and determine, by the 3D vision sensor, the positional relationship between the first user coordinate system and the 3D vision sensor.
Optionally, the calibration plate is a checkerboard calibration plate;
accordingly, establishing the first user coordinate system through the calibration plate comprises:
adjusting the checkerboard calibration plate to the center point of the position of the object to be modeled;
and locating the checkerboard within the field of view through the 3D vision sensor, so as to establish the first user coordinate system on the checkerboard calibration plate.
Optionally, when determining three intersection points of the object to be modeled with the first user coordinate system according to the depth data, the intersection point coordinate acquisition module is specifically configured to:
convert the depth data into point cloud data, and triangulate the point cloud data;
establish a triangular mesh data model of the object to be modeled from the triangulation result;
and determine the three intersection points of the object to be modeled with the first user coordinate system through the triangular mesh data model.
Optionally, the pose acquisition module is specifically configured to:
control the end of the mechanical arm to move toward the origin from preset positions on the x, y and z axes of the first user coordinate system, respectively;
monitor whether the output value of the force sensor changes while the end of the mechanical arm moves;
and if so, record the pose of the end of the mechanical arm at that moment.
By re-calibrating the user coordinate system on the object to be modeled, the embodiments provided by the application exploit the robot's high repeat positioning accuracy: when the position of the object to be modeled changes greatly, the error caused by the robot's lower absolute positioning accuracy is reduced, and the robot's three-dimensional modeling accuracy is improved.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly described below.
FIG. 1 is a schematic diagram of modeling based on a 3D vision sensor provided by an embodiment of the present application;
FIG. 2 is a schematic flow chart of an implementation of a three-dimensional modeling method based on a 3D vision sensor according to an embodiment of the present application;
FIG. 3 is a structural diagram of a three-dimensional modeling system based on a 3D vision sensor according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more apparent, the application is described in further detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein merely illustrate the application and do not limit it.
As shown in FIG. 1, the system for modeling based on a 3D vision sensor provided by the present application comprises: a mobile mechanical arm 1, a 3D vision sensor 2, a force sensor 3 and an object to be modeled 4. The 3D vision sensor is mounted independently of the mobile mechanical arm, and the force sensor is fitted at the end of the mobile mechanical arm.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Embodiment one:
Fig. 2 shows a schematic implementation flow of the three-dimensional modeling method based on a 3D vision sensor provided in an embodiment of the present application, comprising steps S21-S26, where:
step S21, selecting the positions of the tail end of the mechanical arm and the object to be modeled in the visual range of the 3D visual sensor, establishing a first user coordinate system, and determining the position relation between the first user coordinate system and the 3D visual sensor.
According to the modeling method, a modeling scene is set at first, specifically, the positions of the tail end of the mechanical arm and an object to be modeled are selected in a modeling range, or the tail end of the mechanical arm provided with the three-dimensional force sensor and the object to be modeled are fixed to one point in a visual field range of the 3D visual sensor respectively and are not shielded. And moving the object to be modeled, placing a calibration plate for calibration, establishing a coordinate system, and determining the position relation between the object and the 3D vision sensor through the established coordinate system.
Optionally, determining the positional relationship between the first user coordinate system and the 3D vision sensor comprises:
placing a calibration plate at the position of the object to be modeled, and establishing the first user coordinate system through the calibration plate;
determining, by the 3D vision sensor, the positional relationship between the first user coordinate system and the 3D vision sensor.
Optionally, the calibration plate is a checkerboard calibration plate;
accordingly, establishing the first user coordinate system through the calibration plate comprises:
adjusting the checkerboard calibration plate to the center point of the position of the object to be modeled;
and locating the checkerboard within the field of view through the 3D vision sensor, so as to establish the first user coordinate system on the checkerboard calibration plate.
Specifically, the checkerboard calibration plate is placed at the center point of the original position of the object to be modeled, facing the 3D vision sensor; the checkerboard within the field of view is then located by the 3D vision sensor; the robot's user coordinate system is then calibrated on the checkerboard; and finally the positional relationship between the 3D vision sensor and the user coordinate system is determined through the above steps. A sketch of this calibration step follows.
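As a minimal illustration, the following sketch locates the checkerboard and recovers the pose of the first user coordinate system in the sensor frame, assuming OpenCV and known camera intrinsics; the names (K, dist, board, square) and the specific board geometry are illustrative assumptions, not taken from the patent.

```python
import cv2
import numpy as np

def locate_checkerboard(gray, K, dist, board=(9, 6), square=0.025):
    """Return the 4x4 transform from the first user coordinate system
    (checkerboard frame) to the 3D vision sensor's camera frame, or None."""
    found, corners = cv2.findChessboardCorners(gray, board)
    if not found:
        return None
    # Refine corner locations to sub-pixel accuracy
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    # 3D corner positions in the board frame: the z = 0 plane of the board
    obj = np.zeros((board[0] * board[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    ok, rvec, tvec = cv2.solvePnP(obj, corners, K, dist)
    if not ok:
        return None
    T = np.eye(4)
    T[:3, :3] = cv2.Rodrigues(rvec)[0]  # board-to-camera rotation
    T[:3, 3] = tvec.ravel()             # board origin in camera coordinates
    return T
```

The returned transform T is exactly the positional relationship between the first user coordinate system and the 3D vision sensor that step S21 requires.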
Step S22: if the object to be modeled is present at its position, acquire the depth data of the object to be modeled.
In the embodiment provided by the application, the object to be modeled is put back in its original position, and its depth data can then be acquired with an RGBD camera.
Step S23: determine three intersection points of the object to be modeled with the first user coordinate system according to the depth data, and record the coordinates of the three intersection points in the first user coordinate system.
Optionally, determining three intersection points of the object to be modeled with the first user coordinate system according to the depth data comprises:
converting the depth data into point cloud data, and triangulating the point cloud data;
establishing a triangular mesh data model of the object to be modeled from the triangulation result;
and determining the three intersection points of the object to be modeled with the first user coordinate system through the triangular mesh data model.
In this method, the depth data is converted into point cloud data, the point cloud data is triangulated, and a triangular mesh data model of the object is formed; the first user coordinate system is then mapped onto the object to be modeled. A sketch of the conversion and triangulation follows.
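A minimal sketch of this step, assuming a pinhole depth camera with intrinsics fx, fy, cx, cy (illustrative parameters, not specified by the patent). Triangulating over the image grid is one common way to obtain the triangular mesh data model; the patent does not prescribe a particular triangulation.

```python
import numpy as np
from scipy.spatial import Delaunay

def depth_to_mesh(depth, fx, fy, cx, cy):
    """depth: HxW array in metres; returns (vertices Nx3, triangles Mx3)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                      # drop pixels with no depth return
    z = depth[valid]
    x = (u[valid] - cx) * z / fx           # back-project through the pinhole
    y = (v[valid] - cy) * z / fy
    vertices = np.column_stack((x, y, z))  # point cloud in the sensor frame
    # Triangulate over the image plane: neighbouring pixels become triangles,
    # yielding the triangular mesh data model of the visible surface.
    pix = np.column_stack((u[valid], v[valid]))
    triangles = Delaunay(pix).simplices
    return vertices, triangles
```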
Specifically, if the object to be modeled is in its original position (the position selected in step S21), the intersection points of the object with the three axes of the first user coordinate system, CPx(x0, y0, z0), CPy(x1, y1, z1) and CPz(x2, y2, z2), are determined through the triangular mesh data model of the object; these three points are all expressed in the 3D vision sensor's coordinate system.
If the object to be modeled is not in its original position, the intersection points CP'x(x'0, y'0, z'0), CP'y(x'1, y'1, z'1) and CP'z(x'2, y'2, z'2) of the object with the three axes of the user coordinate system, offset by the object's displacement from its original position as given by the vision sensor, are determined through the triangular mesh data model; these three points are likewise expressed in the 3D vision sensor's coordinate system.
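One way to compute CPx, CPy and CPz is to cast each axis of the first user coordinate system as a ray against the triangular mesh. The patent does not name an intersection algorithm, so the Moller-Trumbore ray/triangle test below is an assumption; a brute-force loop is used for clarity.

```python
import numpy as np

def ray_mesh_hit(origin, direction, vertices, triangles, eps=1e-9):
    """Nearest intersection point of a ray with a triangle mesh, or None."""
    best_t = np.inf
    for i0, i1, i2 in triangles:
        v0, v1, v2 = vertices[i0], vertices[i1], vertices[i2]
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(direction, e2)
        det = e1.dot(p)
        if abs(det) < eps:              # ray parallel to this triangle
            continue
        inv = 1.0 / det
        s = origin - v0
        u = s.dot(p) * inv              # first barycentric coordinate
        if u < 0.0 or u > 1.0:
            continue
        q = np.cross(s, e1)
        v = direction.dot(q) * inv      # second barycentric coordinate
        if v < 0.0 or u + v > 1.0:
            continue
        t = e2.dot(q) * inv             # distance along the ray
        if eps < t < best_t:
            best_t = t
    return None if np.isinf(best_t) else origin + best_t * direction

# Usage sketch: with T the board-to-sensor transform from the calibration
# sketch above, each axis of the first user coordinate system is a ray from
# its origin, e.g. the x axis gives CPx:
# cp_x = ray_mesh_hit(T[:3, 3], T[:3, 0], vertices, triangles)
```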
Step S24: control the end of the mechanical arm to move toward the three intersection points, and record the pose of the end of the mechanical arm when the output value of a force sensor mounted on the mechanical arm changes.
Optionally, controlling the end of the mechanical arm to move toward the three intersection points and recording the pose of the end of the mechanical arm when the output value of the force sensor changes comprises:
controlling the end of the mechanical arm to move toward the origin from preset positions on the x, y and z axes of the first user coordinate system, respectively;
monitoring whether the output value of the force sensor changes while the end of the mechanical arm moves;
and if so, recording the pose of the end of the mechanical arm at that moment.
In this step, if the object to be modeled is in its original position, the end of the mobile mechanical arm carrying the force sensor moves toward the origin O of the user coordinate system along the X, Y and Z axes of the first user coordinate system in turn, each time starting from a suitably distant point (not yet touching the object to be modeled).
If the object to be modeled is not in its original position, the end moves toward the origin O' along the X', Y' and Z' axes of the first user coordinate system offset by the object's displacement from its original position, as given by the vision sensor, again starting from a suitably distant point. A sketch of this guarded move follows.
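A minimal sketch of the guarded move along one axis: step the arm end toward the origin and record its pose as soon as the force sensor's output changes, i.e., at first contact with the object. The controller API (move_to, get_pose, read_force) and the step/threshold values are hypothetical; the patent does not specify a control interface.

```python
import numpy as np

def guarded_move(arm, force_sensor, start, target, step=0.001, threshold=0.5):
    """Step the arm end from `start` toward `target` (3-vectors, metres) and
    return the pose recorded at first contact, or None if nothing is touched."""
    baseline = np.asarray(force_sensor.read_force())  # output before contact
    span = np.asarray(target, dtype=float) - np.asarray(start, dtype=float)
    length = np.linalg.norm(span)
    direction = span / length
    for i in range(1, int(length / step) + 1):
        arm.move_to(start + i * step * direction)     # small Cartesian step
        # A change in the force sensor's output value signals contact
        reading = np.asarray(force_sensor.read_force())
        if np.linalg.norm(reading - baseline) > threshold:
            return arm.get_pose()                     # pose at contact
    return None
```

Running this once per axis (X, Y, Z toward O, or X', Y', Z' toward O') yields the three contact poses used in step S25.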
Step S25: establish a second user coordinate system according to the poses of the end of the mechanical arm and the three intersection points.
Step S26: map the object to be modeled into the base coordinate system of the mechanical arm through the second user coordinate system, to complete three-dimensional modeling of the object to be modeled. A sketch of one possible construction of the second user coordinate system and of the mapping follows.
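The patent does not spell out how the three recorded contact points are combined into a frame, so the Gram-Schmidt construction below (with the origin placed at one contact point) is an assumption offered as a minimal sketch; the contact points are assumed to be expressed in the arm's base coordinate system.

```python
import numpy as np

def frame_from_contacts(p_x, p_y, p_origin):
    """Build a 4x4 transform (second user frame to base frame) from three
    contact points expressed in the arm's base coordinate system."""
    x_axis = p_x - p_origin
    x_axis /= np.linalg.norm(x_axis)
    y_raw = p_y - p_origin
    y_axis = y_raw - y_raw.dot(x_axis) * x_axis   # Gram-Schmidt: remove x part
    y_axis /= np.linalg.norm(y_axis)
    z_axis = np.cross(x_axis, y_axis)             # right-handed frame
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2] = x_axis, y_axis, z_axis
    T[:3, 3] = p_origin
    return T

def map_to_base(points_user, T_user_to_base):
    """Map Nx3 object points from the second user frame to the base frame,
    completing the three-dimensional model in the arm's base coordinates."""
    homog = np.hstack([points_user, np.ones((len(points_user), 1))])
    return (T_user_to_base @ homog.T).T[:, :3]
```

Because the frame is rebuilt from physical contact each time, its accuracy depends on the robot's repeat positioning accuracy rather than its absolute positioning accuracy, which is the core of the patent's error-reduction argument.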
By re-calibrating the user coordinate system on the object to be modeled, the embodiment provided by the application exploits the robot's high repeat positioning accuracy: when the position of the object to be modeled changes greatly, the error caused by the robot's lower absolute positioning accuracy is reduced, and the robot's three-dimensional modeling accuracy is improved.
Embodiment two:
Fig. 3 shows a schematic structural diagram of a three-dimensional modeling system based on a 3D vision sensor according to another embodiment of the present application; the system comprises:
the positional relationship determining module 31, configured to select the positions of the end of the mechanical arm and of the object to be modeled within the field of view of the 3D vision sensor, establish a first user coordinate system, and determine the positional relationship between the first user coordinate system and the 3D vision sensor;
the depth data acquisition module 32, configured to acquire the depth data of the object to be modeled when the object to be modeled is present at its position;
the intersection point coordinate acquisition module 33, configured to determine three intersection points of the object to be modeled with the first user coordinate system according to the depth data, and record the coordinates of the three intersection points in the first user coordinate system;
the pose acquisition module 34, configured to control the end of the mechanical arm to move toward the three intersection points, and record the pose of the end of the mechanical arm when the output value of the force sensor mounted on the mechanical arm changes;
the user coordinate system establishing module 35, configured to establish a second user coordinate system according to the poses of the end of the mechanical arm and the three intersection points;
and the mapping module 36, configured to map the object to be modeled into the base coordinate system of the mechanical arm through the second user coordinate system, to complete three-dimensional modeling of the object to be modeled.
Optionally, the positional relationship determining module 31 is specifically configured to:
place a calibration plate at the position of the object to be modeled, and establish the first user coordinate system through the calibration plate;
and determine, by the 3D vision sensor, the positional relationship between the first user coordinate system and the 3D vision sensor.
Optionally, the calibration plate is a checkerboard calibration plate;
accordingly, establishing the first user coordinate system through the calibration plate comprises:
adjusting the checkerboard calibration plate to the center point of the position of the object to be modeled;
and locating the checkerboard within the field of view through the 3D vision sensor, so as to establish the first user coordinate system on the checkerboard calibration plate.
Optionally, when determining three intersection points of the object to be modeled with the first user coordinate system according to the depth data, the intersection point coordinate acquisition module 33 is specifically configured to:
convert the depth data into point cloud data, and triangulate the point cloud data;
establish a triangular mesh data model of the object to be modeled from the triangulation result;
and determine the three intersection points of the object to be modeled with the first user coordinate system through the triangular mesh data model.
Optionally, the pose acquisition module 34 is specifically configured to:
control the end of the mechanical arm to move toward the origin from preset positions on the x, y and z axes of the first user coordinate system, respectively;
monitor whether the output value of the force sensor changes while the end of the mechanical arm moves;
and if so, record the pose of the end of the mechanical arm at that moment.
By re-calibrating the user coordinate system on the object to be modeled, the embodiments provided by the application exploit the robot's high repeat positioning accuracy: when the position of the object to be modeled changes greatly, the error caused by the robot's lower absolute positioning accuracy is reduced, and the robot's three-dimensional modeling accuracy is improved.
The above-described embodiments should not be construed as limiting the scope of the present invention. Any corresponding changes and modifications made according to the technical idea of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (10)

1. A three-dimensional modeling method based on a 3D vision sensor, characterized by comprising the following steps:
selecting the positions of the end of the mechanical arm and of an object to be modeled within the field of view of the 3D vision sensor, establishing a first user coordinate system, and determining the positional relationship between the first user coordinate system and the 3D vision sensor;
if the object to be modeled is present at its position, acquiring depth data of the object to be modeled;
determining three intersection points of the object to be modeled with the first user coordinate system according to the depth data, and recording the coordinates of the three intersection points in the first user coordinate system;
controlling the end of the mechanical arm to move toward the three intersection points, and recording the pose of the end of the mechanical arm when the output value of a force sensor mounted on the mechanical arm changes;
establishing a second user coordinate system according to the poses of the end of the mechanical arm and the three intersection points;
and mapping the object to be modeled into the base coordinate system of the mechanical arm through the second user coordinate system to complete three-dimensional modeling of the object to be modeled.
2. The three-dimensional modeling method based on a 3D vision sensor of claim 1, wherein determining the positional relationship between the first user coordinate system and the 3D vision sensor comprises:
placing a calibration plate at the position of the object to be modeled, and establishing the first user coordinate system through the calibration plate;
determining, by the 3D vision sensor, the positional relationship between the first user coordinate system and the 3D vision sensor.
3. The three-dimensional modeling method based on a 3D vision sensor of claim 2, wherein the calibration plate is a checkerboard calibration plate;
accordingly, establishing the first user coordinate system through the calibration plate comprises:
adjusting the checkerboard calibration plate to the center point of the position of the object to be modeled;
and locating the checkerboard within the field of view through the 3D vision sensor, so as to establish the first user coordinate system on the checkerboard calibration plate.
4. The three-dimensional modeling method based on a 3D vision sensor of claim 1, wherein determining three intersection points of the object to be modeled with the first user coordinate system according to the depth data comprises:
converting the depth data into point cloud data, and triangulating the point cloud data;
establishing a triangular mesh data model of the object to be modeled from the triangulation result;
and determining the three intersection points of the object to be modeled with the first user coordinate system through the triangular mesh data model.
5. The three-dimensional modeling method based on a 3D vision sensor of any one of claims 1-4, wherein controlling the end of the mechanical arm to move toward the three intersection points and recording the pose of the end of the mechanical arm when the output value of the force sensor mounted on the mechanical arm changes comprises:
controlling the end of the mechanical arm to move toward the origin from preset positions on the x, y and z axes of the first user coordinate system, respectively;
monitoring whether the output value of the force sensor changes while the end of the mechanical arm moves;
and if so, recording the pose of the end of the mechanical arm at that moment.
6. A three-dimensional modeling system based on a 3D vision sensor, characterized in that the three-dimensional modeling system comprises:
a positional relationship determining module, configured to select the positions of the end of the mechanical arm and of an object to be modeled within the field of view of the 3D vision sensor, establish a first user coordinate system, and determine the positional relationship between the first user coordinate system and the 3D vision sensor;
a depth data acquisition module, configured to acquire the depth data of the object to be modeled when the object to be modeled is present at its position;
an intersection point coordinate acquisition module, configured to determine three intersection points of the object to be modeled with the first user coordinate system according to the depth data, and record the coordinates of the three intersection points in the first user coordinate system;
a pose acquisition module, configured to control the end of the mechanical arm to move toward the three intersection points, and record the pose of the end of the mechanical arm when the output value of the force sensor mounted on the mechanical arm changes;
a user coordinate system establishing module, configured to establish a second user coordinate system according to the poses of the end of the mechanical arm and the three intersection points;
and a mapping module, configured to map the object to be modeled into the base coordinate system of the mechanical arm through the second user coordinate system, to complete three-dimensional modeling of the object to be modeled.
7. The three-dimensional modeling system based on a 3D vision sensor of claim 6, wherein the positional relationship determining module is specifically configured to:
place a calibration plate at the position of the object to be modeled, and establish the first user coordinate system through the calibration plate;
and determine, by the 3D vision sensor, the positional relationship between the first user coordinate system and the 3D vision sensor.
8. The three-dimensional modeling system based on a 3D vision sensor of claim 7, wherein the calibration plate is a checkerboard calibration plate;
accordingly, establishing the first user coordinate system through the calibration plate comprises:
adjusting the checkerboard calibration plate to the center point of the position of the object to be modeled;
and locating the checkerboard within the field of view through the 3D vision sensor, so as to establish the first user coordinate system on the checkerboard calibration plate.
9. The three-dimensional modeling system based on a 3D vision sensor of claim 6, wherein, when determining three intersection points of the object to be modeled with the first user coordinate system according to the depth data, the intersection point coordinate acquisition module is specifically configured to:
convert the depth data into point cloud data, and triangulate the point cloud data;
establish a triangular mesh data model of the object to be modeled from the triangulation result;
and determine the three intersection points of the object to be modeled with the first user coordinate system through the triangular mesh data model.
10. The three-dimensional modeling system based on a 3D vision sensor of any one of claims 6-9, wherein the pose acquisition module is specifically configured to:
control the end of the mechanical arm to move toward the origin from preset positions on the x, y and z axes of the first user coordinate system, respectively;
monitor whether the output value of the force sensor changes while the end of the mechanical arm moves;
and if so, record the pose of the end of the mechanical arm at that moment.
CN201811621657.4A 2018-12-28 2018-12-28 Three-dimensional modeling method and system based on 3D vision sensor Pending CN111383321A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811621657.4A CN111383321A (en) 2018-12-28 2018-12-28 Three-dimensional modeling method and system based on 3D vision sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811621657.4A CN111383321A (en) 2018-12-28 2018-12-28 Three-dimensional modeling method and system based on 3D vision sensor

Publications (1)

Publication Number Publication Date
CN111383321A (en) 2020-07-07

Family

ID=71217768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811621657.4A Pending CN111383321A (en) 2018-12-28 2018-12-28 Three-dimensional modeling method and system based on 3D vision sensor

Country Status (1)

Country Link
CN (1) CN111383321A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040168148A1 (en) * 2002-12-17 2004-08-26 Goncalves Luis Filipe Domingues Systems and methods for landmark generation for visual simultaneous localization and mapping
CN108256430A (en) * 2017-12-20 2018-07-06 北京理工大学 Obstacle information acquisition methods, device and robot
CN108724190A (en) * 2018-06-27 2018-11-02 西安交通大学 A kind of industrial robot number twinned system emulation mode and device

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
张晓龙, 尹仕斌, 任永杰, 郭寅, 杨凌辉, 王一: "Research on a high-precision flexible vision measurement system based on global spatial control", Infrared and Laser Engineering (红外与激光工程) *
杜宇楠, 叶平, 孙汉旭: "Three-dimensional scene reconstruction based on synchronized laser and stereo vision data", Software (软件) *
杨扬, 曹其新, 朱笑笑, 陈培华: "A 3D modeling method for robot hand-eye coordinated grasping", Robot (机器人) *
杨贺然, 张莉彦: "Research on robot target grasping based on an end-effector open-loop vision system", Modular Machine Tool & Automatic Manufacturing Technique (组合机床与自动化加工技术) *
邹媛媛, 李鹏飞, 左克铸: "On-site calibration method for a three-line structured-light vision sensor", Infrared and Laser Engineering (红外与激光工程) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117415826A (en) * 2023-12-19 2024-01-19 苏州一目万相科技有限公司 Control method and device of detection system and readable storage medium
CN117415826B (en) * 2023-12-19 2024-02-23 苏州一目万相科技有限公司 Control method and device of detection system and readable storage medium

Similar Documents

Publication Title
CN109153125B (en) Method for orienting an industrial robot and industrial robot
AU2018295572B2 (en) Real time position and orientation tracker
EP3068607B1 (en) System for robotic 3d printing
JP5670416B2 (en) Robot system display device
EP0489919B1 (en) Calibration system of visual sensor
JP6812095B2 (en) Control methods, programs, recording media, robotic devices, and manufacturing methods for articles
CN100489448C (en) Method for calibrating workpieces coordinate system
JP5371927B2 (en) Coordinate system calibration method and robot system
CN111442722A (en) Positioning method, positioning device, storage medium and electronic equipment
JP7102115B2 (en) Calibration method, calibration device, 3D measuring device, 3D visual measuring device, robot end effector, program, recording medium
CN108972544A (en) A kind of vision laser sensor is fixed on the hand and eye calibrating method of robot
CN105451461A (en) PCB board positioning method based on SCARA robot
JP6855491B2 (en) Robot system, robot system control device, and robot system control method
CN110695982A (en) Mechanical arm hand-eye calibration method and device based on three-dimensional vision
CN110672049A (en) Method and system for determining the relation between a robot coordinate system and a workpiece coordinate system
TW202212081A (en) Calibration apparatus and calibration method for coordinate system of robotic arm
CN111383321A (en) Three-dimensional modeling method and system based on 3D vision sensor
CN109909999B (en) Method and device for acquiring TCP (Transmission control protocol) coordinates of robot
CN114310868B (en) Coordinate system correction device and method for robot arm
CN109737871A (en) A kind of scaling method of the relative position of three-dimension sensor and mechanical arm
CN110238851B (en) Mobile robot and rapid calibration method and system thereof
CN115446836B (en) Visual servo method based on mixing of various image characteristic information
WO2021145280A1 (en) Robot system
EP3738725B1 (en) Measurement system, measurement device, measurement method, and measurement program
CN113960614A (en) Elevation map construction method based on frame-map matching

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20200707)