CN111590573A - Construction method and system for three-dimensional environment of robot

Info

Publication number
CN111590573A
Authority
CN
China
Prior art keywords
dimensional
robot
data
environment
acquisition unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010414476.5A
Other languages
Chinese (zh)
Inventor
史超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Guoxin Taifu Technology Co ltd
Original Assignee
Shenzhen Guoxin Taifu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Guoxin Taifu Technology Co ltd
Priority to CN202010414476.5A
Publication of CN111590573A

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697 Vision controlled systems

Abstract

The invention discloses a construction method and a system for a three-dimensional environment of a robot, belonging to the technical field of robots. The method comprises the following steps: step S1, acquiring a three-dimensional depth image of a target area in the external working environment; step S2, collecting a radar scanning image of the external working environment; step S3, merging the three-dimensional depth image, the radar scanning image and the coordinate information of the acquisition unit in the global coordinate system, and obtaining point cloud data of the external working environment and the target area in the global coordinate system through point cloud conversion; step S4, processing the coordinate information of the acquisition unit in the global coordinate system to obtain mask data of the robot body; step S5, performing voxel grid conversion according to the point cloud data and the mask data, and establishing the three-dimensional environment from the conversion result. The system comprises an image acquisition unit, a transformation unit, a three-dimensional conversion unit and a construction unit. The beneficial effects are that: by selecting the target area in which the three-dimensional environment is constructed, a user can effectively meet the different pixel requirements placed on the three-dimensional environment under different application conditions.

Description

Construction method and system for three-dimensional environment of robot
Technical Field
The invention relates to the technical field of robots, in particular to a method and a system for constructing a three-dimensional environment of a robot.
Background
Robots were first applied in the technical field of industrial manufacturing. An industrial robot is a multi-joint manipulator or a multi-degree-of-freedom machine intended for the industrial field; it can execute work automatically and realize various functions by means of its own power and control capability. An industrial robot can be controlled directly by an operator through a corresponding control terminal, or can run automatically according to a pre-programmed program. Before executing a task, the robot needs to construct a three-dimensional environment through the sensors it carries and exchange that three-dimensional environment with the control terminal. Because the specific working environment of the robot changes continuously, and because of the limits of signal transmission bandwidth and of the processing capability of the onboard processor, the robot usually selects a low-pixel construction frame when building the three-dimensional environment, so as to save bandwidth and processor resources. However, when the robot needs to complete a fine task through fine actions, a low-pixel construction frame cannot meet the requirements. A three-dimensional environment construction method that can change dynamically according to the operator's requirements is therefore needed.
Disclosure of Invention
In view of the problems in the prior art, a method and a system for constructing a three-dimensional environment of a robot are provided, with which a user, by selecting a target area in which the three-dimensional environment is constructed, can effectively meet the different pixel requirements placed on the three-dimensional environment under different application conditions.
The technical scheme specifically comprises the following steps:
a robot three-dimensional environment construction method, wherein the robot constructs a global coordinate system in advance; the robot collects video data and image data of the external working environment in real time and transmits them to a control terminal remotely connected to the robot; and a user selects a target area in the working environment through the control terminal;
the construction method specifically comprises the following steps:
step S1, acquiring a three-dimensional depth image of the target area in the external working environment through an acquisition unit;
step S2, collecting the radar scanning image of the external working environment through the acquisition unit;
step S3, merging the three-dimensional depth image, the radar scanning image and the coordinate information of the acquisition unit in the global coordinate system to obtain mask data of the target area, and obtaining point cloud data of the external working environment through point cloud conversion;
step S4, processing according to the coordinate information of the acquisition unit in the global coordinate system to obtain the mask data of the robot body;
and step S5, performing voxel grid conversion according to the point cloud data and the mask data, and establishing a three-dimensional environment according to a conversion result.
Preferably, before executing step S5, the method further includes:
and step S50, acquiring color information in the external working environment through a third acquisition unit, and adding data representing the colors to the corresponding points in the point cloud data according to the color information.
Preferably, in step S5, a three-dimensional voxel frame is pre-constructed, and the point cloud data is subjected to voxel grid conversion in the three-dimensional voxel frame.
Preferably, the data storage structure in the three-dimensional voxel frame is an octree structure.
Preferably, the three-dimensional voxel frame is divided into three types, namely a high-resolution frame, a coarse-resolution frame and a user-defined resolution frame.
A robot three-dimensional environment construction system, wherein the robot constructs a global coordinate system in advance; the robot collects video data and image data of the external working environment in real time and transmits them to a control terminal remotely connected to the robot; and a user selects a target area in the working environment through the control terminal;
the construction system specifically comprises:
the image acquisition unit is used for acquiring a three-dimensional depth image of the target area in an external working environment and acquiring a radar scanning image of the external working environment;
the transformation unit is connected with the image acquisition unit and is used for merging the three-dimensional depth image, the radar scanning image and the coordinate information of the acquisition unit in the global coordinate system to acquire mask data of the target area and obtain point cloud data of the external working environment through point cloud transformation;
the three-dimensional conversion unit is connected with the transformation unit and used for carrying out voxel grid conversion according to the point cloud data and the mask data to generate a conversion result;
the mask acquisition unit is used for processing the coordinate information of the acquisition unit in the global coordinate system to obtain mask data of the robot body;
and the construction unit is connected with the three-dimensional conversion unit and used for constructing the three-dimensional environment according to the conversion result.
Preferably, the system further comprises:
the image acquisition unit is also used for acquiring color information in the external working environment;
and the processing unit is connected with the image acquisition unit and the transformation unit and is used for adding data representing the colors to the corresponding points in the point cloud data according to the color information.
Preferably, a three-dimensional voxel frame is pre-constructed in the three-dimensional conversion unit, and the point cloud data is subjected to voxel grid conversion in the three-dimensional voxel frame.
Preferably, the data storage structure in the three-dimensional voxel frame is an octree structure.
Preferably, the three-dimensional voxel frame is divided into three types, namely a high-resolution frame, a coarse-resolution frame and a user-defined resolution frame.
The beneficial effects of the above technical scheme are that:
the method and the system for constructing the three-dimensional environment of the robot are provided, and a user can effectively meet different pixel requirements on the three-dimensional environment under different application conditions by selecting a target area to construct the three-dimensional environment.
Drawings
FIG. 1 is a flow chart illustrating a method for constructing a three-dimensional environment of a robot according to a preferred embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a construction system of a robot three-dimensional environment in a preferred embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
A robot three-dimensional environment construction method, wherein the robot constructs a global coordinate system in advance; the robot collects video data and image data of the external working environment in real time and transmits them to a control terminal remotely connected to the robot; and a user selects a target area in the working environment through the control terminal;
as shown in fig. 1, the construction method specifically includes:
step S1, acquiring a three-dimensional depth image of the target area in the external working environment through an acquisition unit;
step S2, collecting a radar scanning image of the external working environment through the acquisition unit;
step S3, merging the three-dimensional depth image, the radar scanning image and the coordinate information of the acquisition unit in the global coordinate system to obtain mask data of the target area, and obtaining point cloud data of the external working environment through point cloud conversion;
step S4, processing according to the coordinate information of the acquisition unit in the global coordinate system to obtain the mask data of the robot body;
and step S5, performing voxel grid conversion according to the point cloud data and the mask data, and establishing a three-dimensional environment according to a conversion result.
As a preferred embodiment, a global coordinate system is pre-constructed in the robot, with the geographical position at which the robot is started as its origin. Several groups of acquisition sensors are arranged on the robot body to collect different data from the working environment. In one specific embodiment, two radar scanners arranged at the head of the robot capture 360-degree geometric data surrounding the robot; one full fisheye camera is arranged at the front of the robot and one at the rear, and these fisheye cameras collect video texture data of the environment in front of and behind the robot. The head of the robot also carries a pair of high-dynamic wide-view baseline stereo cameras and a pair of high-dynamic narrow-view baseline stereo cameras: the wide-view baseline stereo cameras realize the visual odometry function and detect obstacles in the robot's direction of travel, while the narrow-view baseline stereo cameras acquire the detailed information of the operation task. All acquisition modules work cooperatively in the construction of the robot's three-dimensional model;
the video data and the image data collected by the robot in real time have various definitions, after a user checks the video image data returned by the robot through a control terminal, a target area in the working environment of the robot is selected according to the task requirement of the robot, the target area corresponds to the area of the robot which needs to execute a fine task, a collecting unit collects a three-dimensional depth image in the target area, the collecting unit collects a radar scanning image of the whole working environment, coordinate conversion and merging processing of the video image data are carried out according to the position of the collecting unit on the robot body, namely the coordinate of the collecting unit in a global coordinate system and the coordinate of the collecting unit in the global coordinate system, and then point cloud data under the coarse resolution of the whole external working environment and point cloud data under the high resolution of the target area are obtained through point cloud conversion, carrying out voxel grid transformation on the generated point cloud data so as to establish a three-dimensional environment inside the robot; in another embodiment of the present invention, the target area may be selected in a plurality of numbers, and accordingly, the generated three-dimensional environment includes a plurality of high-resolution voxel grid areas corresponding to the target area.
In a preferred embodiment of the present invention, before performing step S5, the method further includes:
and step S50, acquiring color information in the external working environment through a third acquisition unit, and adding data representing the colors to the corresponding points in the point cloud data according to the color information.
Specifically, in this embodiment, the third acquisition unit collects color information in the robot's working environment and colors the regions corresponding to the point cloud data, giving the robot the ability to identify the colors of objects; the specific color data can be inserted into the corresponding point cloud data in the form of RGB values, thereby coloring the constructed three-dimensional model.
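As a sketch of this coloring operation, RGB values can be appended to the point rows as follows, assuming, purely for brevity, that each point remembers the pixel it was projected from; in practice an extrinsic calibration between the color camera and the depth sensor provides this correspondence. All names here are illustrative.

    import numpy as np

    def colorize_cloud(points, color_img, pixel_idx):
        # points:    (N, 3) global-frame coordinates
        # color_img: (H, W, 3) uint8 RGB image from the third acquisition unit
        # pixel_idx: (N, 2) row/column of the pixel each point came from
        # Returns an (N, 6) array of [x, y, z, r, g, b] rows.
        rgb = color_img[pixel_idx[:, 0], pixel_idx[:, 1]].astype(np.float64)
        return np.hstack([points, rgb])

    # Illustrative use:
    pts = np.random.rand(1000, 3)
    img = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
    idx = np.column_stack([np.random.randint(0, 480, 1000),
                           np.random.randint(0, 640, 1000)])
    colored = colorize_cloud(pts, img, idx)   # shape (1000, 6)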
In the preferred embodiment of the present invention, in step S5, a three-dimensional voxel frame is pre-constructed, and the point cloud data is subjected to voxel grid conversion in the three-dimensional voxel frame.
In a preferred embodiment of the present invention, the data storage structure in the three-dimensional voxel frame is an octree structure.
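The practical point of the octree is that only occupied leaves need to be stored, so large empty regions of the working environment cost nothing. The toy Python class below illustrates the addressing principle, recursively halving the root cube down to a fixed depth; it is a sketch of the idea, not the data structure actually used by the patent.

    class Octree:
        # Toy octree over the cube [0, size)^3 storing occupied leaf voxels.
        def __init__(self, size, depth):
            self.size = size      # edge length of the root cube (meters)
            self.depth = depth    # levels; leaf edge = size / 2**depth
            self.leaves = {}      # path tuple -> occupancy flag

        def _path(self, x, y, z):
            path, half, ox = [], self.size / 2.0, [0.0, 0.0, 0.0]
            for _ in range(self.depth):
                octant = ((x >= ox[0] + half) << 2 | (y >= ox[1] + half) << 1
                          | (z >= ox[2] + half))
                # shift the cube origin toward the chosen octant
                ox = [o + half * ((octant >> s) & 1)
                      for o, s in zip(ox, (2, 1, 0))]
                half /= 2.0
                path.append(octant)
            return tuple(path)

        def insert(self, x, y, z):
            self.leaves[self._path(x, y, z)] = True

        def occupied(self, x, y, z):
            return self.leaves.get(self._path(x, y, z), False)

    # Illustrative use: a 10 m cube with 2**6 = 64 leaf voxels per edge.
    tree = Octree(size=10.0, depth=6)
    tree.insert(1.2, 3.4, 0.5)
    assert tree.occupied(1.2, 3.4, 0.5)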
In the preferred embodiment of the present invention, the three-dimensional voxel frame is divided into three types, a high resolution frame, a coarse resolution frame and a user-defined resolution frame.
Specifically, in this embodiment, the resolution of a given region of the three-dimensional voxel frame is set according to specific needs: for example, the point cloud data collected in the target region is processed with the high-resolution frame, while the surrounding environment may be processed with the coarse-resolution frame in order to save resources.
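A minimal sketch of this mixed-resolution voxel grid conversion, applying the robot-body mask of step S4 first and then quantizing the target region finely and the rest of the environment coarsely. The voxel edge lengths, the box-shaped target region and all names are assumptions chosen for illustration.

    import numpy as np

    def voxelize(points, voxel_size):
        # Quantize points to integer voxel indices; unique rows = occupied voxels.
        return np.unique(np.floor(points / voxel_size).astype(np.int64), axis=0)

    def build_voxel_env(cloud, target_min, target_max, body_mask,
                        fine=0.02, coarse=0.20):
        # cloud:     (N, 3) merged global-frame point cloud
        # body_mask: (N,) boolean, True where a point lies on the robot body
        pts = cloud[~body_mask]                    # step S4: mask out the body
        in_target = np.all((pts >= target_min) & (pts <= target_max), axis=1)
        return {
            "target": voxelize(pts[in_target], fine),          # high-resolution
            "environment": voxelize(pts[~in_target], coarse),  # coarse frame
        }

    # Illustrative use: a 1 m cube target region inside a 10 m scene.
    cloud = np.random.uniform(0, 10, size=(50000, 3))
    mask = np.zeros(len(cloud), dtype=bool)        # no body points in this demo
    env = build_voxel_env(cloud, np.array([4.0, 4.0, 0.0]),
                          np.array([5.0, 5.0, 1.0]), mask)

Storing only the occupied voxel indices in this form also shows why the octree of the preceding paragraph is attractive: coarse and fine leaves could share a single tree, with fine leaves present only inside the target region.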
A robot three-dimensional environment construction system, wherein the robot constructs a global coordinate system in advance; the robot collects video data and image data of the external working environment in real time and transmits them to a control terminal remotely connected to the robot; and a user selects a target area in the working environment through the control terminal;
as shown in fig. 2, the construction system specifically includes:
an image acquisition unit for acquiring a three-dimensional depth image of the target area in the external working environment and acquiring a radar scanning image of the external working environment;
The transformation unit is connected with the image acquisition unit and is used for merging the three-dimensional depth image, the radar scanning image and the coordinate information of the acquisition unit in the global coordinate system to acquire mask data of the target area and obtain point cloud data of the external working environment through point cloud transformation;
the three-dimensional conversion unit 4 is connected with the transformation unit 3 and is used for carrying out voxel grid conversion according to the point cloud data and the mask data to generate a conversion result;
the mask acquisition unit 6 is used for processing the coordinate information of the acquisition unit in the global coordinate system to obtain mask data of the robot body;
and the construction unit 5 is connected with the three-dimensional conversion unit 4 and is used for constructing a three-dimensional environment according to the conversion result.
In the preferred embodiment of the present invention, as shown in fig. 2, the system further includes:
the image acquisition unit is also used for acquiring color information in the external working environment;
and the processing unit 2 is connected with the image acquisition unit 1 and the transformation unit 3 and is used for adding data representing the colors to the corresponding points in the point cloud data according to the color information.
In the preferred embodiment of the present invention, a three-dimensional voxel frame is pre-constructed in the three-dimensional conversion unit 4, and the point cloud data is subjected to voxel grid conversion in the three-dimensional voxel frame.
In a preferred embodiment of the present invention, the data storage structure in the three-dimensional voxel frame is an octree structure.
In the preferred embodiment of the present invention, the three-dimensional voxel frame is divided into three types, namely a high-resolution frame, a coarse-resolution frame and a user-defined resolution frame.
The beneficial effects of the above technical scheme are that:
the method and the system for constructing the three-dimensional environment of the robot are provided, and a user can effectively meet different pixel requirements on the three-dimensional environment under different application conditions by selecting a target area to construct the three-dimensional environment.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.

Claims (8)

1. A construction method of a three-dimensional environment of a robot, characterized in that the robot constructs a global coordinate system in advance; the robot collects video data and image data of an external working environment in real time and transmits them to a control terminal remotely connected to the robot; and a user selects a target area in the working environment through the control terminal;
the construction method specifically comprises the following steps:
step S1, acquiring a three-dimensional depth image of the target area in the external working environment through an acquisition unit;
step S2, collecting the radar scanning image of the external working environment through the acquisition unit;
step S3, merging the three-dimensional depth image, the radar scanning image and the coordinate information of the acquisition unit in the global coordinate system, and obtaining point cloud data of the external working environment and the target area in the global coordinate system through point cloud conversion;
step S4, processing according to the coordinate information of the acquisition unit in the global coordinate system to obtain the mask data of the robot body;
and step S5, performing voxel grid conversion according to the point cloud data and the mask data, and establishing a three-dimensional environment according to a conversion result.
2. The method for constructing a three-dimensional environment of a robot according to claim 1, further comprising, before performing step S5:
and step S50, acquiring color information in the external working environment through the acquisition unit, and adding data representing the colors to the corresponding points in the point cloud data according to the color information.
3. The method for constructing a three-dimensional environment of a robot according to claim 1, wherein in step S5, a three-dimensional voxel frame is constructed in advance, and the point cloud data is subjected to voxel grid conversion in the three-dimensional voxel frame.
4. The method for constructing a three-dimensional environment of a robot according to claim 3, wherein the three-dimensional voxel frame is divided into three types, namely a high-resolution frame, a coarse-resolution frame and a user-defined resolution frame.
5. A robot three-dimensional environment construction system, characterized in that the robot constructs a global coordinate system in advance; the robot collects video data and image data of an external working environment in real time and transmits them to a control terminal remotely connected to the robot; and a user selects a target area in the working environment through the control terminal;
the construction system specifically comprises:
the image acquisition unit is used for acquiring a three-dimensional depth image of the target area in an external working environment and acquiring a radar scanning image of the external working environment;
the transformation unit is connected with the image acquisition unit and is used for merging the three-dimensional depth image, the radar scanning image and the coordinate information of the acquisition unit in the global coordinate system to acquire mask data of the target area and obtain point cloud data of the external working environment through point cloud transformation;
the three-dimensional conversion unit is connected with the transformation unit and used for carrying out voxel grid conversion according to the point cloud data and the mask data to generate a conversion result;
the mask acquisition unit is used for processing the coordinate information of the acquisition unit in the global coordinate system to obtain mask data of the robot body;
and the construction unit is connected with the three-dimensional conversion unit and the mask acquisition unit and is used for constructing the three-dimensional environment according to the conversion result.
6. The system for constructing a robot three-dimensional environment according to claim 5, further comprising:
the image acquisition unit is also used for acquiring color information in the external working environment;
and the processing unit is connected with the image acquisition unit and the transformation unit and is used for adding data representing the colors to the corresponding points in the point cloud data according to the color information.
7. The system for constructing a three-dimensional environment of a robot according to claim 5, wherein a three-dimensional voxel frame is constructed in the three-dimensional transformation unit in advance, and the point cloud data is subjected to voxel grid transformation in the three-dimensional voxel frame.
8. The system for constructing a three-dimensional environment according to claim 7, wherein the three-dimensional voxel frame is divided into three types, namely a high-resolution frame, a coarse-resolution frame and a user-defined resolution frame.
CN202010414476.5A 2020-05-15 2020-05-15 Construction method and system for three-dimensional environment of robot Pending CN111590573A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010414476.5A CN111590573A (en) 2020-05-15 2020-05-15 Construction method and system for three-dimensional environment of robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010414476.5A CN111590573A (en) 2020-05-15 2020-05-15 Construction method and system for three-dimensional environment of robot

Publications (1)

Publication Number Publication Date
CN111590573A true CN111590573A (en) 2020-08-28

Family

ID=72182757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010414476.5A Pending CN111590573A (en) 2020-05-15 2020-05-15 Construction method and system for three-dimensional environment of robot

Country Status (1)

Country Link
CN (1) CN111590573A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5499306A (en) * 1993-03-08 1996-03-12 Nippondenso Co., Ltd. Position-and-attitude recognition method and apparatus by use of image pickup means
CN102510506A (en) * 2011-09-30 2012-06-20 北京航空航天大学 Virtual and real occlusion handling method based on binocular image and range information
CN104552295A (en) * 2014-12-19 2015-04-29 华南理工大学 Man-machine skill transmission system based on multi-information fusion
CN104914863A (en) * 2015-05-13 2015-09-16 北京理工大学 Integrated unmanned motion platform environment understanding system and work method thereof
CN105513132A (en) * 2015-12-25 2016-04-20 深圳市双目科技有限公司 Real-time map construction system, method and device
CN110842940A (en) * 2019-11-19 2020-02-28 广东博智林机器人有限公司 Building surveying robot multi-sensor fusion three-dimensional modeling method and system

Similar Documents

Publication Publication Date Title
CN111062873B (en) Parallax image splicing and visualization method based on multiple pairs of binocular cameras
US8989876B2 (en) Situational awareness for teleoperation of a remote vehicle
CN106548516B (en) Three-dimensional roaming method and device
CN111151463B (en) Mechanical arm sorting and grabbing system and method based on 3D vision
JPH09187038A (en) Three-dimensional shape extract device
CN105843166B (en) A kind of special type multiple degrees of freedom automatic butt jointing device and its working method
JP4976939B2 (en) Image processing device
JP2015043488A (en) Remote controller and remote construction method using the same
JP6513300B1 (en) IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM
CN115512042A (en) Network training and scene reconstruction method, device, machine, system and equipment
CN114407015A (en) Teleoperation robot online teaching system and method based on digital twins
CN117021059B (en) Picking robot, fruit positioning method and device thereof, electronic equipment and medium
CN111866467B (en) Method and device for determining three-dimensional coverage space of monitoring video and storage medium
CN111590573A (en) Construction method and system for three-dimensional environment of robot
US20160150189A1 (en) Image processing system and method
CN115294207A (en) Fusion scheduling system and method for smart campus monitoring video and three-dimensional GIS model
CN114782521A (en) Engineering vehicle operation information determination method and device, driving system and engineering vehicle
CN115147495A (en) Calibration method, device and system for vehicle-mounted system
CN109552262B (en) Washing supplementing method and system, upper computer and storage medium
Wong et al. A study of different unwarping methods for omnidirectional imaging
CN113516157A (en) Embedded three-dimensional scanning system and three-dimensional scanning device
CN111625001A (en) Robot control method and device and industrial robot
CN111890352A (en) Mobile robot touch teleoperation control method based on panoramic navigation
Hanabusa et al. 3D map generation for decommissioning work
Moon et al. Development of immersive augmented reality interface for construction robotic system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200828