CN109191522B - Robot displacement correction method and system based on three-dimensional modeling

Robot displacement correction method and system based on three-dimensional modeling

Info

Publication number
CN109191522B
CN109191522B
Authority
CN
China
Prior art keywords
robot
displacement
user
user terminal
detection instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811033086.2A
Other languages
Chinese (zh)
Other versions
CN109191522A (en)
Inventor
周磊
谭军民
曹永军
李丽丽
杨芹
赵良红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Institute of Intelligent Manufacturing
Shunde Polytechnic
South China Robotics Innovation Research Institute
Original Assignee
Guangdong Institute of Intelligent Manufacturing
Shunde Polytechnic
South China Robotics Innovation Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Institute of Intelligent Manufacturing, Shunde Polytechnic, South China Robotics Innovation Research Institute filed Critical Guangdong Institute of Intelligent Manufacturing
Priority to CN201811033086.2A priority Critical patent/CN109191522B/en
Publication of CN109191522A publication Critical patent/CN109191522A/en
Application granted granted Critical
Publication of CN109191522B publication Critical patent/CN109191522B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a robot displacement correction method and system based on three-dimensional modeling, wherein the method comprises the following steps: the detection equipment receives a detection instruction for detecting the displacement of the robot, which is sent by a user terminal, wherein the detection instruction is generated by a user of the user terminal based on the operation of an operation interface of the user terminal; the detection equipment responds to the detection instruction, starts a binocular camera on the detection equipment, and collects real-time images of the robot in multiple directions; performing three-dimensional modeling processing according to the real-time image to obtain a three-dimensional space model image of the robot; determining the actual position of the displacement of the robot based on the three-dimensional space model image; determining the theoretical position of the robot displacement based on a displacement trajectory planning algorithm; comparing the actual position of the robot displacement with the theoretical position of the robot displacement to obtain the position difference of the robot displacement; and performing displacement correction according to the position difference of the displacement of the robot. In the embodiment of the invention, the displacement deviation of the robot can be quickly corrected.

Description

Robot displacement correction method and system based on three-dimensional modeling
Technical Field
The invention relates to the technical field of robot displacement control, in particular to a robot displacement correction method and system based on three-dimensional modeling.
Background
A robot is a machine device that performs work automatically; it can accept human commands, run programs programmed in advance, and can also act according to principles formulated with artificial-intelligence techniques. Its task is to assist or replace human work, such as production, construction, or dangerous work.
However, owing to errors in the preset operation program, uneven ground, and the like, the robot may accumulate a displacement error while walking; when the displacement error is large, the robot's subsequent work is seriously affected.
Disclosure of Invention
The invention aims to overcome the defects of the prior art, and provides a robot displacement correction method and system based on three-dimensional modeling, which can realize quick correction of displacement deviation of a robot.
In order to solve the technical problem, an embodiment of the present invention provides a robot displacement correction method based on three-dimensional modeling, where the robot displacement correction method includes:
the method comprises the steps that detection equipment receives a detection instruction for detecting robot displacement sent by a user terminal, wherein the detection instruction is generated by a user of the user terminal based on the operation of an operation interface of the user terminal;
the detection equipment responds to the detection instruction, starts a binocular camera on the detection equipment, and collects real-time images of the robot in multiple directions;
carrying out three-dimensional modeling processing according to the real-time image to obtain a three-dimensional space model image of the robot;
determining an actual position of the robot displacement based on the three-dimensional space model image;
determining the theoretical position of the robot displacement based on a displacement trajectory planning algorithm;
comparing the actual position of the robot displacement with the theoretical position of the robot displacement to obtain the position difference of the robot displacement;
and carrying out displacement correction according to the position difference of the robot displacement.
Optionally, the detection instruction is generated by a user of the user terminal through operation of the user terminal's operation interface, which includes:
the user performs identity authentication on the user terminal operation interface to confirm that the user is a legal user;
and after the user is confirmed to be a legal user, allowing the user to perform detection instruction generation operation on a user terminal operation interface to generate the detection instruction.
Optionally, the communication between the user terminal and the detection device is based on wireless network communication, wired network communication or zigbee communication.
Optionally, the step of responding to the detection instruction by the detection device includes:
after receiving the detection instruction, the detection equipment analyzes the detection instruction to obtain a physical address of a user terminal sending the detection instruction and identity information of a user;
the detection equipment judges whether the detection instruction is legal or not according to the physical address of the user terminal and the identity information of the user;
if the detection instruction is judged to be illegal, feeding back a judgment result to the user terminal;
and if the detection instruction is judged to be legal, responding to the detection instruction.
Optionally, the determining, by the detecting device, whether the detection instruction is legal according to the physical address of the user terminal and the identity information of the user includes:
judging whether the physical address of the user terminal is a physical address prestored by the detection equipment or not;
if not, the detection instruction is illegal;
and if so, matching the identity information of the user with a user permission set prestored in the detection equipment, and matching and judging whether the user identity information has the permission to send the detection instruction.
Optionally, the performing three-dimensional modeling processing according to the real-time image includes:
acquiring a real-time image of the robot by using the binocular camera to construct a disparity map;
carrying out graying processing and wavelet denoising processing on the disparity map in sequence to obtain a processed disparity map;
determining the spatial layout of the robot in a single direction according to the disparity map in the single direction;
and splicing the disparity maps in multiple directions based on an image splicing algorithm to construct a three-dimensional space model image of the robot.
Optionally, the performing displacement correction according to the position difference of the robot displacement includes:
and guiding the position difference into a control device at the robot end, and carrying out displacement correction by the control device based on the position difference.
In addition, an embodiment of the present invention further provides a robot displacement correction system based on three-dimensional modeling, where the robot displacement correction system includes:
an instruction receiving module: the detection device is used for receiving a detection instruction for detecting the displacement of the robot, which is sent by a user terminal, wherein the detection instruction is generated by a user of the user terminal based on the operation of an operation interface of the user terminal;
the instruction response module: the detection equipment responds to the detection instruction, starts a binocular camera on the detection equipment, and collects real-time images of the robot in multiple directions;
a three-dimensional modeling module: the system is used for carrying out three-dimensional modeling processing according to the real-time image to obtain a three-dimensional space model image of the robot;
an actual position determination module: for determining an actual position of the robot displacement based on the three-dimensional space model image;
a theoretical position determination module: the robot displacement theoretical position is determined based on a displacement trajectory planning algorithm;
a comparison module: the system comprises a robot displacement sensor, a controller and a controller, wherein the robot displacement sensor is used for comparing the actual position of the robot displacement with the theoretical position of the robot displacement to obtain the position difference of the robot displacement;
a correction module: and the displacement correction is carried out according to the position difference of the robot displacement.
In the embodiment of the invention, corresponding instructions are sent on a user terminal to control a detection device to start a binocular camera to collect real-time images in multiple directions of a robot, three-dimensional modeling is carried out according to the real-time images so as to determine the actual position of the displacement of the robot, the theoretical position of the displacement of the robot is determined through a displacement trajectory planning algorithm, then the position difference of the displacement of the robot is determined, and the position difference correction is carried out; the real-time images are shot through the binocular camera, the disparity map of the real-time images can be utilized, the three-dimensional map of the robot is constructed quickly and accurately, the actual position condition of the displacement of the robot is determined quickly, and therefore the position difference is obtained and quick displacement correction is carried out.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flow chart of a robot displacement correction method based on three-dimensional modeling in an embodiment of the present invention;
fig. 2 is a schematic structural composition diagram of a robot displacement correction system based on three-dimensional modeling in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Embodiment:
referring to fig. 1, fig. 1 is a schematic flowchart of a robot displacement correction method based on three-dimensional modeling according to an embodiment of the present invention.
As shown in fig. 1, a robot displacement correction method based on three-dimensional modeling includes:
s11: the method comprises the steps that detection equipment receives a detection instruction for detecting robot displacement sent by a user terminal, wherein the detection instruction is generated by a user of the user terminal based on the operation of an operation interface of the user terminal;
in the specific implementation process of the present invention, the detection instruction is generated by a user terminal user based on a user terminal operation interface operation, and includes: the user performs identity authentication on the user terminal operation interface to confirm that the user is a legal user; and after the user is confirmed to be a legal user, allowing the user to perform detection instruction generation operation on a user terminal operation interface to generate the detection instruction. The communication between the user terminal and the detection device is based on wireless network communication, wired network communication or zigbee communication.
Specifically, the detection device and the user terminal may be connected through a wireless network, a wired network, zigbee communication, bluetooth communication, or other communication manners; the user terminal may include a mobile phone, a personal PC, a tablet computer, or another operable smart terminal device.
When a user performs the detection instruction generation operation on the operation interface of the user terminal, the interface first requires the user to complete identity authentication. Authentication may be performed by entering an account and password, by face recognition, or by live fingerprint recognition. Authenticating the user's identity in these ways confirms that the user is a legitimate operator, safeguards operational security, and lets the user control the detection device safely.
After the user is confirmed to be a legal user, the user confirmed to be the legal user is allowed to perform corresponding operations on the operation interface on the user terminal, for example, a detection instruction generation operation is performed on the operation interface on the user terminal, and a detection instruction corresponding to the operation is generated and sent to the detection device through a wireless network, a wired network, zigbee communication, bluetooth communication and the like.
S12: the detection equipment responds to the detection instruction, starts a binocular camera on the detection equipment, and collects real-time images of the robot in multiple directions;
in a specific implementation process of the present invention, the step of responding to the detection instruction by the detection device includes: after receiving the detection instruction, the detection equipment analyzes the detection instruction to obtain a physical address of a user terminal sending the detection instruction and identity information of a user; the detection equipment judges whether the detection instruction is legal or not according to the physical address of the user terminal and the identity information of the user; if the detection instruction is judged to be illegal, feeding back a judgment result to the user terminal; and if the detection instruction is judged to be legal, responding to the detection instruction.
Wherein, the detecting device judges whether the detecting instruction is legal according to the physical address of the user terminal and the identity information of the user, including: judging whether the physical address of the user terminal is a physical address prestored by the detection equipment or not; if not, the detection instruction is illegal; and if so, matching the identity information of the user with a user permission set prestored in the detection equipment, and matching and judging whether the user identity information has the permission to send the detection instruction.
Specifically, after receiving a detection instruction from a user terminal, the detection device first parses it. The instruction carries the physical address of the user terminal, the identity information of the user, and the instruction information proper, all of which are obtained by parsing. The detection device then judges the legality of the instruction from the terminal's physical address and the user's identity information: if the instruction is judged illegal, the judgment result is fed back to the user terminal along the original path; if it is judged legal, the device responds to it and performs the related operations. The legality judgment proceeds as follows. First, check whether the physical address of the user terminal is one pre-stored on the detection device; if not, the instruction is judged illegal. If it is, match the user's identity information against the user permission set pre-stored on the detection device and check whether that identity appears in the permission set corresponding to the detection instruction; if not, the instruction is illegal, and if so, it is legal.
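For concreteness, the legality check just described can be sketched as follows. This is a minimal illustration, assuming the detection device pre-stores a whitelist of terminal physical addresses and a per-user permission set; all names and values here are hypothetical, not taken from the patent.

```python
# Minimal sketch of the legality check, assuming a pre-stored address
# whitelist and permission sets; names and values are hypothetical.
ALLOWED_MACS = {"3C:2E:FF:01:AB:CD"}                    # pre-stored terminal physical addresses
USER_PERMISSIONS = {"user01": {"detect_displacement"}}  # pre-stored user permission sets

def is_instruction_legal(mac: str, user_id: str, instruction: str) -> bool:
    """Judge a parsed detection instruction: address check first, then permission match."""
    if mac not in ALLOWED_MACS:        # not a pre-stored physical address -> illegal
        return False
    # match the user identity against the stored permission set
    return instruction in USER_PERMISSIONS.get(user_id, set())

print(is_instruction_legal("3C:2E:FF:01:AB:CD", "user01", "detect_displacement"))  # True
```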
After the detection instruction is judged to be legal, the detection equipment responds to the detection instruction, starts a binocular camera on the equipment according to the detection instruction, and acquires real-time images of the robot from multiple directions by using the binocular camera.
Here, acquiring real-time images of the robot from multiple directions using the binocular camera includes: with the robot as the center, the detection device moves in a circle around the robot and acquires real-time image information of the robot at 0 degrees, 90 degrees, 180 degrees, and 270 degrees on that circle; in a specific implementation, real-time images at other angles may be acquired as the situation requires.
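A short sketch of this four-direction capture loop is given below, assuming an OpenCV stereo rig; move_to() is a hypothetical stand-in for whatever motion interface drives the detection device around the robot.

```python
import cv2

ANGLES_DEG = [0, 90, 180, 270]  # capture bearings around the robot

def capture_views(move_to, left_idx=0, right_idx=1):
    """Collect one stereo pair of the robot at each bearing on the circle."""
    left, right = cv2.VideoCapture(left_idx), cv2.VideoCapture(right_idx)
    views = {}
    for angle in ANGLES_DEG:
        move_to(angle)                  # drive the device to this bearing (hypothetical API)
        ok_l, img_l = left.read()
        ok_r, img_r = right.read()
        if ok_l and ok_r:
            views[angle] = (img_l, img_r)
    left.release()
    right.release()
    return views
```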
S13: carrying out three-dimensional modeling processing according to the real-time image to obtain a three-dimensional space model image of the robot;
in a specific implementation process of the present invention, the performing three-dimensional modeling processing according to the real-time image includes: acquiring a real-time image of the robot by using the binocular camera to construct a disparity map; carrying out graying processing and wavelet denoising processing on the disparity map in sequence to obtain a processed disparity map; determining the spatial layout of the robot in a single direction according to the disparity map in the single direction; and splicing the disparity maps in multiple directions based on an image splicing algorithm to construct a three-dimensional space model image of the robot.
In a specific embodiment, the imaging of a single camera in the binocular vision system is described by the pinhole-camera mathematical model: the projected position q of any point Q is the intersection of the image plane with the line joining the optical center and Q, where Q is a physical-world point with coordinates (X, Y, Z). The projection is the point (x, y, f), as shown in the following equations:

$$x = f_x \frac{X}{Z} + c_x, \qquad y = f_y \frac{Y}{Z} + c_y$$

where $c_x$ and $c_y$ are the offsets of the imaging chip's center from the optical axis, and $f_x$ and $f_y$ are the products of the lens's physical focal length with the imager cell sizes $s_x$ and $s_y$. Written in matrix form:

$$q = MQ, \qquad q = \begin{bmatrix} x \\ y \\ w \end{bmatrix}, \quad M = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}, \quad Q = \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}$$
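As a quick numeric check of the projection q = MQ, the following sketch uses made-up intrinsics; the fx, fy, cx, cy values are illustrative only.

```python
import numpy as np

fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0   # illustrative intrinsics
M = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

Q = np.array([0.2, -0.1, 2.0])     # physical-world point (X, Y, Z), metres
q = M @ Q                          # homogeneous image point
x, y = q[0] / q[2], q[1] / q[2]    # divide by Z: x = fx*X/Z + cx, y = fy*Y/Z + cy
print(x, y)                        # -> 400.0 200.0
```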
The matrix M is called the camera's intrinsic parameter matrix; the lens distortion vector can be solved at the same time during camera calibration and used to correct lens distortion. Stereo calibration is the process of computing the geometric relationship between the two cameras in space, i.e., finding the rotation matrix R and translation matrix T between them. A black-and-white chessboard image is used as the calibration target: during calibration the chessboard is translated and rotated in front of the cameras, the corner positions on the chessboard are extracted at different angles, and the rotation matrix R and translation matrix T between the stereo images are obtained. Stereo rectification is then performed with a suitable algorithm, for example the Bouguet algorithm; its purpose is to place the corresponding matching points of the images captured by the two vision sensors on the same pixel rows of the two images, so that the matching search is confined to a single pixel row.
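Under the assumption that OpenCV is used, the chessboard calibration and Bouguet rectification described above might be sketched as follows; the pattern size is an illustrative choice, and left_imgs/right_imgs are assumed paired chessboard photos.

```python
import cv2
import numpy as np

def stereo_calibrate(left_imgs, right_imgs, pattern=(9, 6)):
    """Chessboard stereo calibration followed by Bouguet rectification."""
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_pts, l_pts, r_pts = [], [], []
    for img_l, img_r in zip(left_imgs, right_imgs):
        g_l = cv2.cvtColor(img_l, cv2.COLOR_BGR2GRAY)
        g_r = cv2.cvtColor(img_r, cv2.COLOR_BGR2GRAY)
        ok_l, c_l = cv2.findChessboardCorners(g_l, pattern)
        ok_r, c_r = cv2.findChessboardCorners(g_r, pattern)
        if ok_l and ok_r:               # keep views where both cameras see the board
            obj_pts.append(objp)
            l_pts.append(c_l)
            r_pts.append(c_r)

    size = (left_imgs[0].shape[1], left_imgs[0].shape[0])   # (width, height)
    _, M1, d1, _, _ = cv2.calibrateCamera(obj_pts, l_pts, size, None, None)
    _, M2, d2, _, _ = cv2.calibrateCamera(obj_pts, r_pts, size, None, None)
    # R, T: rotation and translation between the two cameras
    _, M1, d1, M2, d2, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, l_pts, r_pts, M1, d1, M2, d2, size, flags=cv2.CALIB_FIX_INTRINSIC)
    # Bouguet rectification: corresponding points land on the same pixel rows
    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(M1, d1, M2, d2, size, R, T)
    return M1, d1, M2, d2, R, T, (R1, R2, P1, P2, Q)
```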
Image preprocessing is needed before generating the disparity map so that a clearer disparity map can be produced. After extensive testing, the Gaussian filtering algorithm proved effective: the image texture is noticeably enhanced after Gaussian filtering. Those skilled in the art will appreciate that the use of other preprocessing algorithms to generate a better disparity map is not excluded.
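A minimal sketch of this preprocessing, assuming OpenCV; the kernel size and sigma are illustrative choices, not values from the patent.

```python
import cv2

def preprocess(img):
    """Grayscale conversion followed by Gaussian filtering before matching."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Gaussian filtering suppresses sensor noise; the text reports this
    # yields a noticeably better disparity map.
    return cv2.GaussianBlur(gray, (5, 5), 1.0)
```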
The origin of the ideal binocular stereo coordinate system is the projection center of the left camera; the X axis points from the origin to the projection center of the right camera, the Z axis is perpendicular to the camera imaging plane and points forward, and the Y axis is perpendicular to the X-Z plane and points downward.
Stereo matching is then performed on the rectified images to generate a disparity map; for example, a regional gray-scale correlation method is selected for stereo matching.
For example, a similarity detection factor is selected: the sum of absolute differences (SAD) of pixel gray values over a matching window W, as shown below:

$$\mathrm{SAD}(x, y, d) = \sum_{(i, j) \in W} \left| I_l(x + i, y + j) - I_r(x + i + d, y + j) \right|$$

where $I_l(x, y)$ and $I_r(x + d, y)$ are the pixel gray values of the left and right images, respectively, and $d$ is the candidate disparity. After Gaussian filtering, matching yields the disparity map, in which each value represents a distance in front of the camera: a larger disparity indicates a closer distance, and regions with larger gray values appear brighter, indicating a smaller relative distance to the camera.
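Assuming OpenCV and PyWavelets, the block matching and the graying-plus-wavelet-denoising steps might look like the sketch below. StereoBM is a block-matching stand-in for the regional gray-scale correlation method; numDisparities, blockSize, the wavelet, and the threshold rule are all illustrative assumptions.

```python
import cv2
import numpy as np
import pywt

def disparity_map(gray_l, gray_r):
    """SAD-style block matching on the rectified gray pair."""
    bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    return bm.compute(gray_l, gray_r).astype(np.float32) / 16.0  # fixed-point -> pixels

def wavelet_denoise(disp, wavelet="db2", level=2):
    """Soft-threshold wavelet filtering of the disparity map."""
    coeffs = pywt.wavedec2(disp, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745       # noise estimate, finest diagonal band
    thr = sigma * np.sqrt(2.0 * np.log(disp.size))           # universal threshold
    den = [coeffs[0]] + [tuple(pywt.threshold(c, thr, mode="soft") for c in lvl)
                         for lvl in coeffs[1:]]
    return pywt.waverec2(den, wavelet)
```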
After the graying processing, the disparity map is wavelet-filtered to reduce its noise; the single-direction disparity maps are then combined into a three-dimensional space model image of the robot through a stitching algorithm, here an image stitching algorithm based on the Fourier transform. For example, to stitch the robot's three-dimensional space images from two adjacent directions, a two-dimensional discrete Fourier transform is applied to the two digital images to be stitched; denoting the transform results by $X(\mu, \nu)$ and $Y(\mu, \nu)$, their frequency-domain correlation is

$$Z(\mu, \nu) = X(\mu, \nu)\,Y^{*}(\mu, \nu)$$

and applying the inverse Fourier transform to $Z(\mu, \nu)$ yields the spatial-domain correlation function

$$z(m, n) = \mathcal{F}^{-1}\left\{ X(\mu, \nu)\,Y^{*}(\mu, \nu) \right\}$$
by computing the spatial domain's notional correlation function, the optimal image registration position can be found. For example, in image registration, discrete fourier transforms X (μ, ν) and Y (μ, ν) of two images to be stitched have mutual power spectra of:
$$S(\mu, \nu) = X(\mu, \nu)\,Y^{*}(\mu, \nu)$$
Normalization yields the phase spectrum of the cross-power spectrum:

$$\hat{S}(\mu, \nu) = \frac{X(\mu, \nu)\,Y^{*}(\mu, \nu)}{\left|X(\mu, \nu)\,Y^{*}(\mu, \nu)\right|} = e^{\,j\,(Q_X - Q_Y)}$$

where $Q_X$ and $Q_Y$ denote the phases of the Fourier transforms of the two images to be stitched. The formula above shows that the inverse transform of the phase spectrum is a delta pulse located at the offset of the two images to be stitched; from it the similarity used for stitching the two images can be computed, after which the two images to be stitched are processed in a polar coordinate system.
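The registration just described is ordinary phase correlation, which can be sketched with NumPy alone; the peak of the inverse-transformed phase spectrum gives the translation between the two overlapping images.

```python
import numpy as np

def phase_correlation_offset(img_a, img_b):
    """Estimate the (dy, dx) offset between two equally sized gray images."""
    X = np.fft.fft2(img_a)
    Y = np.fft.fft2(img_b)
    cross = X * np.conj(Y)                   # cross-power spectrum S(u, v)
    cross /= np.abs(cross) + 1e-12           # normalize -> phase spectrum
    corr = np.fft.ifft2(cross).real          # delta pulse at the image offset
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dy, dx                            # offsets wrap around for negative shifts
```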
A binocular vision system thus determines the disparity map within its field of view; based on the single-direction spatial layout features of the disparity maps, the single-direction layout structures are integrated into a complete spatial structure using stitching algorithms, data fusion algorithms, and the like, yielding the three-dimensional space model of the robot.
S14: determining an actual position of the robot displacement based on the three-dimensional space model image;
in the implementation process of the invention, after the three-dimensional space model image of the robot is obtained, the actual position of the displacement of the robot is determined based on the three-dimensional space model image.
S15: determining the theoretical position of the robot displacement based on a displacement trajectory planning algorithm;
In the specific implementation of the invention, a displacement trajectory planning algorithm is embedded in the robot; the robot's trajectory motion is planned through this algorithm, and the theoretical position of the robot's displacement can therefore be determined from the displacement trajectory planning algorithm.
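The patent does not fix a particular planning algorithm, so the sketch below assumes the simplest case: a piecewise-linear plan of waypoints traversed at constant speed, from which the theoretical position at time t is read off. The waypoints and speed are illustrative values.

```python
import numpy as np

WAYPOINTS = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])  # planned path, metres
SPEED = 0.2                                                  # planned speed, m/s

def theoretical_position(t: float) -> np.ndarray:
    """Position the trajectory plan expects the robot to occupy at time t."""
    dist = SPEED * t
    for a, b in zip(WAYPOINTS[:-1], WAYPOINTS[1:]):
        seg = np.linalg.norm(b - a)
        if dist <= seg:
            return a + (b - a) * (dist / seg)   # interpolate along this segment
        dist -= seg
    return WAYPOINTS[-1]                        # plan finished: hold the last waypoint
```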
S16: comparing the actual position of the robot displacement with the theoretical position of the robot displacement to obtain the position difference of the robot displacement;
in the specific implementation process of the invention, after the actual position of the robot displacement and the theoretical position of the robot displacement are obtained, the actual position and the theoretical position are compared and matched in the embodiment of the invention, so that the position difference of the robot displacement is obtained.
S17: and carrying out displacement correction according to the position difference of the robot displacement.
In a specific implementation process of the present invention, the performing displacement correction according to the position difference of the robot displacement includes: and guiding the position difference into a control device at the robot end, and carrying out displacement correction by the control device based on the position difference.
Specifically, after the position difference of the robot displacement is obtained, the position difference is introduced into a control device of the robot, and the control device performs corresponding adjustment according to the position difference, so as to correct the displacement.
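Putting the last two steps together, a minimal sketch of the comparison and correction might look like this; send_correction() is a hypothetical stand-in for the robot-side control device interface, and the tolerance is an illustrative choice.

```python
import numpy as np

def correct_displacement(actual, theoretical, send_correction, tol=0.01):
    """Compare actual vs. theoretical position; push the difference to the controller."""
    diff = np.asarray(theoretical, float) - np.asarray(actual, float)  # position difference
    if np.linalg.norm(diff) > tol:     # 1 cm tolerance is an illustrative assumption
        send_correction(diff)          # control device adjusts the robot by diff
    return diff
```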
In the embodiment of the invention, corresponding instructions are sent on a user terminal to control a detection device to start a binocular camera to collect real-time images in multiple directions of a robot, three-dimensional modeling is carried out according to the real-time images so as to determine the actual position of the displacement of the robot, the theoretical position of the displacement of the robot is determined through a displacement trajectory planning algorithm, then the position difference of the displacement of the robot is determined, and the position difference correction is carried out; the real-time images are shot through the binocular camera, the disparity map of the real-time images can be utilized, the three-dimensional map of the robot is constructed quickly and accurately, the actual position condition of the displacement of the robot is determined quickly, and therefore the position difference is obtained and quick displacement correction is carried out.
Embodiment:
referring to fig. 2, fig. 2 is a schematic structural composition diagram of a robot displacement correction system based on three-dimensional modeling according to an embodiment of the present invention.
As shown in fig. 2, a robot displacement correcting system based on three-dimensional modeling includes:
the instruction receiving module 11: the detection device is used for receiving a detection instruction for detecting the displacement of the robot, which is sent by a user terminal, wherein the detection instruction is generated by a user of the user terminal based on the operation of an operation interface of the user terminal;
in the specific implementation process of the present invention, the detection instruction is generated by a user terminal user based on a user terminal operation interface operation, and includes: the user performs identity authentication on the user terminal operation interface to confirm that the user is a legal user; and after the user is confirmed to be a legal user, allowing the user to perform detection instruction generation operation on a user terminal operation interface to generate the detection instruction. The communication between the user terminal and the detection device is based on wireless network communication, wired network communication or zigbee communication.
Specifically, the detection device and the user terminal may be connected through a wireless network, a wired network, zigbee communication, bluetooth communication, or other communication manners; the user terminal may include a mobile phone, a personal PC, a tablet computer, or another operable smart terminal device.
When a user performs the detection instruction generation operation on the operation interface of the user terminal, the interface first requires the user to complete identity authentication. Authentication may be performed by entering an account and password, by face recognition, or by live fingerprint recognition. Authenticating the user's identity in these ways confirms that the user is a legitimate operator, safeguards operational security, and lets the user control the detection device safely.
After the user is confirmed to be a legal user, the user confirmed to be the legal user is allowed to perform corresponding operations on the operation interface on the user terminal, for example, a detection instruction generation operation is performed on the operation interface on the user terminal, and a detection instruction corresponding to the operation is generated and sent to the detection device through a wireless network, a wired network, zigbee communication, bluetooth communication and the like.
The instruction response module 12: the detection equipment responds to the detection instruction, starts a binocular camera on the detection equipment, and collects real-time images of the robot in multiple directions;
in a specific implementation process of the present invention, the step of responding to the detection instruction by the detection device includes: after receiving the detection instruction, the detection equipment analyzes the detection instruction to obtain a physical address of a user terminal sending the detection instruction and identity information of a user; the detection equipment judges whether the detection instruction is legal or not according to the physical address of the user terminal and the identity information of the user; if the detection instruction is judged to be illegal, feeding back a judgment result to the user terminal; and if the detection instruction is judged to be legal, responding to the detection instruction.
Wherein, the detecting device judges whether the detecting instruction is legal according to the physical address of the user terminal and the identity information of the user, including: judging whether the physical address of the user terminal is a physical address prestored by the detection equipment or not; if not, the detection instruction is illegal; and if so, matching the identity information of the user with a user permission set prestored in the detection equipment, and matching and judging whether the user identity information has the permission to send the detection instruction.
Specifically, after receiving a detection instruction from a user terminal, the detection device first parses it. The instruction carries the physical address of the user terminal, the identity information of the user, and the instruction information proper, all of which are obtained by parsing. The detection device then judges the legality of the instruction from the terminal's physical address and the user's identity information: if the instruction is judged illegal, the judgment result is fed back to the user terminal along the original path; if it is judged legal, the device responds to it and performs the related operations. The legality judgment proceeds as follows. First, check whether the physical address of the user terminal is one pre-stored on the detection device; if not, the instruction is judged illegal. If it is, match the user's identity information against the user permission set pre-stored on the detection device and check whether that identity appears in the permission set corresponding to the detection instruction; if not, the instruction is illegal, and if so, it is legal.
After the detection instruction is judged to be legal, the detection equipment responds to the detection instruction, starts a binocular camera on the equipment according to the detection instruction, and acquires real-time images of the robot from multiple directions by using the binocular camera.
Here, acquiring real-time images of the robot from multiple directions using the binocular camera includes: with the robot as the center, the detection device moves in a circle around the robot and acquires real-time image information of the robot at 0 degrees, 90 degrees, 180 degrees, and 270 degrees on that circle; in a specific implementation, real-time images at other angles may be acquired as the situation requires.
The three-dimensional modeling module 13: the system is used for carrying out three-dimensional modeling processing according to the real-time image to obtain a three-dimensional space model image of the robot;
in a specific implementation process of the present invention, the performing three-dimensional modeling processing according to the real-time image includes: acquiring a real-time image of the robot by using the binocular camera to construct a disparity map; carrying out graying processing and wavelet denoising processing on the disparity map in sequence to obtain a processed disparity map; determining the spatial layout of the robot in a single direction according to the disparity map in the single direction; and splicing the disparity maps in multiple directions based on an image splicing algorithm to construct a three-dimensional space model image of the robot.
In a specific embodiment, the imaging of a single camera in the binocular vision system is described by a pinhole camera mathematical model, i.e. the projected position Q of any point Q in the image is the intersection of the line connecting the optical center and point Q and the image plane, and the point Q in the physical world has coordinates (X, Y, Z). The projection is a point (x, y, f), as shown in the following equation:
Figure BDA0001790215980000121
in the formula, cxAnd cyIs the offset of the center of the imaging chip from the optical axis; f. ofxAnd fyIs the physical focal length of the lens and the size S of each unit of the imagerxAnd SyThe product of (a). Then write the matrix as:
q=MQ;
wherein
Figure BDA0001790215980000122
The matrix M is called the camera's intrinsic parameter matrix; the lens distortion vector can be solved at the same time during camera calibration and used to correct lens distortion. Stereo calibration is the process of computing the geometric relationship between the two cameras in space, i.e., finding the rotation matrix R and translation matrix T between them. A black-and-white chessboard image is used as the calibration target: during calibration the chessboard is translated and rotated in front of the cameras, the corner positions on the chessboard are extracted at different angles, and the rotation matrix R and translation matrix T between the stereo images are obtained. Stereo rectification is then performed with a suitable algorithm, for example the Bouguet algorithm; its purpose is to place the corresponding matching points of the images captured by the two vision sensors on the same pixel rows of the two images, so that the matching search is confined to a single pixel row.
Image preprocessing is needed before generating the disparity map so that a clearer disparity map can be produced. After extensive testing, the Gaussian filtering algorithm proved effective: the image texture is noticeably enhanced after Gaussian filtering. Those skilled in the art will appreciate that the use of other preprocessing algorithms to generate a better disparity map is not excluded.
The origin of the ideal binocular stereo coordinate system is the projection center of the left camera; the X axis points from the origin to the projection center of the right camera, the Z axis is perpendicular to the camera imaging plane and points forward, and the Y axis is perpendicular to the X-Z plane and points downward.
Stereo matching is then performed on the rectified images to generate a disparity map; for example, a regional gray-scale correlation method is selected for stereo matching.
For example, a similarity detection factor is selected: the sum of absolute differences (SAD) of pixel gray values over a matching window W, as shown below:

$$\mathrm{SAD}(x, y, d) = \sum_{(i, j) \in W} \left| I_l(x + i, y + j) - I_r(x + i + d, y + j) \right|$$

where $I_l(x, y)$ and $I_r(x + d, y)$ are the pixel gray values of the left and right images, respectively, and $d$ is the candidate disparity. After Gaussian filtering, matching yields the disparity map, in which each value represents a distance in front of the camera: a larger disparity indicates a closer distance, and regions with larger gray values appear brighter, indicating a smaller relative distance to the camera.
After the graying processing, the disparity map is wavelet-filtered to reduce its noise; the single-direction disparity maps are then combined into a three-dimensional space model image of the robot through a stitching algorithm, here an image stitching algorithm based on the Fourier transform. For example, to stitch the robot's three-dimensional space images from two adjacent directions, a two-dimensional discrete Fourier transform is applied to the two digital images to be stitched; denoting the transform results by $X(\mu, \nu)$ and $Y(\mu, \nu)$, their frequency-domain correlation is

$$Z(\mu, \nu) = X(\mu, \nu)\,Y^{*}(\mu, \nu)$$

and applying the inverse Fourier transform to $Z(\mu, \nu)$ yields the spatial-domain correlation function

$$z(m, n) = \mathcal{F}^{-1}\left\{ X(\mu, \nu)\,Y^{*}(\mu, \nu) \right\}$$
By computing this spatial-domain correlation function, the optimal image registration position can be found. For example, in image registration, the discrete Fourier transforms $X(\mu, \nu)$ and $Y(\mu, \nu)$ of the two images to be stitched have the cross-power spectrum:
$$S(\mu, \nu) = X(\mu, \nu)\,Y^{*}(\mu, \nu)$$
Normalization yields the phase spectrum of the cross-power spectrum:

$$\hat{S}(\mu, \nu) = \frac{X(\mu, \nu)\,Y^{*}(\mu, \nu)}{\left|X(\mu, \nu)\,Y^{*}(\mu, \nu)\right|} = e^{\,j\,(Q_X - Q_Y)}$$

where $Q_X$ and $Q_Y$ denote the phases of the Fourier transforms of the two images to be stitched. The formula above shows that the inverse transform of the phase spectrum is a delta pulse located at the offset of the two images to be stitched; from it the similarity used for stitching the two images can be computed, after which the two images to be stitched are processed in a polar coordinate system.
A binocular vision system thus determines the disparity map within its field of view; based on the single-direction spatial layout features of the disparity maps, the single-direction layout structures are integrated into a complete spatial structure using stitching algorithms, data fusion algorithms, and the like, yielding the three-dimensional space model of the robot.
The actual position determination module 14: for determining an actual position of the robot displacement based on the three-dimensional space model image;
in the implementation process of the invention, after the three-dimensional space model image of the robot is obtained, the actual position of the displacement of the robot is determined based on the three-dimensional space model image.
Theoretical position determination module 15: the robot displacement theoretical position is determined based on a displacement trajectory planning algorithm;
In the specific implementation of the invention, a displacement trajectory planning algorithm is embedded in the robot; the robot's trajectory motion is planned through this algorithm, and the theoretical position of the robot's displacement can therefore be determined from the displacement trajectory planning algorithm.
The comparison module 16: the system comprises a robot displacement sensor, a controller and a controller, wherein the robot displacement sensor is used for comparing the actual position of the robot displacement with the theoretical position of the robot displacement to obtain the position difference of the robot displacement;
in the specific implementation process of the invention, after the actual position of the robot displacement and the theoretical position of the robot displacement are obtained, the actual position and the theoretical position are compared and matched in the embodiment of the invention, so that the position difference of the robot displacement is obtained.
The correction module 17: and the displacement correction is carried out according to the position difference of the robot displacement.
In a specific implementation process of the present invention, the performing displacement correction according to the position difference of the robot displacement includes: and guiding the position difference into a control device at the robot end, and carrying out displacement correction by the control device based on the position difference.
Specifically, after the position difference of the robot displacement is obtained, the position difference is introduced into a control device of the robot, and the control device performs corresponding adjustment according to the position difference, so as to correct the displacement.
In the embodiment of the invention, corresponding instructions are sent on a user terminal to control a detection device to start a binocular camera to collect real-time images in multiple directions of a robot, three-dimensional modeling is carried out according to the real-time images so as to determine the actual position of the displacement of the robot, the theoretical position of the displacement of the robot is determined through a displacement trajectory planning algorithm, then the position difference of the displacement of the robot is determined, and the position difference correction is carried out; the real-time images are shot through the binocular camera, the disparity map of the real-time images can be utilized, the three-dimensional map of the robot is constructed quickly and accurately, the actual position condition of the displacement of the robot is determined quickly, and therefore the position difference is obtained and quick displacement correction is carried out.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
In addition, the robot displacement correction method and system based on three-dimensional modeling provided by the embodiment of the present invention are described in detail, and a specific embodiment is adopted herein to explain the principle and the implementation of the present invention, and the description of the embodiment is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (7)

1. A robot displacement correction method based on three-dimensional modeling is characterized by comprising the following steps:
the method comprises the steps that detection equipment receives a detection instruction for detecting robot displacement sent by a user terminal, wherein the detection instruction is generated by a user of the user terminal based on the operation of an operation interface of the user terminal;
the detection equipment responds to the detection instruction, starts a binocular camera on the detection equipment, and collects real-time images of the robot in multiple directions;
carrying out three-dimensional modeling processing according to the real-time image to obtain a three-dimensional space model image of the robot;
determining an actual position of the robot displacement based on the three-dimensional space model image;
determining the theoretical position of the robot displacement based on a displacement trajectory planning algorithm;
comparing the actual position of the robot displacement with the theoretical position of the robot displacement to obtain the position difference of the robot displacement;
performing displacement correction according to the position difference of the robot displacement;
the three-dimensional modeling processing according to the real-time image comprises the following steps:
acquiring a real-time image of the robot by using the binocular camera to construct a disparity map;
carrying out graying processing and wavelet denoising processing on the disparity map in sequence to obtain a processed disparity map;
determining the spatial layout of the robot in a single direction according to the disparity map in the single direction;
splicing the disparity maps in multiple directions based on an image splicing algorithm to construct a three-dimensional space model image of the robot;
after graying processing, the parallax image is subjected to wavelet filtering processing, and the noise of the parallax image is reduced; constructing a three-dimensional space model image of the robot through a stitching algorithm for the parallax image in the single direction, wherein the stitching algorithm is an image stitching algorithm based on Fourier transform;
the method for determining the theoretical position of the robot displacement based on the displacement trajectory planning algorithm comprises the following steps:
the displacement trajectory planning algorithm is embedded in the robot, the trajectory motion of the robot is planned through the displacement trajectory planning algorithm, and the theoretical position of the robot displacement is determined based on the displacement trajectory planning algorithm.
2. The method of robot displacement correction according to claim 1, wherein the detection instruction is generated by a user terminal user based on a user terminal operation interface operation, and includes:
the user performs identity authentication on the user terminal operation interface to confirm that the user is a legal user;
and after the user is confirmed to be a legal user, allowing the user to perform detection instruction generation operation on a user terminal operation interface to generate the detection instruction.
3. The method of robot displacement correction according to claim 1, wherein the communication between the user terminal and the detection device is based on wireless network communication, wired network communication, or zigbee communication.
4. The method of robot displacement correction according to claim 1, wherein the detection device, in response to the detection instruction, includes:
after receiving the detection instruction, the detection equipment analyzes the detection instruction to obtain a physical address of a user terminal sending the detection instruction and identity information of a user;
the detection equipment judges whether the detection instruction is legal or not according to the physical address of the user terminal and the identity information of the user;
if the detection instruction is judged to be illegal, feeding back a judgment result to the user terminal;
and if the detection instruction is judged to be legal, responding to the detection instruction.
5. The method according to claim 4, wherein the determining, by the detection device, whether the detection instruction is legal according to the physical address of the user terminal and the identity information of the user includes:
judging whether the physical address of the user terminal is a physical address prestored by the detection equipment or not;
if not, the detection instruction is illegal;
and if so, matching the identity information of the user with a user permission set prestored in the detection equipment, and matching and judging whether the user identity information has the permission to send the detection instruction.
6. The method of robot displacement correction according to claim 1, wherein the performing of displacement correction according to the position difference of the robot displacement includes:
and guiding the position difference into a control device at the robot end, and carrying out displacement correction by the control device based on the position difference.
7. A robot displacement correction system based on three-dimensional modeling, the robot displacement correction system comprising:
an instruction receiving module: the detection device is used for receiving a detection instruction for detecting the displacement of the robot, which is sent by a user terminal, wherein the detection instruction is generated by a user of the user terminal based on the operation of an operation interface of the user terminal;
the instruction response module: the detection equipment responds to the detection instruction, starts a binocular camera on the detection equipment, and collects real-time images of the robot in multiple directions;
a three-dimensional modeling module: the system is used for carrying out three-dimensional modeling processing according to the real-time image to obtain a three-dimensional space model image of the robot;
an actual position determination module: for determining an actual position of the robot displacement based on the three-dimensional space model image;
a theoretical position determination module: the robot displacement theoretical position is determined based on a displacement trajectory planning algorithm;
a comparison module: the system comprises a robot displacement sensor, a controller and a controller, wherein the robot displacement sensor is used for comparing the actual position of the robot displacement with the theoretical position of the robot displacement to obtain the position difference of the robot displacement;
a correction module: the displacement correction is carried out according to the position difference of the robot displacement;
the three-dimensional modeling module: the binocular camera is further used for acquiring a real-time image of the robot to construct a disparity map; carrying out graying processing and wavelet denoising processing on the disparity map in sequence to obtain a processed disparity map; determining the spatial layout of the robot in a single direction according to the disparity map in the single direction; splicing the disparity maps in multiple directions based on an image splicing algorithm to construct a three-dimensional space model image of the robot;
after graying processing, the parallax image is subjected to wavelet filtering processing, and the noise of the parallax image is reduced; constructing a three-dimensional space model image of the robot through a stitching algorithm for the parallax image in the single direction, wherein the stitching algorithm is an image stitching algorithm based on Fourier transform;
the method for determining the theoretical position of the robot displacement based on the displacement trajectory planning algorithm comprises the following steps:
the displacement trajectory planning algorithm is embedded in the robot, the trajectory motion of the robot is planned through the displacement trajectory planning algorithm, and the theoretical position of the robot displacement is determined based on the displacement trajectory planning algorithm.
CN201811033086.2A 2018-09-05 2018-09-05 Robot displacement correction method and system based on three-dimensional modeling Active CN109191522B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811033086.2A CN109191522B (en) 2018-09-05 2018-09-05 Robot displacement correction method and system based on three-dimensional modeling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811033086.2A CN109191522B (en) 2018-09-05 2018-09-05 Robot displacement correction method and system based on three-dimensional modeling

Publications (2)

Publication Number Publication Date
CN109191522A CN109191522A (en) 2019-01-11
CN109191522B true CN109191522B (en) 2021-03-16

Family

ID=64914628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811033086.2A Active CN109191522B (en) 2018-09-05 2018-09-05 Robot displacement correction method and system based on three-dimensional modeling

Country Status (1)

Country Link
CN (1) CN109191522B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11529738B2 (en) * 2020-07-02 2022-12-20 NDR Medical Technology Pte. Ltd. Control system and a method for operating a robot
CN111862307A (en) * 2020-07-16 2020-10-30 广州安廷数字技术有限公司 Three-dimensional modeling system of inspection robot
CN112131980A (en) * 2020-09-10 2020-12-25 中数通信息有限公司 False position identification method, false position identification system, electronic equipment and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0236614A2 (en) * 1986-03-10 1987-09-16 Si Handling Systems, Inc. Automatic guided vehicle systems
CN102538868A (en) * 2011-12-21 2012-07-04 北京农业智能装备技术研究中心 Self-traveling robot for crop character collection
CN103926927A (en) * 2014-05-05 2014-07-16 重庆大学 Binocular vision positioning and three-dimensional mapping method for indoor mobile robot
CN105511471A (en) * 2016-01-04 2016-04-20 杭州亚美利嘉科技有限公司 Method and device of correcting robot terminal driving route deviations

Also Published As

Publication number Publication date
CN109191522A (en) 2019-01-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant