CN115493489A - Method for detecting relevant surface of measured object - Google Patents

Method for detecting relevant surface of measured object

Info

Publication number
CN115493489A
CN115493489A (application CN202210715537.0A)
Authority
CN
China
Prior art keywords
vehicle body
punching
angle
calibration
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210715537.0A
Other languages
Chinese (zh)
Inventor
潘凌锋
陈浙泊
陈一信
林建宇
陈龙威
叶雪旺
陈镇元
余建安
张一航
吴荻苇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Research Institute of Zhejiang University Taizhou
Original Assignee
Research Institute of Zhejiang University Taizhou
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Research Institute of Zhejiang University Taizhou filed Critical Research Institute of Zhejiang University Taizhou
Priority to CN202210715537.0A priority Critical patent/CN115493489A/en
Publication of CN115493489A publication Critical patent/CN115493489A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/002Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The embodiment of the invention discloses a method for detecting the relevant surface of a measured object. A vehicle body is lifted by a crane onto the flat plates of a production line fitted with a limiting device; when the vehicle body reaches the visual detection position, the limiting device installed below stops the flat plate, and the vision system receives the stop signal and presents the user with an operation choice. According to the user's selection, the vision system performs either single-station or full-station operation. Before single-station or full-station real-time positioning, the preparation flows for positioning and punching each station must be completed, namely the distortion calibration flow, the N-point calibration flow, the magnification acquisition flow, and the template manufacturing flow. In the single-station and full-station real-time positioning flows, the positions of the laser sensors are adjusted toward the positions recorded during template manufacturing, based on the feature point positions obtained in template manufacturing, so that accurate laser sensor readings are obtained.

Description

Method for detecting relevant surface of measured object
Technical Field
The invention belongs to the technical field of automatic driving automobile production, and particularly relates to a method for detecting a related surface of a detected object.
Background
An automatic driving automobile is an intelligent vehicle that achieves unmanned driving through a computer system. It relies on the cooperation of artificial intelligence, visual computing, radar, monitoring devices, and a global positioning system, so that the computer can operate the vehicle automatically and safely without active human operation.
With the rise of automatic driving technology, more and more enterprises are investing in its research and development. Technologies such as artificial intelligence, visual monitoring and computation, and radar are now mature, and an automatic driving system is realized by integrating them effectively. Installing such a system in a manually driven automobile converts it into an automatic driving automobile which, after long-term testing, can drive safely and reliably on learned lanes.
The automatic driving system comprises hardware such as radars and a vision system, which current vehicle conversion factories install after manually punching holes at designated mounting positions on the body of the manually driven automobile. Manual punching suffers from low precision, low efficiency, and poor repeatability.
Disclosure of Invention
In view of these technical problems, the invention provides a method for detecting the relevant surface of a measured object, which detects each surface of the vehicle body and thereby achieves high-precision positioning and punching at the body punching positions of vehicles converted for automatic driving.
In order to solve the technical problems, the invention adopts the following technical scheme:
a method for detecting the relative surface of an object to be detected includes conveying a vehicle body to a flat plate of an assembly line, conveying the vehicle body to a flat plate assembly line with a limiting device in sequence through a crane, controlling the flat plate to stop by the limiting device installed below when the vehicle body runs to a visual detection position through the assembly line, acquiring a stop signal by a visual system, selecting user operation, carrying out single-station operation or full-station operation by the visual system according to the user selection, carrying out preparation flows of positioning and punching each station before the single-station and full-station real-time positioning operation, respectively carrying out distortion calibration flow, N point calibration flow, amplification rate acquisition flow and template manufacturing flow, carrying out acquisition and processing of measured values of a laser sensor by using a detection component corresponding to each station in the single-station real-time positioning flow and the full-station real-time positioning flow, adjusting the position of the laser sensor to approach the position of the template when the laser sensor is manufactured, obtaining accurate values of the laser sensor values by using a detection component corresponding to the position in the single-station real-time positioning and full-time positioning flow, and then carrying out correction on the image of the corresponding to the image of the offset angle of the camera when the image of the camera is calculated by the camera, and the offset angle of the camera; the physical positions of the characteristic points are further calculated through images acquired by the cameras of the station detection components, and the position information and the punching posture of the punching point are accurately calculated according to the characteristic information of the images and the acquired three-dimensional 
deflection angle information of the vehicle body during template manufacturing and real-time positioning, so that accurate positioning punching is realized.
Preferably, the single-station manual operation process comprises distortion calibration, N-point calibration of the punching robot, magnification acquisition, template manufacturing, and real-time positioning, executed as follows:
if the user selects 'distortion calibration', the camera distortion calibration process is executed, yielding distortion calibration files for different angles between the feature surface and the camera face; these files are used to correct image distortion when the feature surface and the camera face form different angles;
if the user selects 'punching robot N-point calibration', the punching robot N-point calibration process is executed, yielding an N-point calibration file that converts image coordinates into physical coordinates in the punching robot's coordinate system;
if the user selects 'magnification acquisition', the magnification acquisition process is executed, yielding a first-order polynomial relating the height of the feature plane to the image magnification; with this function, image coordinates of feature planes at different heights are converted to obtain accurate physical coordinates of the feature points;
if the user selects 'template making', the template making process is executed, yielding the image-processing feature point positions and the punching point positions on the punching surface acquired by the camera, and hence the positional relation between the feature points and the punching points;
and if the user selects 'real-time positioning', the single-station real-time positioning process is executed, locating the currently selected station in real time to obtain the punching point position and control the punching robot's punching operation.
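The N-point calibration step above converts image coordinates into physical coordinates in the punching robot's frame. The patent does not give the mathematical form of the calibration file; a common choice for a planar camera-to-robot mapping is an affine transform fitted to the N corresponding point pairs. A minimal sketch under that assumption (function names are illustrative, not from the patent):

```python
import numpy as np

def fit_n_point_calibration(img_pts, robot_pts):
    """Fit a 2x3 affine transform mapping image pixels (u, v) to physical
    coordinates (x, y) in the punching robot's frame, from N >= 3
    corresponding point pairs, by linear least squares."""
    img_pts = np.asarray(img_pts, dtype=float)
    robot_pts = np.asarray(robot_pts, dtype=float)
    # Homogeneous design matrix: one row [u, v, 1] per calibration point.
    A = np.hstack([img_pts, np.ones((len(img_pts), 1))])
    M, _, _, _ = np.linalg.lstsq(A, robot_pts, rcond=None)
    return M.T  # 2x3 calibration matrix

def image_to_robot(M, u, v):
    """Convert one pixel coordinate to robot physical coordinates."""
    return M @ np.array([u, v, 1.0])
```

With more than three well-spread points, the least-squares fit also averages out small detection errors in the individual calibration points.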
Preferably, the distortion calibration process includes the following steps:
and controlling the checkerboard calibration plate to the initial position. Placing the calibration plate on an object to be detected, namely the position close to the detection height of the detection characteristic point on the automobile, and controlling the laser sensor to acquire the height information of each position after the placement is finished;
calculating the offset angle between the calibration plate and the camera surface according to the acquired height information of the laser sensor;
if the angle meets the requirement, triggering a camera to shoot and acquire an image;
carrying out distortion calibration on the collected image to generate a distortion calibration file;
and after the distortion calibration file is generated, the distortion calibration file is loaded on the image acquired by the camera for distortion correction.
Preferably, calculating the offset angle between the calibration plate and the camera face from the acquired laser sensor height information further comprises: three laser sensors are installed for the distortion calibration process. One sensor is mounted on each of the left-rear and right-rear sides of the calibration plate, namely the top-left ranging sensor and the top-right ranging sensor, and one is mounted at the right front, namely the top-front ranging sensor. The distance between the top-left and top-right ranging sensors is denoted H1, and the distance between the top-right and top-front ranging sensors is denoted H2. The heights obtained by the top-left, top-right, and top-front ranging sensors are denoted h1, h2, and h3 respectively. From these three heights, the offset angles from the camera face can be obtained: the left-right offset angle, denoted θ1, and the front-back offset angle, denoted θ2. The calculation formulas are as follows:
θ1 = arctan((h1 − h2) / H1)

θ2 = arctan((h2 − h3) / H2)
Whether the obtained θ1 and θ2 are stable within a set threshold range is judged. If so, the checkerboard calibration plate is considered to be in a stable state; otherwise it is still in motion, and whether the number of angle measurements exceeds a set value is judged. If the set value is exceeded, the distortion calibration operation ends with a prompt that the calibration plate is not stable and must be placed again; otherwise θ1 and θ2 continue to be measured;
and (3) carrying out distortion calibration on the left deflection angle, the right deflection angle and the front deflection angle at intervals of 1 degree within the range of +/-10 degrees, keeping the front deflection angle and the rear deflection angle as 0 degree when the left deflection angle and the right deflection angle are calibrated, controlling the angle deviation of the chessboard grid calibration plate from-10 degrees to 10 degrees each time to carry out distortion calibration, and carrying out the calibration process of the front deflection angle and the rear deflection angle. Judging whether the currently measured angle is in a required range according to the current calibration process, if the distortion calibration at the left deflection angle and the right deflection angle of minus 10 degrees is currently carried out, judging whether theta 1 is in a threshold range near minus 10 degrees set by a system, simultaneously ensuring that theta 2 is 0 degree, if the conditions are met, considering that the angle is in the required range, carrying out next processing, and if not, controlling the motion of the checkerboard calibration plate to adjust the checkerboard calibration plate angle so as to enable the calibration plate angle to meet the angle requirement.
Preferably, the magnification acquisition comprises measuring the magnification within ±10 cm of the standard distance h between the measured surface of the vehicle body and the laser sensor, and fitting the results to obtain a fitting function of magnification against distance.
Preferably, obtaining the magnification within ±10 cm of the standard distance h between the measured surface of the vehicle body and the laser sensor by fitting comprises the following steps: the measurement runs from h − 10 cm to h + 10 cm at 5 mm intervals. The calibration plate is first controlled to h − 10 cm, the distance from the upper laser sensor to the calibration plate is acquired, and the left-right deflection angle θ1′ and front-back deflection angle θ2′ of the calibration plate are obtained by the angle calculation method of the distortion calibration process;
whether θ1′ and θ2′ are stable is judged; if not, the calibration plate is not completely static, and whether the number of measurements exceeds a set value is judged. If it does, the magnification acquisition ends and the system waits for user operation; otherwise the distance from the laser sensor to the calibration plate continues to be acquired and the angles recalculated;
if θ1′ and θ2′ are stable, whether they are within the required range is judged. When the camera and the measured surface are exactly perpendicular, distortion correction of the acquired image removes the lens distortion, leaving no distortion anywhere in the image; therefore, to exclude angle-induced distortion from the magnification, θ1′ and θ2′ must both be approximately 0°. If this condition is met, the camera is triggered to capture and store an image; otherwise the angle of the calibration plate is adjusted and θ1′ and θ2′ are acquired again;
and judging whether the images at all the height positions are acquired completely, if so, processing all the corresponding images at all the heights to obtain the pixel spacing of the checkerboards at the same positions of the calibration plate, selecting the checkerboards in the middle of the images at the checkerboard positions, wherein the distortion of the middle positions of the images is minimum, the obtained pixel spacing error is minimum, and then obtaining the proportional relation between the pixel spacing and the physical size according to the actual physical size of the checkerboards, namely the image magnification of the calculated height. And if the images at all height positions are not acquired completely, adjusting the height of the calibration plate to acquire the image at the next height.
After the magnification at each height has been obtained, first-order polynomial fitting against the corresponding heights yields the fitting coefficients k and b, i.e. m = k × h + b, where m is the magnification and h is the distance value measured by the laser sensor; the coefficients k and b are stored.
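The first-order fit m = k × h + b described above is an ordinary least-squares line fit; a minimal sketch (the sample heights and magnifications in the usage note are illustrative, not measured values from the patent):

```python
import numpy as np

def fit_magnification(heights, magnifications):
    """First-order polynomial fit m = k * h + b of image magnification m
    against laser-measured distance h; returns (k, b)."""
    k, b = np.polyfit(heights, magnifications, 1)
    return k, b

def magnification_at(k, b, h):
    """Evaluate the fitted magnification at distance h."""
    return k * h + b
```

For example, fitting magnifications sampled every 5 mm over h ± 100 mm recovers k and b, after which `magnification_at` converts a laser height reading into the pixel-to-physical scale for that frame.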
Preferably, the template making comprises: the system judges whether the punching robot has switched its tool center point (TCP), i.e. whether the TCP has been switched from the marking head TCP to the punching head TCP; after switching, the punching robot performs position movement and posture adjustment in its workpiece coordinate system with the punching head TCP as the reference;
if not, the system first controls the punching robot to switch the tool TCP and, once switching is complete, controls each laser sensor to acquire its distance from the vehicle body; if so, each laser sensor acquires its distance from the vehicle body directly;
the system judges whether the distance acquired by each laser sensor is stable, namely, each distance value is smaller than a set threshold value;
if not, the system counts the measurements and judges whether the count exceeds a set value. If it does not, the system continues to judge whether the distances acquired by the laser sensors remain stable; if it does, the system prompts that the vehicle body is not placed stably and returns to judging whether the button for starting template making has been clicked;
if yes, calculating deflection angles of the vehicle body in all directions according to the distances from the vehicle body and obtained by the laser sensors; judging whether the obtained deflection angle of each direction is within a set range;
if not, prompting that the position of the vehicle body is abnormal, needing the user to replace the position of the vehicle body, and returning to judge whether to click a button for starting to make the template for execution;
if so, executing a vehicle body feature acquisition process;
the system judges whether the vehicle body characteristic acquisition is abnormal or not;
if not, the system controls the punching robot to enable the punching head to move to the punching position; the system respectively records the intersection point coordinates, the straight line angles and the punching point position coordinates of the feature points; storing the position data of the corresponding laser sensor, and returning to judge whether to click a button for starting to manufacture the template for execution;
if so, prompting the user to confirm the position of the vehicle body according to the abnormal type, and ending the template making process.
Preferably, the vehicle body feature acquisition process in template manufacturing comprises the following steps:
the system controls a camera to acquire a vehicle body characteristic surface image;
according to the vehicle body deflection angle, the system calls a corresponding distortion calibration file to carry out image distortion correction;
according to the characteristics of the sheet metal of the vehicle body, matching characteristic templates at seams between the sheet metals by the system;
the system judges whether the feature matching succeeded; if not, it prompts that the vehicle body feature was not found, asks the user to confirm the vehicle body position, and ends the process; if so, it corrects the positions of the found features and then searches for the two straight lines at the seam;
the system judges whether both straight lines were found; if not, it prompts that the seam at the vehicle body feature position was not found, asks the user to confirm the vehicle body position, and ends the process; if so, it solves for the intersection point of the two lines and converts the intersection coordinates to physical coordinates.
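Solving for the intersection of the two seam lines is a standard 2-D computation. A minimal sketch, assuming each found line is reported as two points on it (one possible output of the line search):

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1, p2 with the line through
    p3, p4 (each point an (x, y) pair); returns None if the lines are
    parallel, i.e. the two seam lines have no unique intersection."""
    d1 = (p2[0] - p1[0], p2[1] - p1[1])  # direction of line 1
    d2 = (p4[0] - p3[0], p4[1] - p3[1])  # direction of line 2
    cross = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(cross) < 1e-12:
        return None  # parallel (or nearly parallel) lines
    # Parameter t of the intersection along line 1.
    t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / cross
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```

The returned pixel coordinate would then go through the N-point calibration and magnification conversion to yield the physical coordinate of the feature point.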
Preferably, the real-time positioning and punching process comprises the following steps:
carrying out a vehicle body angle obtaining process;
the system judges whether the vehicle body angle was acquired successfully; if not, it prompts a positioning anomaly and returns to judging whether the start real-time positioning button has been pressed; if so, it executes the vehicle body feature acquisition process;
the system judges whether the vehicle body features were acquired successfully; if not, it prompts that the vehicle body feature, or the seam at the feature position, was not found, asks the user to confirm the vehicle body position, and returns to judging whether the start real-time positioning button has been pressed; if so, it calculates the position of the new punching point from the feature point and punching point recorded during template manufacturing, the deflection angles of the vehicle body in each direction, and the feature point coordinates located in real time;
the system judges whether the vehicle body offset is within the set range; if not, it prompts that the vehicle body position is abnormal, asks the user to reposition the vehicle body, and returns to judging whether the start real-time positioning button has been clicked; if so, it adjusts the posture of the punching robot, moves it above the new punching point, and acquires the distance to the new punching point in real time with the laser sensor;
whether the distance has reached the set range is judged; if not, the punching robot advances a further fixed distance toward the new punching point and the distance is checked again; if so, the system controls the punching robot to punch. After real-time positioning is completed, the system returns to judging whether the start real-time positioning button has been clicked.
Preferably, the full-station automatic operation flow executes the single-station real-time positioning process sequentially on a total of 6 stations covering the top, front, and rear of the vehicle body, starting with the stations on the left and right sides of the vehicle roof. After the single-station real-time positioning processes for the two roof-side stations are completed, the three angles of the vehicle body in the three-dimensional directions relative to the template-time body are available, namely the left-right deflection angle, the front-back deflection angle, and the rotation angle in the camera plane; these supply the rotation angles required in the side single-station real-time positioning processes.
The invention has the following beneficial effects:
(1) Machine vision is combined with high-precision sensors to determine the positional relations of locating points on different curved surfaces, realizing high-precision positioning on featureless curved surfaces of the vehicle body and punching by the punching robot.
(2) Through visual positioning of the feature surface and of the rotation angles of the vehicle body feature surface in the three-dimensional directions, combined with the relative positions of the feature points and punching points within the feature surface, the position and punching posture of the punching points are obtained during real-time positioning detection, and the punching robot is finally controlled to punch.
(3) In real-time detection, the laser sensor approaches the camera-face position it occupied during template manufacturing.
Coarse measurement by the laser sensor is combined with coarse visual positioning: in real-time positioning detection, the height of the detected feature surface is obtained by the laser sensor, and the rotation angle and position offset of the feature point relative to its template-time position are obtained by the vision system. Based on the coarse visual positioning result, the laser sensor is driven to the measuring position it occupied during template manufacturing, and a fine laser measurement is then taken. The fine measurement yields an accurate height value, and the vision system performs fine positioning on the feature information of the feature surface.
(4) A high-precision positioning method for the curved surface of the vehicle body under three-dimensional rotation and translation of the plane.
An accurate height value of the measured feature surface is obtained by the laser sensor. Image distortion is corrected with the distortion correction files corresponding to the images at different heights. A fitting function of height against magnification is obtained by fitting the magnifications at different heights, and the magnification during real-time detection is computed from the measured height. Through this magnification, the pixel coordinates of the image feature points of the detected surface are converted into coordinates under the pixel coordinate standard of the template-making image. N-point calibration of the punching robot and camera yields an N-point calibration file; after the acquired vehicle body feature image undergoes the N-point coordinate conversion, the translation of the feature points in physical position is obtained. The rotation angles of the three-dimensional plane are acquired, and the punching point position on the vehicle body curved surface is calculated from these angles and the feature point translation.
(5) Multi-station positioning and punching of multiple different surfaces of the vehicle body are realized without human intervention.
The top-surface features of the vehicle body are located first: the body's rotation angle is obtained from the top-surface feature image, the left-right rotation angle of the feature surface in three-dimensional space is obtained from the laser sensors at the left and right roof stations, and the front-back rotation angle is obtained from the laser sensors at the front and rear roof stations. The rotation angles of the vehicle body in all 3 directions of three-dimensional space are thus available, enabling positioning of the featureless punching points on the top surface. The side feature surfaces are then located using the three-dimensional rotation angles, giving the positions and punching postures of the featureless side punching points.
Drawings
FIG. 1 is a flowchart illustrating the general operation of a method for detecting a correlation surface of an object under test according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a top detection part and a limiting part of the detection method for the relevant surface of the object to be detected according to the embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a right-side detecting element of the method for detecting a relevant surface of an object to be detected according to the embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a left-side detecting element of the method for detecting a relevant surface of an object to be detected according to the embodiment of the present invention;
FIG. 5 is a flow chart of a single-station manual operation of the method for detecting the relevant surface of the object to be detected according to the embodiment of the present invention;
FIG. 6 is a flowchart illustrating camera distortion calibration of a method for detecting a relevant surface of an object to be detected according to an embodiment of the present invention;
FIG. 7 is a flow chart of a full-station automation operation of a method for detecting a relevant surface of an object under test according to an embodiment of the present invention;
FIG. 8 is a schematic view of a deflection angle of a method for detecting a correlation surface of an object to be detected according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a workpiece plane coordinate system established by the punching robot according to the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention relates to a method for detecting the relevant surface of a measured object, comprising: lifting a vehicle body by crane onto the flat plates of a production line fitted with a limiting device; stopping the flat plate with the limiting device installed below when the vehicle body reaches the visual detection position; having the vision system receive the stop signal and present the user with an operation choice; and performing single-station or full-station operation according to the user's selection. Before single-station or full-station real-time positioning, the preparation flows for positioning and punching each station are carried out, namely the distortion calibration flow, the N-point calibration flow, the magnification acquisition flow, and the template manufacturing flow. In the single-station and full-station real-time positioning flows, the positions of the laser sensors are adjusted toward the positions recorded during template manufacturing so as to obtain accurate laser sensor readings. Each preparation flow uses the detection component of the corresponding station to acquire and process images and laser sensor measurements, from which the offset angle between the camera face of that detection component and the corresponding vehicle body surface is calculated; the acquired images are then corrected according to that offset angle. The physical positions of the feature points are further calculated from the images acquired by the cameras of the station detection components, and the position and punching posture of the punching point are accurately calculated from the image feature information and the three-dimensional deflection angles of the vehicle body acquired during template manufacturing and real-time positioning, thereby realizing accurate positioning and punching.
Specifically, the positioning function for the relevant surface is realized by a machine-vision-based relevant-surface positioning and punching system. When the object to be detected reaches the positioning system, positioning and punching at the relevant-surface location are achieved through a laser-sensor positioning method, a machine-vision positioning method, and the positional relation of the relevant surface. The operation method comprises the following steps:
After the positioning system is powered on, the system start-up initialization flow is executed first. It comprises: detecting the hardware state, including the connection state of the punching robots on both sides, the cameras of all stations, and the laser sensors; if all hardware is connected normally, proceeding to the next operation, otherwise raising a hardware-connection abnormality alarm and prompting the user to perform the related hardware maintenance; confirming the position of the punching robot, and if it is not at the zero-return position, performing the zero-return operation along the zero-return track; and reading the detection parameters required during system operation.
after the system is started and initialized, the positioning system executes different processes according to different operations of a user:
If the user selects single-station operation, the single-station manual operation flow is executed. This flow completes the preprocessing operations required before real-time positioning of the designated station, as well as single-station positioning and punching, and mainly comprises: the station selection flow, the camera distortion calibration flow, the punching-robot N-point calibration flow, the magnification acquisition flow, the template making flow, and the real-time positioning flow.
If the user selects full-station operation, the positioning system executes the full-station automatic operation flow. This flow completes the positioning and punching of all stations to be punched in a set order; after all stations are finished, the punching robot moves to the zero point along the zero-return track and waits for the next measured object to arrive. During execution of the full-station automatic operation flow, if an emergency stop is required, the current punching operation is halted by pressing the emergency pause, and the punching robot is returned to zero.
In one embodiment of the invention, the method for detecting the relevant surface of the measured object is realized by a detection apparatus comprising a top detection component, a limiting device, a left detection component, and a right detection component. The vehicle body is first conveyed onto a production-line flat plate: a crane places each vehicle body in turn onto the flat-plate production line equipped with the limiting device, which ensures that the position deviation and angle deviation of the vehicle body on the flat plate are within ±5 cm and ±5° each time.
Further, in an embodiment of the present invention, referring to fig. 2 to 4, the top detection component includes a top right detection camera 1, a first top right light source 2, a second top right light source 3, a top right sensor left-right adjustment device 4, a top right sensor front-back adjustment device 5, a top right ranging sensor 6, a top left detection camera 7, a first top left light source 8, a second top left light source 9, a top left sensor left-right adjustment device 10, a top left sensor front-back adjustment device 11, a top left ranging sensor 12, a top front sensor left-right adjustment device 13, a top front sensor front-back adjustment device 14, and a top front ranging sensor 15. The top left detection camera 7 is arranged above the roof on the left side of the rear seats, facing vertically downward, at an effective measurement height of about 100 cm above the roof. The first top left light source 8 and the second top left light source 9 are mutually perpendicular: the first top left light source 8 runs parallel to the front-back direction of the vehicle and the second top left light source 9 parallel to the left-right direction; both illuminate toward the center of the camera's field of view, at a height of about 30 cm above the roof. The top left ranging sensor 12 is 20 cm above the roof; the top left sensor left-right adjustment device 10 moves the top left sensor front-back adjustment device 11 left and right, and that device in turn moves the top left ranging sensor 12 front and back, so that the sensor can effectively move front-back and left-right in the roof plane. The whole sensor motion-adjustment mechanism of the left-side vision system is located near the roof at the vehicle rear. The top right detection camera 1 is arranged above the roof on the right side of the rear seats, facing vertically downward, at an effective measurement height of about 100 cm. The first top right light source 2 and the second top right light source 3 are mutually perpendicular: the first top right light source 2 runs parallel to the front-back direction of the vehicle and the second top right light source 3 parallel to the left-right direction; both are mounted parallel to the roof, illuminate toward the center of the camera's field of view, and sit about 30 cm above the roof. The top right ranging sensor 6 is 20 cm above the roof; the top right sensor left-right adjustment device 4 moves the top right sensor front-back adjustment device 5 left and right, and that device in turn moves the top right ranging sensor 6 front and back, so that the sensor can effectively move front-back and left-right in the roof plane. The whole sensor motion-adjustment mechanism of the right-side vision system is located near the roof at the vehicle rear.
The top front ranging sensor 15 is 20 cm above the roof; the top front sensor left-right adjusting device 13 moves the top front sensor front-back adjusting device 14 left and right, and that device in turn moves the top front ranging sensor 15 front and back, so that the sensor can effectively move front-back and left-right in the roof plane. This sensor motion-adjustment mechanism is located near the roof on the right side of the front row of the vehicle.
The limiting device comprises a right front wheel front-back limit 16, a right front wheel left-right limit 17, a left front wheel front-back limit 18, a left front wheel left-right limit 19, a right rear wheel left-right limit 20, a right rear wheel front-back limit 21, a left rear wheel front-back limit 22, and a left rear wheel left-right limit 23. The right front wheel front-back limit 16 and the left front wheel front-back limit 18 are positioned in front of the left and right front tires to prevent the vehicle from shifting forward. The right rear wheel front-back limit 21 and the left rear wheel front-back limit 22 are positioned behind the rear tires to prevent the vehicle from shifting backward. The right front wheel left-right limit 17 and the right rear wheel left-right limit 20 are positioned at the sides of the front and rear tires on the right to prevent the vehicle from shifting rightward. The left front wheel left-right limit 19 and the left rear wheel left-right limit 23 are positioned at the sides of the front and rear tires on the left to prevent the vehicle from shifting leftward.
The right side detection means includes a right front side distance measuring sensor 24, a right front side sensor front-rear adjusting device 25, a right front side sensor up-down adjusting device 26, a right front side detection camera 27, a first right front side light source 28, a second right front side light source 29, a right rear side distance measuring sensor 30, a right rear side sensor front-rear adjusting device 31, a right rear side sensor up-down adjusting device 32, a right rear side detection camera 33, a first right rear side light source 34, and a second right rear side light source 35. The right front side detection camera 27 is located to the right of the joint between the right front door and the vehicle body, at an effective measurement distance of about 100 cm and a height of more than 80 cm above the ground, with the camera face pointing left, parallel to the front-back direction of the vehicle. The first right front side light source 28 and the second right front side light source 29 are mutually perpendicular: the first right front side light source 28 runs parallel to the front-back direction of the vehicle and the second right front side light source 29 parallel to the up-down direction; both are mounted parallel to the side of the vehicle body, illuminate toward the center of the camera's field of view, and sit about 30 cm from the right side of the vehicle.
The right front side distance measuring sensor 24 is 20 cm from the sheet metal of the vehicle's right front panel; the right front side sensor up-down adjusting device 26 moves the right front side sensor front-rear adjusting device 25 up and down, and that device in turn moves the right front side distance measuring sensor 24 front and back, so that the sensor can effectively move front-back and up-down in the plane of the vehicle's right side. The whole sensor motion-adjustment mechanism of the right front side vision system is located near the vehicle head. The right rear side detection camera 33 is located to the right of the joint between the right rear fender and the right rear bumper, at an effective measurement distance of about 100 cm and a height of more than 80 cm above the ground, with the camera face pointing left, parallel to the front-back direction of the vehicle. The first right rear side light source 34 and the second right rear side light source 35 are mutually perpendicular: the first right rear side light source 34 runs parallel to the front-back direction of the vehicle and the second right rear side light source 35 parallel to the up-down direction; both are mounted parallel to the side of the vehicle, illuminate toward the center of the camera's field of view, and sit about 30 cm from the right side of the vehicle.
The right rear side distance measuring sensor 30 is 20 cm from the sheet metal on the vehicle's right side; the right rear side sensor up-down adjusting device 32 moves the right rear side sensor front-rear adjusting device 31 up and down, and that device in turn moves the right rear side distance measuring sensor 30 front and back, so that the sensor can effectively move front-back and up-down in the plane of the vehicle's right side. The whole sensor motion-adjustment mechanism of the right rear side vision system is located near the vehicle tail.
The left side detection means includes a left front side distance measuring sensor 36, a left front side sensor front-rear adjusting device 37, a left front side sensor up-down adjusting device 38, a left front side detection camera 39, a first left front side light source 40, a second left front side light source 41, a left rear side distance measuring sensor 42, a left rear side sensor up-down adjusting device 43, a left rear side sensor front-rear adjusting device 44, a left rear side detection camera 45, a first left rear side light source 46, and a second left rear side light source 47. The left front side detection camera 39 is located to the left of the joint between the left front door and the vehicle body, at an effective measurement distance of about 100 cm and a height of more than 80 cm above the ground, with the camera face pointing right, parallel to the front-back direction of the vehicle. The first left front side light source 40 and the second left front side light source 41 are mutually perpendicular: the first left front side light source 40 runs parallel to the front-back direction of the vehicle and the second left front side light source 41 parallel to the up-down direction; both are mounted parallel to the side of the vehicle body, illuminate toward the center of the camera's field of view, and sit about 30 cm from the left side of the vehicle.
The left front side distance measuring sensor 36 is 20 cm from the sheet metal of the vehicle's left front panel; the left front side sensor up-down adjusting device 38 moves the left front side sensor front-rear adjusting device 37 up and down, and that device in turn moves the left front side distance measuring sensor 36 front and back, so that the sensor can effectively move front-back and up-down in the plane of the vehicle's left side. The whole sensor motion-adjustment mechanism of the left front side vision system is located near the vehicle head. The left rear side detection camera 45 is located to the left of the joint between the left rear fender and the left rear bumper, at an effective measurement distance of about 100 cm and a height of more than 80 cm above the ground, with the camera face pointing right, parallel to the front-back direction of the vehicle. The first left rear side light source 46 and the second left rear side light source 47 are mutually perpendicular: the first left rear side light source 46 runs parallel to the front-back direction of the vehicle and the second left rear side light source 47 parallel to the up-down direction; both are mounted parallel to the side of the vehicle, illuminate toward the center of the camera's field of view, and sit about 30 cm from the left side of the vehicle.
The left rear side distance measuring sensor 42 is 20 cm from the sheet metal on the vehicle's left side; the left rear side sensor up-down adjusting device 43 moves the left rear side sensor front-rear adjusting device 44 up and down, and that device in turn moves the left rear side distance measuring sensor 42 front and back, so that the sensor can effectively move front-back and up-down in the plane of the vehicle's left side. The whole sensor motion-adjustment mechanism of the left rear side vision system is located near the vehicle tail.
In one embodiment of the invention, before the system performs full-station automatic operation, single-station manual operation must be carried out first. Through single-station manual operation the system obtains, for each station, the image distortion correction files, the fitting function of magnification versus height, the coordinate conversion file between the image and the punching robot, and the positional relation between the image feature surface and the punching point on the located punching surface; automatic positioning and punching at each station are then realized from the obtained files and data. Referring to fig. 5, which shows the flow chart of single-station manual operation, the single-station manual operation interface provides operation buttons for distortion calibration, punching-robot N-point calibration, magnification acquisition, template making, and real-time positioning, and the corresponding flow is executed according to the user's operation. The specific implementation is as follows:
The single-station operation flow first executes the station selection flow. Before entering the single-station manual operation interface, the user must select the station to operate on, and enters the manual operation interface once the selection is completed.
If the user clicks the "distortion calibration" button, the camera distortion calibration flow is executed. This flow produces distortion calibration files for different angles between the feature surface and the camera face, which are used for image distortion correction when the feature surface and the camera face form different angles.
If the user clicks the "punching robot N-point calibration" button, the punching-robot N-point calibration flow is executed. This flow produces an N-point calibration file, which realizes the conversion of image coordinates into physical coordinates in the punching robot's coordinate system.
If the user clicks the "magnification acquisition" button, the magnification acquisition flow is executed. This flow produces a first-order polynomial function relating the height of the feature surface to the image magnification; with this function, image coordinates of feature surfaces at different heights can be converted, yielding accurate physical coordinates of the feature points.
If the user clicks the "template making" button, the template making flow is executed. This flow obtains the positions of the feature points extracted from the camera image and the positions of the punching points on the punching surface, from which the positional relation between feature points and punching points is derived.
If the user clicks the "real-time positioning" button, the single-station real-time positioning flow is executed. This flow positions the currently selected station in real time to obtain the position of the punching point, and then controls the punching robot to perform the punching operation. During real-time positioning an emergency pause is available: when the user presses the emergency pause button, the emergency pause handling flow is executed.
When the system is in the single-station standby state, the user can exit single-station operation by pressing the exit button.
In one embodiment of the invention, cameras must be installed at every station of the measured object for visual positioning. Because the placement of the measured object varies, the detected feature surface presents different angles to the camera face, so the system's camera distortion calibration flow calibrates distortion for different angles between the feature surface and the camera face. During template making, the distortion calibration file corresponding to the measured angle between feature surface and camera face is called to correct image distortion. The camera distortion calibration flow generates distortion calibration files for different deflection angles between the calibration plate and the camera face; each file consists of the perspective distortion parameters and the radial distortion parameters of the image at the corresponding deflection angle, and is used for image distortion correction. Referring to fig. 6, the camera distortion calibration flow comprises the following steps:
Determine whether the user has clicked the "distortion calibration" button; if so, perform distortion calibration, otherwise wait for user operation.
Move the checkerboard calibration plate to its initial position. The calibration plate is placed at the object to be measured, i.e. near the detection height of the feature points on the vehicle; once the plate is in place, the laser sensors are controlled to acquire the height at each position.
Calculate the deflection angle between the calibration plate and the camera face from the acquired laser-sensor heights. Three laser sensors are used in the distortion calibration flow: one on each of the left rear and right rear sides of the calibration plate, namely the top left ranging sensor and the top right ranging sensor, and one at the right front, namely the top front ranging sensor. The distance between the top left and top right ranging sensors is denoted H1, and the distance between the top right and top front ranging sensors is denoted H2; the heights obtained by the top left, top right, and top front ranging sensors are denoted h1, h2, and h3 respectively. From the three heights, the deflection angles relative to the camera face are obtained: the left-right deflection angle, denoted θ1, and the front-back deflection angle, denoted θ2. The calculation formulas are as follows:
θ1 = arctan((h1 − h2) / H1)
θ2 = arctan((h2 − h3) / H2)
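As an illustration only, the two deflection angles can be computed from the three sensor heights in a few lines. The function name and the conversion to degrees are not part of the patent; this is a minimal sketch assuming the height-difference-over-baseline relation stated above.

```python
import math

def deflection_angles(h1, h2, h3, H1, H2):
    """Compute the left-right (theta1) and front-back (theta2) deflection
    angles, in degrees, from three laser-sensor height readings.
    H1: baseline between the top left and top right ranging sensors;
    H2: baseline between the top right and top front ranging sensors."""
    theta1 = math.degrees(math.atan((h1 - h2) / H1))  # left-right tilt
    theta2 = math.degrees(math.atan((h2 - h3) / H2))  # front-back tilt
    return theta1, theta2

# A level plate (all three heights equal) has zero deflection on both axes.
print(deflection_angles(100.0, 100.0, 100.0, 500.0, 400.0))  # (0.0, 0.0)
```

A 10 cm height difference over a 100 cm baseline, for example, yields a tilt of about 5.7°.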
Determine whether the obtained θ1 and θ2 are stable within the set threshold range. If so, the checkerboard calibration plate is considered stationary; otherwise it is still in motion, and the system checks whether the number of angle measurements exceeds the set limit. If the limit is exceeded, the distortion calibration operation ends with a prompt that the calibration plate is not stable and must be placed again; otherwise θ1 and θ2 continue to be measured.
Distortion calibration is performed for the left-right and front-back deflection angles at 1° intervals over a ±10° range. When calibrating the left-right deflection angle, the front-back deflection angle is kept at 0° and the checkerboard calibration plate is stepped from −10° to +10°, with distortion calibration at each step; the front-back deflection angle is then calibrated in the same way. At each step, the system checks whether the currently measured angle is within the required range: for example, when calibrating at a left-right deflection of −10°, it checks that θ1 lies within the system's threshold around −10° while θ2 remains at 0°. If these conditions hold, the angle is considered within range and processing continues; otherwise the checkerboard calibration plate is moved to adjust its angle until the requirement is met.
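The acceptance test at each sweep step can be sketched as follows. The function name and the 0.2° tolerance are hypothetical; the patent only requires the swept angle to lie within a threshold of the target while the other axis stays at 0°.

```python
def angle_ok(theta_swept, theta_fixed, target, tol=0.2):
    """Accept a sweep step when the swept deflection angle is within
    `tol` degrees of the target angle and the other axis is held
    near 0 degrees (tolerance is a hypothetical example value)."""
    return abs(theta_swept - target) <= tol and abs(theta_fixed) <= tol

# Sweep targets: -10 to +10 degrees in 1-degree steps.
targets = list(range(-10, 11))
assert angle_ok(-10.05, 0.1, targets[0])   # close enough to -10, other axis flat
assert not angle_ok(-9.5, 0.0, -10)        # half a degree off target: rejected
```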
If the angle meets the requirement, the camera is triggered to shoot and acquire images.
And carrying out distortion calibration on the acquired image to generate a distortion calibration file.
After the distortion calibration file is generated, it is applied to the image acquired by the camera for distortion correction.
The horizontal and vertical pixel pitches of the checkerboard are then measured at the top, bottom, left, right, and center positions of the calibration plate.
Determine whether the horizontal and vertical pixel pitches at all five positions are within the threshold range around the standard value. If so, distortion calibration succeeded and the image at the current angle can be corrected properly; the generated distortion calibration file is stored together with the corresponding left-right and front-back deflection angles of the calibration plate. If not, distortion correction failed: the distances from the laser sensors to the calibration plate are re-acquired, the plate angle is recalculated, and distortion calibration at the current angle is repeated.
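The success check above reduces to comparing ten pitch values against a threshold. A minimal sketch, with hypothetical function name, standard pitch, and tolerance:

```python
def calibration_ok(pitches, standard, tol):
    """pitches: list of (horizontal, vertical) pixel pitches measured at
    the top, bottom, left, right, and center of the checkerboard.
    All values must lie within `tol` pixels of the standard pitch."""
    return all(abs(h - standard) <= tol and abs(v - standard) <= tol
               for h, v in pitches)

# Five sample positions with a hypothetical standard pitch of 40 px.
pitches = [(40.1, 39.9), (40.0, 40.2), (39.8, 40.0), (40.1, 40.1), (40.0, 40.0)]
assert calibration_ok(pitches, standard=40.0, tol=0.5)
# A position with residual distortion (42 px pitch) fails the check.
assert not calibration_ok(pitches + [(42.0, 40.0)], standard=40.0, tol=0.5)
```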
If distortion calibration at the current angle succeeds, the system checks whether all angles have been calibrated. If so, distortion calibration is finished and the system waits for user operation; if not, the calibration plate angle is adjusted and distortion calibration continues at the next angle.
In an embodiment of the present invention, the vision system performs feature positioning to obtain the pixel coordinates of the feature points, while the punching robot operates in its own coordinate system. Since the robot must position the punching point according to the feature-point positions detected by the vision system, the visual coordinates have to be converted into punching-robot coordinates; once the two coordinate systems are unified, the robot can be controlled by the relative position deviations reported by the vision system. The system generates the N-point calibration file through the punching-robot N-point calibration flow, and during real-time positioning uses this file to convert between visual coordinates and punching-robot coordinates. The punching-robot N-point calibration flow comprises the following steps:
After entering the N-point calibration interface, wait for the user to start the N-point calibration operation: if the user clicks the "N-point calibration" button, N-point calibration is performed; otherwise wait for user operation.
The punching robot establishes a world coordinate system based on the feature surface of the vehicle body. The robot uses a world coordinate system during operation, and this coordinate system can be redefined according to the actual working conditions. The system takes the camera face over the vehicle roof as the XY plane and the vertical direction as the Z axis; redefining the world coordinate system in this way keeps the directions of the visual coordinate system and the robot's XY coordinate system consistent.
The punching robot sets its tool TCP (tool center point) to the N-point calibration marking head, i.e. the robot's posture-calculation reference point is set to the marking head. With the reference point moved to the marking head, posture changes of the robot do not affect the marking head's coordinates, and the marking head's coordinates are the robot's actual position during N-point calibration.
Acquire the distances from the upper laser sensors to the marking plate, and calculate the front-back and left-right deflection angles of the plate.
Determine whether the angle is within the angle range covered by distortion correction. If so, control the punching robot to mark N circular points within the camera's field of view; otherwise prompt the user to adjust the calibration plate angle and wait for user operation.
After the robot finishes marking the N circular points, control it to leave the marking area and move out of the field of view.
Control the camera to acquire an image, and call the distortion calibration files corresponding to the measured left-right and front-back deflection angles between the marking plate and the camera face to perform distortion correction.
And finding the circle center pixel coordinates of the N marked points by a circle center finding algorithm.
Perform N-point calibration from the circle-center pixel coordinates of the N calibration points and the corresponding physical coordinates of the punching robot to obtain the N-point calibration file. This file is the transformation matrix between the camera's visual coordinate system and the punching robot's coordinate system on the XY plane; through it, camera pixel coordinates can be converted into the robot's physical coordinates.
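The patent does not specify the form of the transformation matrix; assuming a 2-D affine model fitted by least squares, the N-point calibration could be sketched as below. Function names are illustrative, and the point values are synthetic.

```python
import numpy as np

def n_point_calibration(pixel_pts, robot_pts):
    """Fit a 2-D affine transform mapping pixel coordinates to
    punching-robot coordinates from N >= 3 corresponding points.
    Returns a 2x3 matrix M such that robot ~= M @ [u, v, 1]."""
    P = np.hstack([np.asarray(pixel_pts, float),
                   np.ones((len(pixel_pts), 1))])   # N x 3 homogeneous pixels
    R = np.asarray(robot_pts, float)                # N x 2 robot coordinates
    M, *_ = np.linalg.lstsq(P, R, rcond=None)       # least-squares fit, 3 x 2
    return M.T                                      # 2 x 3

def to_robot(M, uv):
    """Convert one pixel coordinate to robot physical coordinates."""
    u, v = uv
    return M @ np.array([u, v, 1.0])

# Synthetic check: 0.5 mm/px scale plus a translation of (10, 20) mm.
pix = [(0, 0), (100, 0), (0, 100), (100, 100)]
rob = [(10, 20), (60, 20), (10, 70), (60, 70)]
M = n_point_calibration(pix, rob)
print(to_robot(M, (50, 50)))  # ~ [35. 45.]
```

With more than three non-collinear points, the least-squares fit also averages out small marking and detection errors, which is why N-point (rather than 3-point) calibration is used.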
After the N-point calibration file is obtained, the current image is put through the N-point coordinate conversion to verify that the generated calibration file is correct.
Circle search is performed on the marked points in the current distortion-corrected image to find the circle-center pixel coordinates of the N points, and the generated N-point calibration file is called to convert them into the punching robot's physical coordinates.
And comparing the obtained physical coordinates of the N points with the physical coordinates of the N points recorded during dotting by the punching robot to obtain statistical data. And judging whether the differences are within the set threshold range; if so, storing the N-point calibration file together with the height value measured by the corresponding laser sensor; otherwise, prompting that the coordinate conversion after N-point calibration is abnormal, asking the user to re-perform the N-point calibration, and waiting for user operation.
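The N-point calibration and verification steps above can be sketched as follows, assuming a 2D affine model between camera pixel coordinates and punching-robot physical coordinates solved by least squares; the function names and the tolerance value are illustrative, not from the patent:

```python
import numpy as np

def fit_n_point_calibration(pixel_pts, robot_pts):
    """Fit a 2D affine transform mapping camera pixel coordinates to robot
    physical coordinates from N >= 3 dot correspondences.
    Returns the 2x3 matrix M such that robot ~= M @ [x, y, 1]."""
    pixel_pts = np.asarray(pixel_pts, dtype=float)
    robot_pts = np.asarray(robot_pts, dtype=float)
    ones = np.ones((len(pixel_pts), 1))
    A = np.hstack([pixel_pts, ones])                    # N x 3 design matrix
    M, *_ = np.linalg.lstsq(A, robot_pts, rcond=None)   # 3 x 2 solution
    return M.T                                          # 2 x 3

def pixel_to_robot(M, pt):
    """Convert one pixel coordinate to robot physical coordinates."""
    x, y = pt
    return M @ np.array([x, y, 1.0])

def verify_calibration(M, pixel_pts, robot_pts, tol=0.5):
    """Re-convert every dot and compare against the recorded robot
    coordinates, as in the verification step above; tol is a stand-in
    for the set threshold."""
    errs = [np.linalg.norm(pixel_to_robot(M, p) - r)
            for p, r in zip(pixel_pts, robot_pts)]
    return max(errs) <= tol, errs
```

With exact correspondences the residuals are numerically zero; in practice the threshold check decides whether the calibration file is saved or redone.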
In one embodiment of the invention, due to the physical characteristics of machine vision imaging, the measured object appears larger in the image when the measured surface is near and smaller when it is far. When converting the physical coordinates of feature points, the distance between the template feature surface and the laser sensor may differ between the template manufacturing process and the real-time positioning and punching process, so the coordinate standards of otherwise identical images acquired in the two processes differ, and a position deviation would arise if coordinate conversion were performed directly. Therefore the feature-point coordinates of images acquired in the two processes are first converted into the image coordinates at the time of N-point calibration, and only then converted into physical coordinates. Since the image magnification quantitatively represents the imaged size of the measured object at different heights, this conversion uses the magnification at different heights. The image magnification acquisition process includes the following steps:
after entering a magnification acquisition interface, waiting for a user to perform magnification acquisition operation, if the user clicks a magnification acquisition button, acquiring magnification, and otherwise, waiting for user operation;
by combining the tool precision at the vehicle body positioning and punching station with the condition of the vehicle body, the system acquires the magnification over ±10 cm around the standard distance h between the measured surface of the vehicle body and the laser sensor, and fits a function of magnification versus distance. The specific measurement runs from (standard distance h − 10 cm) to (standard distance h + 10 cm) in 5 mm steps. The calibration plate is therefore first moved to the position (standard distance h − 10 cm), the distance from the upper laser sensor to the calibration plate is acquired, and the left-right deflection angle θ1' and front-back deflection angle θ2' of the calibration plate are obtained using the angle calculation method from the distortion calibration process.
And judging whether the theta 1 'and the theta 2' are stable or not, if not, indicating that the calibration plate is not completely static, judging whether the measurement times exceed a set value or not, if so, ending the amplification rate acquisition, waiting for the operation of a user, and if not, continuously acquiring the distance from the laser sensor to the calibration plate and calculating the angle.
If θ1' and θ2' remain stable, whether they are within the required range is judged. When the camera and the measured surface are exactly perpendicular and the lens distortion has been removed by distortion correction of the collected image, no distortion remains anywhere in the image; therefore, to remove the influence on magnification of distortion introduced by angle, θ1' and θ2' are required to be about 0°, within ±0.1°. If this condition is met, the camera is triggered to capture an image and the image is stored. If θ1' and θ2' are not within the required range, the angle of the calibration plate is adjusted and then θ1' and θ2' are acquired again.
And judging whether the images at all height positions have been acquired. If so, all images at all heights are processed to obtain the pixel spacing of the checkerboard squares at the same position on the calibration plate; squares in the middle of the image are selected because distortion is smallest there and the resulting pixel-spacing error is minimal. The ratio between the pixel spacing and the actual physical size of the squares then gives the image magnification at the calculated height. If the images at all height positions have not been acquired, the height of the calibration plate is adjusted and the image at the next height is acquired.
And after the magnification at each height is obtained, first-order polynomial fitting against the corresponding heights is performed to obtain fitting coefficients k and b, namely m = k·h + b, wherein m is the magnification and h is the distance value measured by the laser sensor; the fitting coefficients k and b are stored.
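A minimal sketch of the first-order magnification fit m = k·h + b described above, using illustrative height/magnification values rather than measured data:

```python
import numpy as np

# Heights h (laser-sensor distance, mm) sampled in 5 mm steps across
# (h_std - 100 mm, h_std + 100 mm); the true coefficients below are
# assumed for illustration, not taken from the patent.
heights = np.arange(900.0, 1105.0, 5.0)
true_k, true_b = -0.0004, 1.25
mags = true_k * heights + true_b

# First-order polynomial fit, as in the flow above.
k, b = np.polyfit(heights, mags, 1)

def magnification(h):
    """Magnification at laser-measured distance h from the stored fit."""
    return k * h + b
```

At run time the stored coefficients are evaluated with the live sensor distance to get the magnification used for coordinate compensation.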
In an embodiment of the present invention, the template manufacturing process includes the following steps:
clicking a template making button, and entering a template making flow interface;
and judging whether a button for starting to make the template is clicked or not.
If not, judging whether to click the template making exit button, if so, exiting the template making interface, otherwise, not operating the system and waiting for the user to click the template making start button.
If yes, the system judges whether the punching robot switches the tool TCP (tool center point), namely the system switches the position of the punching robot TCP from the marking head TCP to the punching head TCP, and after the switching is finished, the punching robot carries out position movement and posture adjustment in a workpiece coordinate system of the punching robot by taking the punching head TCP as a standard.
If not, the system firstly controls the punching robot to switch the tool TCP, and controls the laser sensors (including 3 on the top and 4 on the side) to acquire the distance from the vehicle body after switching is completed. If yes, each laser sensor (including 3 on the top and 4 on the side) acquires the distance from the vehicle body.
The system judges whether the distance acquired by each laser sensor is stable, namely, each distance value is smaller than a set threshold value.
If not, the system counts the measurements and judges whether the count exceeds a set value; if not, it continues to judge whether the distances acquired by the laser sensors remain stable, i.e. whether all distance values are smaller than the set threshold; if so, it prompts that the vehicle body is not placed stably and returns to judging whether the start-template-making button has been clicked.
If so, the yaw angle of the vehicle body in each direction (the rotation angle α of the vehicle body about the Y-axis direction, the rotation angle β of the vehicle body about the X-axis direction, and the rotation angle γ of the vehicle body about the Z-axis direction) is calculated from the distances from the vehicle body obtained by the respective laser sensors (three sensors at the top and two sensors on the right or left side).
And judging whether the obtained deflection angle of each direction is in a set range.
If not, prompting that the position of the vehicle body is abnormal, needing the user to replace the position of the vehicle body, and returning to judge whether to click a button for starting to make the template for execution.
And if so, executing a vehicle body feature acquisition process.
The system judges whether the vehicle body characteristic acquisition is abnormal.
If not, the system controls the punching robot to enable the punching head to move to the punching position; the system respectively records the intersection point coordinates, the straight line angles and the punching point position coordinates of the feature points; and storing the position data of the corresponding laser sensor, and returning to judge whether to click a button for starting to manufacture the template for execution.
If yes, prompting the user to confirm the position of the vehicle body according to the abnormality type (the vehicle body feature not being found, or the seam at the feature position not being found), and ending the template manufacturing process.
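The distance-stability test used repeatedly in these flows can be sketched as below, under the assumption that "stable" means successive laser readings differ by less than the set threshold (the patent does not spell out the exact comparison):

```python
def distances_stable(samples, threshold):
    """Return True when every pair of successive distance readings
    differs by less than the set threshold, i.e. the vehicle body (or
    calibration plate) is considered at rest."""
    return all(abs(b - a) < threshold
               for a, b in zip(samples, samples[1:]))
```

The surrounding flow would call this after each new reading, counting attempts and aborting with a "vehicle body not placed stably" prompt once the attempt count exceeds its set value.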
Further, in an embodiment of the present invention, a vehicle body feature obtaining process in template manufacturing is as follows:
the system controls the camera to acquire the vehicle body characteristic surface image.
And according to the deflection angle of the vehicle body, the system calls a corresponding distortion calibration file to carry out image distortion correction.
According to the characteristics of the sheet metal of the vehicle body, the system matches characteristic templates of seams between the sheet metals.
The system determines whether the feature lookup is successful, i.e., whether the matching is successful. If not, the system prompts that the vehicle body characteristics are not found, and then the system asks for confirming the vehicle body position, and the process is ended; if so, position correction is carried out on the found features, and then two straight lines at the joint are respectively searched.
The system determines whether the two lines are successfully found. If not, prompting that no seam at the characteristic position of the vehicle body is found, and asking for confirming the position of the vehicle body, and ending the process; if yes, solving an intersection point of the two found straight lines, and carrying out coordinate conversion on the intersection point to obtain a physical coordinate.
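The seam-intersection step (solving the crossing point of the two found straight lines) can be sketched as follows; the parallel-line guard mirrors the "seam not found" branch, and the function and its tolerance are illustrative:

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1, p2 with the line through
    p3, p4, each given as (x, y) pairs. Returns None for (near-)parallel
    lines, corresponding to the seam-not-found branch above."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None  # lines are parallel: no usable seam corner
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)
```

The returned pixel intersection would then go through the calibration file's coordinate conversion to yield the physical feature coordinate.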
In an embodiment of the present invention, the real-time positioning and punching process includes the following steps:
clicking a real-time positioning and punching button to enter a real-time positioning and punching flow interface;
and judging whether to press a real-time positioning starting button.
If not, judging whether the exiting single-station operating button is clicked, if so, exiting the real-time positioning and punching interface, otherwise, not operating the system, and waiting for a user to click the starting real-time positioning button.
And if so, carrying out a vehicle body angle acquisition process.
The system judges whether the acquisition of the vehicle body angle is successful; if not, prompting abnormal positioning, returning to continuously judging whether to press a start real-time positioning button. And if so, executing a vehicle body feature acquisition process.
The system judges whether the vehicle body characteristic acquisition is successful; if not, prompting that the vehicle body characteristic is not found or prompting that the seam at the position of the vehicle body characteristic is not found, asking to confirm the position of the vehicle body, returning to continuously judge whether to click a start real-time positioning button or not; if so, calculating to obtain the position of a new punching point according to the characteristic point, the punching point, the deflection angle of the vehicle body in each direction and the real-time positioning characteristic point coordinate in the template manufacturing process.
The system judges whether the vehicle body offset is within the set range. If not, an abnormal vehicle body position is prompted, the user must reposition the vehicle body, and the flow returns to judging whether the start real-time positioning button has been clicked. If so, the punching robot adjusts its posture and is controlled to move above the new punching point, and the laser sensor acquires the distance to the new punching point in real time.
And judging whether the distance reaches a set range. If not, the punching robot continues to approach the new punching point for a certain distance, and then whether the distance reaches the set range is judged; if yes, the system controls the punching robot to punch. And after the real-time positioning is finished, returning to continuously judging whether the real-time positioning starting button is clicked or not.
Further, the vehicle body angle obtaining process in the real-time positioning punching process is as follows:
and executing a sensor approaching template point flow.
The system judges whether the approach is completed; if not, the sensor approaching template point process is continuously executed. If yes, the system calculates and obtains the non-camera surface deflection angle according to the distance measured by the laser sensor.
And executing a vehicle body characteristic acquisition process to obtain a camera surface deflection angle. Thus, the rotation angles of the vehicle body in three directions are obtained.
The vehicle body angle acquisition flow described above differs between the two cases of real-time positioning and punching on the vehicle body top and real-time positioning and punching on the vehicle body side:
taking positioning and punching on the top of the vehicle body as an example, a workpiece surface coordinate system established by the punching robot is shown in fig. 9, wherein the positive direction of the Y axis is the direction of the vehicle head.
The non-camera surface deflection angle obtained by real-time positioning and punching on the top of the vehicle body comprises a rotation angle alpha (a left-right inclination angle of the vehicle body) of the vehicle body around the Y-axis direction and a rotation angle beta (a front-back inclination angle of the vehicle body) of the vehicle body around the X-axis direction, and the top camera surface deflection angle refers to a rotation angle gamma (a rotation angle in a vehicle top plane parallel to the top camera surface) of the vehicle body around the Z-axis direction.
The camera face deflection angle acquired by real-time positioning and punching on the side face of the car body refers to a rotation angle beta of the car body around the X-axis direction acquired by a side camera, and the non-camera face deflection angle comprises a rotation angle alpha around the Y-axis direction and a rotation angle gamma of the car body around the Z-axis direction acquired by a top camera.
In one embodiment of the invention, because angle changes exist at different positions of the curved surface space, the point laser sensor may have position deviation during distance acquisition, so that the acquired distance is inaccurate. The sensor approaching template point flow in the vehicle body angle acquisition flow is as follows:
and each laser sensor acquires the distance from the vehicle body and performs rough positioning.
The system judges whether the distance acquired by each laser sensor is stable, namely, each distance value is smaller than a set threshold value. If not, the system counts the measurement times and judges whether the times exceed a set value, if not, the system returns to continue coarse positioning, and if so, the system prompts that the vehicle body is not stably placed, and the process is ended.
If yes, calculating the deflection angle of each direction of the vehicle body according to the distance from the vehicle body obtained by each laser sensor.
The system judges whether the deflection angles are all in a set range; if not, prompting that the position of the vehicle body is abnormal, needing a user to replace the vehicle body, and ending the process; and if so, executing a vehicle body feature acquisition process.
And solving the position difference between the target point and the template point, calculating the deviation value of the sensor moving to the template point and controlling the sensor to move.
After moving, the laser sensor measures the distance from the vehicle body, and calculates the deflection angle of the vehicle body in each direction according to the distance from the vehicle body.
The sensor approaching template point flow is different for the two conditions of real-time positioning and punching on the top of the vehicle body and real-time positioning and punching on the side surface of the vehicle body:
each laser sensor in the real-time positioning and punching of the top of the vehicle body is a top sensor (6, 12, 15), and the calculated deflection angle of each direction of the vehicle body comprises a rotation angle a of the vehicle body around the Y-axis direction and a rotation angle beta of the vehicle body around the X-axis direction.
Each laser sensor in the real-time positioning and punching of the side surface of the vehicle body comprises a top sensor (6, 12) and a sensor which corresponds to the positioning and punching position and is used for measuring the working distance of a camera.
The vehicle body feature acquisition flow in real-time positioning and punching is consistent with that in template manufacturing, except for one change: instead of directly performing coordinate conversion on the intersection point of the two found straight lines to obtain a physical coordinate, the intersection point is first converted into the pixel coordinates of the template image according to the height of the real-time feature point and the height of the template feature point, and coordinate conversion is then performed on that point to obtain the physical coordinates.
The formula for converting the intersection point (i.e. the feature point) of the two straight lines into the pixel coordinates of the template image during vehicle body feature acquisition in real-time positioning and punching is as follows:
X' = (X - X_center) · m_0 / m_1 + X_center
Y' = (Y - Y_center) · m_0 / m_1 + Y_center
wherein m_0 represents the magnification of the feature point during template manufacturing, obtained by substituting the distance value measured by the corresponding station sensor during template manufacturing into the first-order fitting function from the magnification acquisition flow; m_1 represents the magnification of the feature point in real-time positioning, obtained by substituting the distance value measured after sensor position correction in real-time positioning into the same fitting function; the point (X, Y) represents the pixel coordinates of the feature point in real-time positioning, the point (X_center, Y_center) represents the pixel coordinates of the image center in real-time positioning, and the point (X', Y') represents the pixel coordinates of the feature point in real-time positioning after magnification compensation.
Coordinate conversion is carried out on the point (X', Y') to obtain the physical coordinate point (X_1, Y_1).
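A sketch of the magnification compensation, under the assumption that it scales pixel offsets about the image centre by the ratio m_0/m_1 so that the real-time pixel coordinate matches the template image's scale:

```python
def compensate_magnification(pt, center, m0, m1):
    """Rescale a real-time feature-point pixel coordinate about the
    image centre so it matches the template image's magnification.
    m0: magnification at template manufacturing; m1: magnification at
    real-time positioning (both from the fitted m = k*h + b).
    Assumes X' = (X - Xc) * m0/m1 + Xc, a reconstruction of the
    patent's compensation formula."""
    x, y = pt
    xc, yc = center
    s = m0 / m1
    return ((x - xc) * s + xc, (y - yc) * s + yc)
```

The compensated point (X', Y') then goes through the N-point calibration file's coordinate conversion to obtain (X_1, Y_1).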
The specific calculation process of the new punching point position (X_new, Y_new) in the real-time positioning and punching flow is as follows:
case 1: the deflection angles of the three directions of the vehicle body measured in the real-time positioning process are the same as those measured in the template manufacturing process;
X_new = X_1 + (X_d0 - X_0)
Y_new = Y_1 + (Y_d0 - Y_0)
wherein the point (X_0, Y_0) represents the physical coordinates obtained by coordinate conversion of the feature-point pixel coordinates in template manufacturing, and the point (X_d0, Y_d0) represents the physical coordinates of the punching point in template manufacturing.
Case 2: the deflection angles α and β of the vehicle body measured in real-time positioning are the same as in template manufacturing (only the rotation angle γ differs);
X_new = X_1 + ΔX_γ
Y_new = Y_1 + ΔY_γ
ΔX_γ = (X_d0 - X_0)·cos(γ_Δ) - (Y_d0 - Y_0)·sin(γ_Δ)
ΔY_γ = (X_d0 - X_0)·sin(γ_Δ) + (Y_d0 - Y_0)·cos(γ_Δ)
wherein the angle γ_Δ is equal to the rotation angle γ_1 of the vehicle body at real-time positioning minus the rotation angle γ_0 at template manufacturing; ΔX_γ represents the X-axis offset of the punching point caused by the vehicle body having rotated by the angle γ_Δ relative to template manufacturing, and ΔY_γ represents the corresponding Y-axis offset of the punching point.
Case 3: only the vehicle body deflection angle β measured in real-time positioning is the same as in template manufacturing;
(1) If the line segment formed by the punching point and the characteristic point in the template manufacturing process does not have an included angle with the template plane;
X_new = X_1 + ΔX_γ·cos(α_Δ)
Y_new = Y_1 + ΔY_γ
wherein the angle α_Δ is equal to the rotation angle α_1 of the vehicle body at real-time positioning minus the rotation angle α_0 at template manufacturing; the X-axis offset ΔX_γ from case 2 is multiplied by cos(α_Δ) because the left-right inclination of the vehicle body changes it, i.e. the offset becomes its projection on the template plane.
(2) If the line segment formed by the punching point and the characteristic point has an included angle eta with the template plane when the template is manufactured, the punching point is lower than the characteristic point and inclines downwards relative to the characteristic point (or the punching point is higher than the characteristic point and inclines upwards relative to the characteristic point);
X_new = X_1 + ΔX_γ·cos(η_X + α_Δ)/cos(η_X)
Y_new = Y_1 + ΔY_γ
wherein the angle η_X is equal to the projection, in the plane formed by the X axis and the Z axis, of the included angle between the template plane and the line segment formed by the punching point and the feature point at template manufacturing; ΔX_γ·cos(η_X + α_Δ)/cos(η_X) is the X-axis offset that must be compensated when the left-right inclination at real-time positioning differs from template manufacturing and the included angle η exists.
(3) If the line segment formed by the punching point and the characteristic point has an included angle eta with the template plane when the template is manufactured, the punching point is higher than the characteristic point and the punching point inclines downwards relative to the characteristic point (or the punching point is lower than the characteristic point and the punching point inclines upwards relative to the characteristic point);
X_new = X_1 + ΔX_γ·cos(η_X - α_Δ)/cos(η_X)
Y_new = Y_1 + ΔY_γ
case 4: the vehicle body deflection angle a measured in real-time positioning is the same as that measured in template manufacturing;
(1) If the line segment formed by the punching point and the characteristic point in the template manufacturing process does not have an included angle with the template plane;
X_new = X_1 + ΔX_γ
Y_new = Y_1 + ΔY_γ·cos(β_Δ)
wherein the angle β_Δ is equal to the rotation angle β_1 of the vehicle body at real-time positioning minus the rotation angle β_0 at template manufacturing; the Y-axis offset ΔY_γ from case 2 is multiplied by cos(β_Δ) because the front-back inclination of the vehicle body changes it, i.e. the offset becomes its projection on the template plane.
(2) If the line segment formed by the punching point and the characteristic point has an included angle eta with the template plane when the template is manufactured, the punching point is lower than the characteristic point and inclines downwards relative to the characteristic point (or the punching point is higher than the characteristic point and inclines upwards relative to the characteristic point);
X_new = X_1 + ΔX_γ
Y_new = Y_1 + ΔY_γ·cos(η_Y + β_Δ)/cos(η_Y)
wherein the angle η_Y is equal to the projection, in the plane formed by the Y axis and the Z axis, of the included angle between the template plane and the line segment formed by the punching point and the feature point at template manufacturing; ΔY_γ·cos(η_Y + β_Δ)/cos(η_Y) is the Y-axis offset that must be compensated when the front-back inclination at real-time positioning differs from template manufacturing and the included angle η exists.
(3) If the line segment formed by the punching point and the characteristic point in the template manufacturing process forms an included angle eta with the template plane, the punching point is higher than the characteristic point, and the punching point inclines downwards relative to the characteristic point (or the punching point is lower than the characteristic point and the punching point inclines upwards relative to the characteristic point);
X_new = X_1 + ΔX_γ
Y_new = Y_1 + ΔY_γ·cos(η_Y - β_Δ)/cos(η_Y)
case 5: the measured inclination angles a and beta of the car body during real-time positioning are different from those during template manufacturing;
(1) If the line segment formed by the punching point and the characteristic point in the template manufacturing process does not have an included angle with the template plane;
X_new = X_1 + ΔX_γ·cos(α_Δ)
Y_new = Y_1 + ΔY_γ·cos(β_Δ)
(2) If the line segment formed by the punching point and the characteristic point has an included angle eta with the template plane when the template is manufactured, the punching point is lower than the characteristic point and inclines downwards relative to the characteristic point (or the punching point is higher than the characteristic point and inclines upwards relative to the characteristic point);
X_new = X_1 + ΔX_γ·cos(η_X + α_Δ)/cos(η_X)
Y_new = Y_1 + ΔY_γ·cos(η_Y + β_Δ)/cos(η_Y)
(3) If the line segment formed by the punching point and the characteristic point has an included angle eta with the template plane when the template is manufactured, the punching point is higher than the characteristic point and the punching point inclines downwards relative to the characteristic point (or the punching point is lower than the characteristic point and the punching point inclines upwards relative to the characteristic point);
X_new = X_1 + ΔX_γ·cos(η_X - α_Δ)/cos(η_X)
Y_new = Y_1 + ΔY_γ·cos(η_Y - β_Δ)/cos(η_Y)
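The case analysis above can be summarised in one sketch, under the assumption that the new punching point is the real-time feature point plus the template punch-to-feature offset rotated by γ_Δ and then projected for the roll/pitch differences α_Δ, β_Δ with elevation angles η_X, η_Y; this is a reconstruction of the patent's per-case formulas, not a verbatim implementation, and the signs of α_Δ/β_Δ relative to η encode the up/down sub-cases:

```python
import math

def new_punch_point(feat_rt, feat_t, punch_t,
                    gamma_delta=0.0, alpha_delta=0.0, beta_delta=0.0,
                    eta_x=0.0, eta_y=0.0):
    """feat_rt: real-time feature point (X_1, Y_1) physical coords;
    feat_t: template feature point (X_0, Y_0);
    punch_t: template punching point (X_d0, Y_d0);
    angles in radians. With all angles zero this reduces to case 1."""
    dx = punch_t[0] - feat_t[0]
    dy = punch_t[1] - feat_t[1]
    # Case 2: rotate the in-plane offset by the yaw difference gamma_delta.
    dxg = dx * math.cos(gamma_delta) - dy * math.sin(gamma_delta)
    dyg = dx * math.sin(gamma_delta) + dy * math.cos(gamma_delta)
    # Cases 3-5: project the offsets for the roll/pitch differences,
    # compensating the elevation angles eta_x / eta_y when present.
    dxg *= math.cos(eta_x + alpha_delta) / math.cos(eta_x)
    dyg *= math.cos(eta_y + beta_delta) / math.cos(eta_y)
    return (feat_rt[0] + dxg, feat_rt[1] + dyg)
```

With γ_Δ = α_Δ = β_Δ = 0 the result is the plain template offset (case 1); a pure yaw difference reproduces case 2; nonzero α_Δ or β_Δ applies the cosine projections of cases 3 to 5.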
in one embodiment of the invention, the full-station automatic operation flow finishes the positioning and punching operation of all stations to be punched according to a set sequence, and after the positioning and punching operation of all stations is finished, the punching robot is operated to a zero point according to a zero-returning track and waits for the arrival of the next detected object. As shown in fig. 7, the full-station automatic operation process includes the following steps:
and after entering the all-station operation interface, waiting for a user to click an all-station operation button, executing a vehicle in-place judgment process if the user clicks the all-station operation, and otherwise, waiting for the user to operate.
The vehicle body production line consists of a number of large flat plates on which the front and rear tires of the vehicle body rest against side limiting devices; after a vehicle body is placed at the limit position, the plates move forward in sequence. When the vehicle body reaches the visual positioning station, the limiting device at the bottom of the plate triggers a limit signal. The vehicle-in-place judgment flow therefore first judges whether the limit signal has been received; if so, the laser sensor at the visual positioning station is controlled to acquire the height value to the roof in real time. If the height value remains stable within a set time, the vehicle body is considered in place and stable and the single-station real-time positioning flow is executed; otherwise the laser sensor height value continues to be acquired and its stability over the set time is judged again.
The system executes the single-station real-time positioning flow in sequence for the vehicle body top and for the front and rear stations on the left and right sides of the vehicle body, 6 stations in total. The stations on the left and right sides of the roof execute the single-station flow first: because the front-back and left-right deflection angles of the top relative to the camera surface are smallest, the rotation angle of the top within the camera surface can be measured accurately by vision, and acquiring the three-dimensional angular deviation of the vehicle body relative to the template in this way is the most accurate. After the single-station flows on both sides of the roof are completed, the three angles of the vehicle body relative to the template-manufacturing vehicle body in the three-dimensional directions are available, of which the left-right deflection angle and the rotation angle in the camera plane can be used as the rotation angles required in the side-face single-station flows. The system is provided with 7 laser sensors in total: the roof carries 3 for acquiring the front-back and left-right deflection angles of the vehicle body camera surface, with the left and right sensors below the roof camera also used to acquire the actual vehicle body height, and the front and rear stations on the left and right sides of the vehicle body each carry 1 sensor for acquiring the distance to the vehicle body.
And judging whether all stations complete punching, if so, controlling the punching robot to return to a standby point to wait for the arrival of the next vehicle body, and otherwise, continuing to position the vehicle body of the next station in real time.
The surface of the detected object, the automobile, is not a perfect plane; the sheet metal of the automobile surface is curved. If the sensor position were kept unchanged, the distance acquired by the laser sensor in real-time positioning would deviate from the distance acquired at the same nominal point during template manufacturing, and the vehicle body angle computed from those distances would not be the correct angle. Therefore the camera is fixed while the laser sensor is movable.
After the sensor approaching flow has been executed, the front-back and left-right deflection angles of the roof can be calculated from the laser sensor readings. The solving method is illustrated as follows (see fig. 8): if circle 1 is the initial position, the distances measured on the left and right rear sides are HG and AC respectively, and the distance between the two sensors is AH. Circle 2 is the position after parking again; if the sensors did not move, the measured distances would be HD and AB. In actual use the sensors approach the template points according to the visual positioning result, so only the distances after the sensors have moved need to be calculated: I and J are the new sensor positions and the measured distances are IM and JK; as shown, the vehicle angle does not change much. At template manufacturing the sensors do not move, and the deflection angle of the vehicle body in a given direction is calculated from the sensor distance relation; in real-time detection, after the sensors have moved by the approximation method, the deflection angle in that direction is likewise calculated from the sensor distance relation. The calculation formulas are as follows:
θ = tan⁻¹((HG - AC)/AH)

where θ is the included angle between AH and GC, i.e. the vehicle body angle during template manufacture;

θ′ = tan⁻¹((IM - JK)/AH)

where θ′ is the included angle between AH and MK, i.e. the vehicle body angle during real-time positioning.
The inclination angle of the vehicle body in the designated direction can be calculated by the above formula both during template manufacture and during real-time positioning. According to the above definition, the vehicle body α angle is the left-right inclination angle and the β angle is the front-back inclination angle. When the left-right inclination angle is calculated, θ in the formula corresponds to the vehicle body α angle during template manufacture and θ′ corresponds to the vehicle body α angle during real-time positioning, and the deflection angles, specifically αΔ, βΔ and γΔ, are obtained through (θ′ - θ).
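The θ/θ′ relation above can be sketched numerically. This is a minimal illustration, not the patent's implementation; the function name, distance readings and baseline value are all assumptions:

```python
import math

def body_angle(d_left: float, d_right: float, baseline: float) -> float:
    """Deflection angle (radians) of the body in one direction, from two
    laser distance readings separated by a known baseline, mirroring the
    patent's theta = arctan((HG - AC) / AH) form."""
    return math.atan((d_left - d_right) / baseline)

# Template manufacture (sensors fixed): distances HG, AC; baseline AH.
theta = body_angle(1.20, 1.10, 0.50)        # angle between AH and GC
# Real-time positioning, after the sensors re-approach the body: IM, JK.
theta_prime = body_angle(1.18, 1.12, 0.50)  # angle between AH and MK

delta = theta_prime - theta  # per-direction deflection, e.g. alpha-delta
```

The same subtraction applied to the other measurement directions would yield the βΔ and γΔ components.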
Example: assuming the station is the camera station on the right side of the roof, the change of the roof feature position around the Y-axis direction (the left-right direction of the vehicle body) is calculated from the camera image to be K, and the control center controls the left-right adjusting device 4 of the roof right sensor to move by the displacement K.
Similarly, if the change of the roof feature position around the X-axis direction (the front-back direction of the vehicle body) is calculated from the camera image to be L, the control center controls the corresponding adjusting device of the roof right sensor to move by the displacement L. The other stations work in the same way.
The 6 fixed cameras respectively collect characteristic point information, and the 7 ranging sensors measure distances.
It is known that: overhead sensors 12 and 6 are spaced H1 apart; overhead sensors 6 and 15 are spaced H2 apart; the right-side sensors 30 and 24 are at equal heights and spaced H3 apart; the left-side sensors 36 and 42 are spaced H4 apart.

The sensor template set values are: H12, H6, H15, H30, H24, H36, H42.

The values measured by the sensors during template manufacture are: h12, h6, h15, h30, h24, h36, h42.

The values measured after the sensors correct their positions during real-time measurement are: h12-1, h6-1, h15-1, h30-1, h24-1, h36-1, h42-1.
When the vehicle body is in place, template manufacture starts: the distance values measured by the 7 sensors are fed back to the control center and the magnification is calculated.
And photographing by the camera to record the characteristic point information.
The vehicle body is put in place again, and real-time measurement is started;
the overhead sensors measure the distance values h12-1 and h6-1, calculate the magnification and send it to the control center.
The top two cameras calculate the offset angle in the Z-axis direction and the in-plane offset of the XY coordinate system and send data to the control center.
The control center adjusts the position of the sensor according to the camera parameters.
The offset angles of the vehicle body around the X-axis direction and the Y-axis direction are calculated by means of the top three sensors, and the rotation angles of the vehicle body in the directions of three X, Y and Z coordinate axes are obtained.
The angle calculation formulas during template manufacture are:

Rotation angle around the X-axis direction: α0 = tan⁻¹((h15 - h6)/H2)

Rotation angle around the Y-axis direction: β0 = tan⁻¹((h12 - h6)/H1)

The angle calculation formulas during real-time positioning are:

Rotation angle around the X-axis direction: α1 = tan⁻¹((h15-1 - h6-1)/H2)

Rotation angle around the Y-axis direction: β1 = tan⁻¹((h12-1 - h6-1)/H1)

βΔ is obtained through (β1 - β0), and αΔ through (α1 - α0).
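As a worked sketch of these formulas (the baselines H1 and H2 and every sensor reading below are assumed values for illustration, not figures from the patent):

```python
import math

# Hypothetical baselines (metres): H1 between overhead sensors 12 and 6,
# H2 between overhead sensors 6 and 15.
H1, H2 = 0.80, 0.60

def rotation_angles(h12, h6, h15):
    """Body rotation about X (alpha) and Y (beta), in radians, from the
    three overhead sensor distances, following the patent's formulas."""
    alpha = math.atan((h15 - h6) / H2)  # around the X-axis direction
    beta = math.atan((h12 - h6) / H1)   # around the Y-axis direction
    return alpha, beta

# Template-manufacture readings vs. real-time readings (assumed values).
a0, b0 = rotation_angles(1.000, 1.010, 1.040)
a1, b1 = rotation_angles(1.005, 1.012, 1.030)

alpha_delta = a1 - a0  # used to correct the punching point and posture
beta_delta = b1 - b0
```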
The corrected value of the sensor position is transmitted to the control center to be used as a magnification correction value for the camera.
It is to be understood that the exemplary embodiments described herein are illustrative and not restrictive. While one or more embodiments of the present invention have been described with reference to the accompanying drawings, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the following claims.

Claims (10)

1. A method for detecting the relevant surface of an object to be detected, characterized in that: a vehicle body is conveyed by a crane onto the flat plates of a production line fitted with limiting devices; when the vehicle body travels along the line to the visual detection position, a limiting device arranged below stops the flat plate; a vision system receives the stop signal and then presents an operation selection to the user, and performs single-station or full-station operation according to the user's choice; before the single-station and full-station real-time positioning operations, a preparation flow for positioning and punching must be carried out for each station, namely a distortion calibration flow, an N-point calibration flow, a magnification acquisition flow and a template manufacturing flow; in the single-station and full-station real-time positioning flows, the position of the laser sensor is adjusted close to its template-manufacturing position, so that accurate laser sensor values are obtained; each preparation flow uses its corresponding station detection component to acquire images from the camera and measured values from the laser sensor, from which the offset angles of the vehicle body are calculated and converted into the coordinate frame of the images acquired by that camera; the physical positions of the feature points are further calculated from the images acquired by the cameras of the station detection components, and from the image feature information together with the acquired three-dimensional deflection angle information of the vehicle body during template manufacture and real-time positioning, the position information and the punching posture of the punching point are accurately calculated, thereby realizing accurate positioning and punching.
2. The method for detecting the correlation surface of the measured object according to claim 1, wherein the single-station manual operation process comprises distortion calibration, N-point calibration of the punching robot, magnification acquisition, template preparation and real-time positioning operation, and the specific implementation process comprises the following steps:
if the user selects distortion calibration, a camera distortion calibration process is executed, and distortion calibration files under different angles of the characteristic surface are obtained by executing the camera distortion calibration process, wherein the files can be used for correcting image distortion when the characteristic surface and the camera surface form different angles;
if the user selects 'punching robot N point calibration', executing a punching robot N point calibration flow, and executing the punching robot N point calibration flow to obtain an N point calibration file so as to realize conversion of image coordinates into physical coordinates under a punching robot coordinate system;
if the user selects 'magnification acquisition', executing a magnification acquisition process, acquiring a first-order polynomial relation function of the height of the feature plane and the image magnification by executing the magnification acquisition process, and realizing image coordinate conversion of the feature plane at different heights according to the function so as to acquire accurate physical coordinates of the feature points;
if the user selects 'template making', executing a template making process, and obtaining the position information of the image processing characteristic points and the position information of the punching points on the punching surface acquired by the camera by executing the template making process, thereby obtaining the position relation between the characteristic points and the punching points;
and if the user selects the real-time positioning, executing a single-station real-time positioning process, and positioning the currently selected station in real time to obtain the position information of the punching point through the single-station real-time positioning process so as to control the punching robot to perform punching operation.
3. The method for detecting the correlation surface of the measured object according to claim 2, wherein the distortion calibration process comprises the following steps:
and controlling the checkerboard calibration plate to the initial position. Placing the calibration plate on an object to be detected, namely the position close to the detection height of the detection characteristic point on the automobile, and controlling the laser sensor to acquire height information of each position after the calibration plate is placed;
calculating the offset angle between the calibration plate and the camera surface according to the acquired height information of the laser sensor;
if the angle meets the requirement, triggering a camera to shoot and acquire an image;
carrying out distortion calibration on the collected image to generate a distortion calibration file;
and after the distortion calibration file is generated, the distortion calibration file is loaded on the image acquired by the camera for distortion correction.
4. The method of claim 3, wherein calculating the offset angle between the calibration plate and the camera surface according to the acquired height information of the laser sensors further comprises: three laser sensors are installed for the distortion calibration process, one on each of the left-rear and right-rear sides of the calibration plate, namely the top left ranging sensor and the top right ranging sensor, and one directly in front, namely the top front ranging sensor; the distance between the top left ranging sensor and the top right ranging sensor is denoted H1, and the distance between the top right ranging sensor and the top front ranging sensor is denoted H2; the height obtained by the top left ranging sensor is denoted h1, the height obtained by the top right ranging sensor is denoted h2, and the height obtained by the top front ranging sensor is denoted h3. The offset angles obtainable from the three heights comprise a left-right offset angle and a front-back offset angle relative to the camera face, where the left-right offset angle is denoted θ1 and the front-back offset angle is denoted θ2. The calculation formulas are as follows:
θ1 = tan⁻¹((h1 - h2)/H1)

θ2 = tan⁻¹((h2 - h3)/H2)
judging whether the obtained θ1 and θ2 are stable within a set threshold range; if so, the checkerboard calibration plate is considered to be in a stable state; otherwise it is considered still in motion, in which case it is judged whether the number of angle measurements exceeds a set value; if it does, the distortion calibration operation is ended and the user is prompted that the calibration plate is not stable and must be placed again; otherwise θ1 and θ2 continue to be measured;
distortion calibration is carried out for the left-right and front-back deflection angles at 1° intervals within the range of ±10°; when the left-right deflection angle is calibrated, the front-back deflection angle is kept at 0°, and the checkerboard calibration plate is controlled to deviate in steps from -10° to 10°, distortion calibration being performed at each angle; the front-back deflection angle is calibrated by the same process. Whether the currently measured angle is within the required range is judged according to the current calibration step: if distortion calibration at a left-right deflection angle of -10° is in progress, it is judged whether θ1 is within the threshold range around -10° set by the system while θ2 remains 0°; if these conditions are met, the angle is considered within the required range and the next processing is carried out; otherwise, the motion of the checkerboard calibration plate is controlled to adjust its angle until the angle requirement is met.
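The plate-angle computation and the stability test described above can be sketched as follows. The function names, the stability tolerance and the sampling scheme are illustrative assumptions, not the patent's implementation:

```python
import math

def plate_angles(h1, h2, h3, H1, H2):
    """Left-right (theta1) and front-back (theta2) offset of the
    checkerboard plate, in radians, from the three ranging-sensor
    heights and the two sensor baselines."""
    theta1 = math.atan((h1 - h2) / H1)
    theta2 = math.atan((h2 - h3) / H2)
    return theta1, theta2

def is_stable(samples, tol=math.radians(0.1)):
    """Plate considered stationary when successive (theta1, theta2)
    samples stay within a small threshold (tolerance value assumed)."""
    return all(
        abs(a1 - a0) <= tol and abs(b1 - b0) <= tol
        for (a0, b0), (a1, b1) in zip(samples, samples[1:])
    )
```

In use, the calibration loop would keep sampling `plate_angles` until `is_stable` holds, or abort with a re-placement prompt once the measurement count exceeds the set value.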
5. The method according to claim 2, wherein the magnification acquisition comprises acquiring the magnification within ±10 cm of the standard distance h between the measured surface of the vehicle body and the laser sensor, a magnification-versus-distance fitting function being obtained by fitting.
6. The method of claim 5, wherein the step of acquiring the magnification within ±10 cm of the standard distance h between the measured surface of the vehicle body and the laser sensor and fitting the magnification-versus-distance function comprises: the measurement runs from h-10 cm to h+10 cm at intervals of 5 mm; the calibration plate is controlled to reach h-10 cm, the distance from the upper laser sensor to the calibration plate is then acquired, and the left-right deflection angle θ1′ and the front-back deflection angle θ2′ of the calibration plate are obtained using the angle calculation method of the distortion calibration process;
judging whether theta 1 'and theta 2' are stable or not, if not, indicating that the calibration plate is not completely static, judging whether the measurement times exceed a set value or not, if so, ending the amplification rate acquisition, waiting for the operation of a user, and if not, continuously acquiring the distance from the laser sensor to the calibration plate and calculating the angle;
if θ1′ and θ2′ are stable, it is judged whether they are within the required range; when the camera and the measured surface are exactly perpendicular and the acquired image has been distortion-corrected to remove the lens distortion, no distortion remains at any position of the image, so θ1′ and θ2′ are required to be about 0° in order to remove the influence on the magnification of distortion introduced by the angle; if the conditions are met, the camera is triggered to take a picture, and the image is obtained and stored; if θ1′ and θ2′ are not within the required range, the calibration plate angle is adjusted and θ1′ and θ2′ are obtained again;
judging whether the images at all height positions have been acquired; if so, all the images at all heights are processed to obtain the pixel spacing of checkerboard squares at the same positions on the calibration plate, the squares in the middle of the image being selected because the distortion there is smallest and the resulting pixel-spacing error is smallest; the ratio between pixel spacing and physical size is then obtained from the actual physical size of the squares, namely the image magnification at the calculated height; if the images at all height positions have not been acquired, the height of the calibration plate is adjusted to acquire the image at the next height.
After the magnification at each height is obtained, first-order polynomial fitting is performed against the corresponding heights to obtain the fitting coefficients k and b, namely m = k × h + b, where m is the magnification and h is the distance value measured by the laser sensor; the obtained fitting coefficients k and b are stored.
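The first-order fit m = k × h + b can be sketched without any numerical library. The synthetic heights and magnifications below are illustrative; the patent does not give numeric values:

```python
def fit_magnification(heights, mags):
    """Least-squares first-order fit m = k*h + b of magnification
    against laser-measured distance (closed-form normal equations)."""
    n = len(heights)
    sh = sum(heights)
    sm = sum(mags)
    shh = sum(h * h for h in heights)
    shm = sum(h * m for h, m in zip(heights, mags))
    k = (n * shm - sh * sm) / (n * shh - sh * sh)
    b = (sm - k * sh) / n
    return k, b

# Synthetic sweep: h-10cm .. h+10cm in 5 mm steps, assuming h = 1.0 m,
# with magnification shrinking linearly with distance for illustration.
hs = [0.90 + 0.005 * i for i in range(41)]
ms = [0.20 - 0.05 * h for h in hs]
k, b = fit_magnification(hs, ms)
```

Because the synthetic data lies exactly on a line, the fit recovers k ≈ -0.05 and b ≈ 0.20; in practice the stored k and b would come from the measured pixel-spacing ratios.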
7. The method of claim 2, wherein the template preparation comprises: the system judges whether the punching robot switches the tool center point TCP or not, namely, the system switches the position of the TCP of the punching robot from the marking head TCP to the punching head TCP, and after the switching is finished, the punching robot carries out position movement and posture adjustment in a workpiece coordinate system of the punching robot by taking the punching head TCP as a standard;
if not, the system firstly controls the punching robot to switch the tool TCP, and controls each laser sensor to acquire the distance from the vehicle body after switching is finished; if yes, each laser sensor acquires the distance from the vehicle body;
the system judges whether the distance acquired by each laser sensor is stable, namely, each distance value is smaller than a set threshold value;
if not, the system counts the measurement times and judges whether the times exceed a set value, if not, the system continues to judge whether the distance acquired by each laser sensor is stable, if so, the system prompts that the vehicle body is not placed stably, and returns to judge whether to click a button for starting to manufacture the template;
if yes, calculating deflection angles of the vehicle body in all directions according to the distance from the vehicle body obtained by each laser sensor; judging whether the obtained deflection angle of each direction is within a set range;
if not, prompting that the position of the vehicle body is abnormal, needing the user to reset the position of the vehicle body, and returning to judge whether to click a button for starting to manufacture the template for execution;
if yes, executing a vehicle body feature obtaining process;
the system judges whether the vehicle body characteristic acquisition is abnormal or not;
if not, the system controls the punching robot to enable the punching head to move to the punching position; the system respectively records the intersection point coordinates, the straight line angles and the punching point position coordinates of the feature points; storing the position data of the corresponding laser sensor, and returning to judge whether to click a button for starting to manufacture the template for execution;
if so, prompting the user to confirm the position of the vehicle body according to the abnormal type, and ending the template making process.
8. The method for detecting the correlation surface of the measured object according to claim 7, wherein the process for acquiring the vehicle body characteristics in the template manufacturing comprises the following steps:
the system controls a camera to acquire a vehicle body characteristic surface image;
according to the vehicle body deflection angle, the system calls a corresponding distortion calibration file to carry out image distortion correction;
according to the characteristics of the vehicle body metal plates, the system matches characteristic templates at seams among the metal plates;
the system judges whether the characteristics are successfully matched, if not, the system prompts that the characteristics of the vehicle body are not found, the system asks for confirming the position of the vehicle body, and the process is ended; if so, correcting the positions of the found features, and then respectively searching two straight lines at the joint;
the system judges whether the two straight lines are searched successfully, if not, the system prompts that a seam at the characteristic position of the vehicle body is not found, and the system asks for confirming the position of the vehicle body, and the process is finished; if yes, solving an intersection point of the two found straight lines, and carrying out coordinate conversion on the intersection point to obtain a physical coordinate.
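The intersection-and-conversion step at the end of this claim can be sketched as follows. The line-intersection formula is standard 2D geometry; the pixel-to-physical conversion via a magnification factor is an assumption based on claim 6, and both helper names are hypothetical:

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1, p2 with the line through
    p3, p4 (image coordinates); returns None for parallel lines."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-12:
        return None
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

def to_physical(px, magnification, origin=(0.0, 0.0)):
    """Pixel -> physical coordinates using a height-dependent
    magnification (origin offset assumed)."""
    return (origin[0] + px[0] * magnification,
            origin[1] + px[1] * magnification)
```

For example, the seam lines through (0,0)-(2,2) and (0,2)-(2,0) intersect at (1, 1), which `to_physical` would then scale into the robot's workpiece frame.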
9. The method for detecting the relevant surface of the measured object according to claim 2, wherein the real-time positioning and punching process comprises the following steps:
carrying out a vehicle body angle acquisition process;
the system judges whether the acquisition of the vehicle body angle is successful; if not, prompting abnormal positioning, returning to continuously judging whether to press a start real-time positioning button. If yes, executing a vehicle body feature obtaining process;
the system judges whether the vehicle body characteristic acquisition is successful or not; if not, prompting that the vehicle body characteristic is not found or prompting that the seam at the position of the vehicle body characteristic is not found, asking to confirm the position of the vehicle body, returning to continuously judge whether to click a start real-time positioning button or not; if so, calculating to obtain the position of a new punching point according to the characteristic point, the punching point, the deflection angle of the vehicle body in each direction and the coordinates of the characteristic point positioned in real time in the template manufacturing process;
the system judges whether the vehicle body offset position is within the set range; if not, the system prompts that the vehicle body position is abnormal and the user must reposition the vehicle body, and returns to continue judging whether the start-real-time-positioning button has been clicked; if so, the punching robot is controlled to adjust its posture and move above the new punching point, and the laser sensor acquires the distance to the new punching point in real time;
and judging whether the distance reaches a set range. If not, the punching robot continues to approach the new punching point for a certain distance, and then whether the distance reaches the set range is judged; if yes, the system controls the punching robot to punch. And after the real-time positioning is finished, returning to continuously judging whether the real-time positioning starting button is clicked or not.
10. The method for detecting the relevant surface of the object to be detected according to claim 9, wherein the 6 stations in the full-station automatic operation flow execute the single-station real-time positioning process in sequence for the top, the front sides and the rear sides of the vehicle body; the stations on the left and right sides of the roof execute the single-station real-time positioning process first, and after single-station real-time positioning on both sides of the roof is completed, the three angles of the vehicle body in the three-dimensional directions relative to the template-manufacturing vehicle body are obtained; the left-right deflection angle and the rotation angle in the camera plane can be used as the rotation angles required in the side single-station real-time positioning processes.
CN202210715537.0A 2022-06-22 2022-06-22 Method for detecting relevant surface of measured object Pending CN115493489A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210715537.0A CN115493489A (en) 2022-06-22 2022-06-22 Method for detecting relevant surface of measured object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210715537.0A CN115493489A (en) 2022-06-22 2022-06-22 Method for detecting relevant surface of measured object

Publications (1)

Publication Number Publication Date
CN115493489A true CN115493489A (en) 2022-12-20

Family

ID=84466721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210715537.0A Pending CN115493489A (en) 2022-06-22 2022-06-22 Method for detecting relevant surface of measured object

Country Status (1)

Country Link
CN (1) CN115493489A (en)

Cited By (2)

* Cited by examiner, † Cited by third party

Publication number Priority date Publication date Assignee Title
CN116045855A (en) * 2023-02-20 2023-05-02 广东九纵智能科技有限公司 Novel multi-axis linkage visual inspection equipment and station consistency calibration method thereof
CN116045855B (en) * 2023-02-20 2023-08-08 广东九纵智能科技有限公司 Multi-axis linkage visual inspection equipment and station consistency calibration method thereof

Similar Documents

Publication Publication Date Title
CN109665307B (en) Work system, work execution method for article, and robot
JP4191080B2 (en) Measuring device
US7532949B2 (en) Measuring system
US6470271B2 (en) Obstacle detecting apparatus and method, and storage medium which stores program for implementing the method
US20150202776A1 (en) Data generation device for vision sensor and detection simulation system
EP1637836A1 (en) Device and method of supporting stereo camera, device and method of detecting calibration, and stereo camera system
CN109483539A (en) Vision positioning method
JP2007160486A (en) Off-line programming device
TWI724977B (en) Calibration apparatus and calibration method for coordinate system of robotic arm
CN109191527B (en) Alignment method and device based on minimum distance deviation
CN115493489A (en) Method for detecting relevant surface of measured object
CN110370316A (en) It is a kind of based on the robot TCP scaling method vertically reflected
CN113211431A (en) Pose estimation method based on two-dimensional code correction robot system
CN111609847A (en) Automatic planning method of robot photographing measurement system for sheet parts
CN115841516A (en) Method and device for modeling dynamic intrinsic parameters of camera
CN108871194B (en) Visual sensor work functions quick recovery method
CN109751987A (en) A kind of vision laser locating apparatus and localization method for mechanical actuating mechanism
CN115493490A (en) Detection and positioning method for automobile assembly
CN115493488A (en) System for detecting relevant surface of measured object
CN115183677A (en) Detection positioning system for automobile assembly
CN110039520B (en) Teaching and processing system based on image contrast
WO2022075303A1 (en) Robot system
CN110672009B (en) Reference positioning, object posture adjustment and graphic display method based on machine vision
CN113715935A (en) Automatic assembling system and automatic assembling method for automobile windshield
US20230306634A1 (en) Target detection method and detection device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination