CN115183677A - Detection positioning system for automobile assembly

Detection positioning system for automobile assembly

Info

Publication number
CN115183677A
Authority
CN
China
Prior art keywords: sensor, vehicle body, punching, station, positioning
Legal status: Pending
Application number
CN202210715543.6A
Other languages
Chinese (zh)
Inventor
陈浙泊
林建宇
陈一信
潘凌锋
陈龙威
叶雪旺
余建安
张一航
陈镇元
吴荻苇
Current Assignee
Research Institute of Zhejiang University Taizhou
Original Assignee
Research Institute of Zhejiang University Taizhou
Application filed by Research Institute of Zhejiang University Taizhou
Priority to CN202210715543.6A
Publication of CN115183677A

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/02 Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness


Abstract

The embodiment of the invention discloses a detection and positioning system for automobile assembly, comprising a top detection component, a limiting device, a left detection component and a right detection component. The vehicle body is first loaded onto a production-line flat plate: a crane transfers the body, in sequence, onto the flat-plate line equipped with the limiting device. When the body reaches the visual detection position along the production line, the limiting device mounted below stops the flat plate, the vision system receives the signal, and the user then selects the operating mode; the vision system performs single-station or full-station operation according to that selection. Before single-station and full-station real-time positioning, a preparation flow for the positioning and punching of each station is carried out, namely a distortion calibration flow, an N-point calibration flow, a magnification acquisition flow and a template making flow. During template making and real-time positioning, the position and punching attitude of each punching point are calculated accurately from the image feature information and the acquired three-dimensional deflection angles of the vehicle body.

Description

Detection positioning system for automobile assembly
Technical Field
The invention belongs to the technical field of automobile production, and particularly relates to a detection positioning system for automobile assembly.
Background
An autonomous vehicle is an intelligent automobile that achieves unmanned driving through a computer system. It relies on the cooperation of artificial intelligence, visual computing, radar, monitoring devices and a global positioning system, so that the computer can operate the motor vehicle automatically and safely without active human operation.
With the rise of autonomous-driving technology, more and more enterprises are investing in its research and development. Technologies such as artificial intelligence, visual monitoring and computing, and radar are now maturing, and integrating them effectively yields an autonomous driving system. Installing such a system in a conventionally driven automobile converts it into an autonomous vehicle, and after long-term testing the system can drive learned routes safely and reliably.
An autonomous driving system comprises multiple hardware devices such as radars and a vision system. In current refitting factories, these devices are installed after holes are drilled manually at designated mounting positions on the body of the conventionally driven automobile. As a result, modified or newly built autonomous vehicles suffer from low precision, low efficiency and poor repeatability in the positioning of these manually drilled holes.
Disclosure of Invention
In view of the above technical problems, the invention provides a detection and positioning system for automobile assembly, which achieves high-precision positioning and punching of the hole positions on the body of a vehicle being modified for autonomous driving, and enables automatic, intelligent production on the whole modification line.
In order to solve the technical problems, the invention adopts the following technical scheme:
a detecting and positioning system for automobile assembly is characterized by comprising a top detecting part, a limiting device, a left detecting part and a right detecting part, wherein a vehicle body is conveyed to a production line flat plate firstly, the vehicle body is conveyed to a flat plate production line with the limiting device in sequence through a crane, when the vehicle body runs to a visual detection position through the production line, the flat plate is controlled by the limiting device arranged below to stop, a signal is acquired by a visual system to further perform user operation selection, the visual system performs single-station operation or full-station operation according to the selection of a user, a preparation flow of positioning and punching of each station is required to be performed before the single-station and full-station real-time positioning operation, namely a distortion calibration flow, an N-point calibration flow, an amplification rate acquisition flow and a template manufacturing flow, in the single-station real-time positioning flow and the full-station real-time positioning flow, the method comprises the steps of adjusting the position of a laser sensor to approach the position of a template during manufacturing through the position of a characteristic point in template manufacturing, so as to obtain an accurate laser sensor value, respectively using a detection part corresponding to each station to obtain and process an image and a laser sensor measurement value in a preparation process of positioning and punching of each station, firstly controlling the laser sensor in a top detection part to approach the position in a single-station real-time positioning process and a full-station real-time positioning process so as to obtain an accurate laser sensor value, further calculating a deflection angle of a vehicle body on a non-camera surface, then calculating the characteristics of the image obtained by a camera and the image characteristics of the template during manufacturing so as to obtain a vehicle body deflection angle of a camera surface, and using the corresponding deflection angle to process distortion correction and image coordinate conversion of each station; the physical positions of the characteristic points are further calculated through images acquired by the cameras of the station detection components, and the position information and the punching posture of the punching point are accurately calculated according to the characteristic information of the images and the acquired three-dimensional deflection angle information of the vehicle body during template manufacturing and real-time positioning, so that accurate positioning punching is realized.
Preferably, the top detection component comprises a top right side detection camera, a first top right side light source, a second top right side light source, a top right side sensor left and right adjusting device, a top right side sensor front and back adjusting device, a top right side distance measuring sensor, a top left side detection camera, a first top left side light source, a second top left side light source, a top left side sensor left and right adjusting device, a top left side sensor front and back adjusting device, a top left side distance measuring sensor, a top front side sensor left and right adjusting device, a top front side sensor front and back adjusting device and a top front side distance measuring sensor.
Preferably, the limiting device comprises a right front wheel front-rear limit, a right front wheel left-right limit, a left front wheel front-rear limit, a left front wheel left-right limit, a right rear wheel left-right limit, a right rear wheel front-rear limit, a left rear wheel front-rear limit and a left rear wheel left-right limit.
Preferably, the right side detecting component comprises a right front side distance measuring sensor, a right front side sensor front-back adjusting device, a right front side sensor up-down adjusting device, a right front side detecting camera, a first right front side light source, a second right front side light source, a right rear side distance measuring sensor, a right rear side sensor front-back adjusting device, a right rear side sensor up-down adjusting device, a right rear side detecting camera, a first right rear side light source and a second right rear side light source.
Preferably, the left side detecting part comprises a left front side distance measuring sensor, a left front side sensor front and back adjusting device, a left front side sensor up and down adjusting device, a left front side detecting camera, a first left front side light source, a second left front side light source, a left rear side distance measuring sensor, a left rear side sensor up and down adjusting device, a left rear side sensor front and back adjusting device, a left rear side detecting camera, a first left rear side light source and a second left rear side light source.
Preferably, the camera distortion calibration process is used for obtaining distortion calibration files of the characteristic surface at different angles, and the distortion calibration files are used for correcting image distortion when the characteristic surface and the camera surface form different angles.
Preferably, the punching robot N-point calibration process is used to obtain an N-point calibration file, and implement conversion of image coordinates into physical coordinates in a punching robot coordinate system.
Preferably, the magnification acquisition process is configured to obtain a first-order polynomial relation function of the feature surface height and the magnification, and realize image coordinate conversion of the feature surface at different heights according to the relation function, so as to obtain accurate physical coordinates of the feature points.
Preferably, the template making process is configured to obtain position information of the image processing feature point acquired by the camera and position information of the punching point on the punching surface, and accordingly obtain a positional relationship between the feature point and the punching point.
Preferably, the full-station automatic operation flow sequentially executes the single-station real-time positioning process on a total of six stations covering the top, front sides and rear sides of the vehicle body, starting with the stations on the left and right sides of the roof. After the single-station real-time positioning of both roof-side stations is completed, the three angles of the vehicle body relative to the body used for template making in the three-dimensional directions are obtained, and the left-right deflection angle, the front-back deflection angle and the rotation angle in the camera plane supply the rotation angles required by the single-station real-time positioning of the side stations.
The invention has the following beneficial effects:
(1) In the system scheme, the machine vision technology is combined with a high-precision sensor to determine the position relation of positioning points of different curved surfaces, so that the high-precision positioning of the characteristic-free curved surface of the vehicle body and the punching operation of the punching robot are realized.
Image features are located by machine vision to obtain the positions of the feature points and the rotation angle of the feature surface within the camera plane, while the front-back and left-right deflection angles of the vehicle-body feature surface are obtained from several high-precision sensors, so that the rotation of the feature surface in all three dimensions is acquired.
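As a sketch only (this composition is not spelled out in the patent), the three measured angles can be assembled into a single rotation of the feature surface; the axis convention and rotation order below are assumptions made for illustration.

    import numpy as np

    def body_rotation(theta_lr_deg, theta_fb_deg, theta_plane_deg):
        """Compose the three measured angles into one rotation of the feature surface.
        Assumed axes: X points forward, Y to the left, Z up; the left-right tilt is a
        roll about X, the front-back tilt is a pitch about Y, and the angle found in
        the camera image is a yaw about Z."""
        roll, pitch, yaw = np.deg2rad([theta_lr_deg, theta_fb_deg, theta_plane_deg])
        rx = np.array([[1, 0, 0],
                       [0, np.cos(roll), -np.sin(roll)],
                       [0, np.sin(roll),  np.cos(roll)]])
        ry = np.array([[ np.cos(pitch), 0, np.sin(pitch)],
                       [0, 1, 0],
                       [-np.sin(pitch), 0, np.cos(pitch)]])
        rz = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                       [np.sin(yaw),  np.cos(yaw), 0],
                       [0, 0, 1]])
        return rz @ ry @ rx

    # example: 1.5 deg left-right tilt, -0.8 deg front-back tilt, 2.0 deg in-plane rotation
    R = body_rotation(1.5, -0.8, 2.0)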
(2) Positioning of featureless curved surfaces and punching by the punching robot: using the visual positioning of the feature surface, the three-dimensional rotation angles of the vehicle-body feature surface and the relative position relationship between the feature points on the feature surface and the punching points, the position and punching attitude of the punching points are obtained during real-time positioning detection, and the punching robot is finally controlled to perform the punching operation.
(3) In real-time detection, the laser sensor approaches the position of a camera surface during template manufacturing.
Coarse measurement by the laser sensor is combined with coarse visual positioning. In real-time positioning detection, the height of the detected feature surface is obtained by the laser sensor, and the rotation angle and positional offset of the feature points relative to those recorded during template making are obtained by the vision system. The coarse visual positioning result is used to move the laser sensor to the measuring position it occupied during template making, after which the laser sensor performs a fine measurement. With this fine measurement result (an accurate height value), the feature information of the feature surface is then located precisely by vision.
(4) A high-precision positioning method for a curved surface of a vehicle body during planar three-dimensional rotation and translation.
An accurate height value of the measured feature surface is obtained by the laser sensor. Image distortion is corrected using the distortion correction files for the corresponding heights and angles. The magnifications measured at different heights are fitted to obtain a height-magnification function, and during real-time detection and positioning the magnification is calculated from the measured height. Through this magnification, the pixel coordinates of the feature points in the image of the surface detected in real time are converted to the pixel coordinate standard of the template-making image. N-point calibration between the punching robot and the camera yields an N-point calibration file, and after the acquired vehicle-body feature image passes through this coordinate conversion the physical translation of the feature points is obtained. Together with the three-dimensional rotation angles, this translation is used to calculate the positions of the punching points on the vehicle-body curved surface.
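A condensed sketch of that coordinate chain, assuming Python with NumPy; every identifier below (the magnification coefficients k and b, the 2x3 N-point affine matrix, the stored feature-to-punch offset, the image centre) is an illustrative name or value, not one taken from the patent.

    import numpy as np

    def pixel_to_punch_point(uv, h_now, h_ref, k, b,
                             n_point_affine, feature_to_punch_offset, R):
        """Sketch of the chain from a distortion-corrected feature-point pixel to the
        punch-point position, under assumed conventions:
          uv                      : feature-point pixel (u, v) in the corrected image
          h_now, h_ref            : laser-measured heights now / at the reference
                                    (N-point calibration) image
          k, b                    : fitted magnification coefficients, m = k*h + b
          n_point_affine          : 2x3 pixel -> robot-XY matrix from N-point calibration
          feature_to_punch_offset : 3D offset from feature point to punch point,
                                    stored during template making
          R                       : 3x3 body rotation measured in real time
        """
        cx, cy = 2048.0, 1536.0                      # assumed image centre
        m_now, m_ref = k * h_now + b, k * h_ref + b
        # rescale the pixel to the magnification standard of the reference image
        u = cx + (uv[0] - cx) * m_ref / m_now
        v = cy + (uv[1] - cy) * m_ref / m_now
        # pixel -> robot physical XY through the N-point affine transform
        x, y = n_point_affine @ np.array([u, v, 1.0])
        # step from the visible feature point to the featureless punch point,
        # rotating the stored offset by the measured body rotation
        return np.array([x, y, 0.0]) + R @ feature_to_punch_offset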
(5) The multi-station positioning and punching of a plurality of different surfaces of the vehicle body are realized without human intervention.
The top-surface features of the vehicle body are located first: the in-plane rotation angle of the body is obtained from the top-surface feature image, the left-right rotation of the feature surface in three-dimensional space is obtained from the laser sensors at the left and right roof stations, and the front-back rotation is obtained from the laser sensors at the front and rear roof stations. The rotation of the vehicle body about all three axes in space is thus acquired, enabling positioning of the featureless punching points corresponding to the top surface. The side feature surfaces are then located using these three-dimensional rotation angles, yielding the positions and punching attitudes of the featureless punching points on the sides.
Drawings
FIG. 1 is a schematic structural diagram of a top detection component and a limiting component of a detection positioning system for automobile assembly according to an embodiment of the invention;
FIG. 2 is a schematic structural diagram of a right side detection component of the detection positioning system for automobile assembly according to the embodiment of the invention;
FIG. 3 is a schematic structural diagram of a left side detecting component of the detecting and positioning system for automobile assembly according to the embodiment of the invention;
FIG. 4 is a general flow chart illustrating the implementation of the system for detecting and locating vehicle assembly according to the embodiment of the present invention;
FIG. 5 is a flow chart of a single-station manual operation of the inspection positioning system for vehicle assembly according to the embodiment of the present invention;
FIG. 6 is a flow chart of the full-station automatic operation of the detecting and positioning system for automobile assembly according to the embodiment of the invention;
FIG. 7 is a schematic view of the deflection angles in the detecting and positioning system for automobile assembly according to the embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
The invention provides a detection and positioning system for automobile assembly, comprising a top detection component, a limiting device, a left detection component and a right detection component. The vehicle body is first loaded onto a production-line flat plate: a crane transfers the body, in sequence, onto the flat-plate line equipped with the limiting device, ensuring that each time the body is placed on the plate its positional deviation is within +/-5 cm and its angular deviation within +/-5 degrees. When the body reaches the visual detection position along the production line, the limiting device mounted below stops the flat plate, the vision system receives the stop signal, and the user then selects the operating mode; the vision system performs single-station or full-station operation according to that selection. Before single-station and full-station real-time positioning, a preparation flow for the positioning and punching of each station is carried out, consisting of a distortion calibration flow, an N-point calibration flow, a magnification acquisition flow and a template making flow; in this preparation flow each station acquires and processes images and laser sensor measurements using its own detection component. In the single-station and full-station real-time positioning flows, the laser sensor is moved, according to the feature point positions recorded during template making, close to the position it occupied when the template was made, so that an accurate laser sensor reading is obtained; in particular, the laser sensors of the top detection component are first driven to those positions to obtain accurate readings, from which the deflection angles of the vehicle body on the non-camera planes are calculated, while the deflection angle of the body within the camera plane is calculated by comparing the features of the image captured by the camera with the image features recorded during template making. The corresponding deflection angles are applied in the distortion correction, laser-sensor positioning and image coordinate conversion of each station. The physical positions of the feature points are then calculated from the images acquired by the cameras of the station detection components, and the position and punching attitude of each punching point are calculated accurately from the image feature information and the three-dimensional deflection angles of the vehicle body acquired during template making and real-time positioning, thereby achieving accurate positioning and punching.
Further, in an embodiment of the present invention, referring to fig. 1 to 3, the top detection component includes a top right detection camera 1, a first top right light source 2, a second top right light source 3, a top right sensor left-right adjustment device 4, a top right sensor front-back adjustment device 5, a top right ranging sensor 6, a top left detection camera 7, a first top left light source 8, a second top left light source 9, a top left sensor left-right adjustment device 10, a top left sensor front-back adjustment device 11, a top left ranging sensor 12, a top front sensor left-right adjustment device 13, a top front sensor front-back adjustment device 14, and a top front ranging sensor 15. The top left detection camera 7 is mounted above the roof on the left side of the rear seats, facing vertically downward, at an effective measuring height of about 100 cm above the roof. The first top left light source 8 and the second top left light source 9 are mutually perpendicular: the first top left light source 8 is parallel to the front-back direction of the vehicle and the second top left light source 9 is parallel to the left-right direction; both point toward the centre of the camera field of view and are mounted about 30 cm above the roof. The top left ranging sensor 12 sits about 20 cm above the roof; the top left sensor left-right adjustment device 10 moves the top left sensor front-back adjustment device 11 left and right, and that device in turn moves the top left ranging sensor 12 forward and backward, so the sensor can be positioned anywhere in the roof plane. The whole sensor adjustment mechanism is located near the rear of the roof, close to the left-side vision system. The top right detection camera 1 is mounted above the roof on the right side of the rear seats, facing vertically downward, at an effective measuring height of about 100 cm. The first top right light source 2 and the second top right light source 3 are mutually perpendicular: the first top right light source 2 is parallel to the front-back direction of the vehicle and the second top right light source 3 is parallel to the left-right direction; both are mounted parallel to the roof, point toward the centre of the camera field of view, and sit about 30 cm above the roof.
The top right ranging sensor 6 sits about 20 cm above the roof; the top right sensor left-right adjustment device 4 moves the top right sensor front-back adjustment device 5 left and right, and that device in turn moves the top right ranging sensor 6 forward and backward, so the sensor can be positioned anywhere in the roof plane. The whole sensor adjustment mechanism is located near the rear of the roof, close to the right-side vision system. The top front ranging sensor 15 sits about 20 cm above the roof; the top front sensor left-right adjustment device 13 moves the top front sensor front-back adjustment device 14 left and right, and that device in turn moves the top front ranging sensor 15 forward and backward, so the sensor can be positioned anywhere in the roof plane. This sensor adjustment mechanism is located near the roof on the right side of the front row of the vehicle.
The limiting device comprises a right front wheel front-rear limit 16, a right front wheel left-right limit 17, a left front wheel front-rear limit 18, a left front wheel left-right limit 19, a right rear wheel left-right limit 20, a right rear wheel front-rear limit 21, a left rear wheel front-rear limit 22 and a left rear wheel left-right limit 23. The right front wheel front-rear limit 16 and the left front wheel front-rear limit 18 are positioned in front of the right and left front tires respectively and restrict forward drift of the automobile. The right rear wheel front-rear limit 21 and the left rear wheel front-rear limit 22 are positioned behind the rear tires and restrict backward drift. The right front wheel left-right limit 17 and the right rear wheel left-right limit 20 are positioned at the sides of the front and rear tires on the right of the automobile and restrict drift to the right. The left front wheel left-right limit 19 and the left rear wheel left-right limit 23 are positioned at the sides of the front and rear tires on the left and restrict drift to the left.
The right side detection component includes a right front ranging sensor 24, a right front sensor front-back adjustment device 25, a right front sensor up-down adjustment device 26, a right front detection camera 27, a first right front light source 28, a second right front light source 29, a right rear ranging sensor 30, a right rear sensor front-back adjustment device 31, a right rear sensor up-down adjustment device 32, a right rear detection camera 33, a first right rear light source 34 and a second right rear light source 35. The right front detection camera 27 is mounted to the right of the joint between the right front door and the vehicle body, at an effective measuring distance of about 100 cm and a height of more than 80 cm above the ground, with the camera face pointing left, parallel to the front-back direction of the vehicle. The first right front light source 28 and the second right front light source 29 are mutually perpendicular: the first right front light source 28 is parallel to the front-back direction of the vehicle and the second right front light source 29 is parallel to the up-down direction; both are mounted parallel to the side of the body, point toward the centre of the camera field of view, and sit about 30 cm from the right side of the vehicle. The right front ranging sensor 24 is about 20 cm from the sheet metal of the right front panel of the vehicle; the right front sensor up-down adjustment device 26 moves the right front sensor front-back adjustment device 25 up and down, and that device in turn moves the right front ranging sensor 24 forward and backward, so the sensor can be positioned anywhere in the plane of the right side of the vehicle. The whole sensor adjustment mechanism of the right front vision system is located near the head of the vehicle. The right rear detection camera 33 is mounted to the right of the joint between the right rear fender and the right rear bumper, at an effective measuring distance of about 100 cm and a height of more than 80 cm above the ground, with the camera face pointing left, parallel to the front-back direction of the vehicle. The first right rear light source 34 and the second right rear light source 35 are mutually perpendicular: the first right rear light source 34 is parallel to the front-back direction and the second right rear light source 35 is parallel to the up-down direction; both are mounted parallel to the side of the vehicle, point toward the centre of the camera field of view, and sit about 30 cm from the right side of the vehicle.
The right rear ranging sensor 30 is about 20 cm from the right-side panel sheet metal of the vehicle; the right rear sensor up-down adjustment device 32 moves the right rear sensor front-back adjustment device 31 up and down, and that device in turn moves the right rear ranging sensor 30 forward and backward, so the sensor can be positioned anywhere in the plane of the right side of the vehicle. The whole sensor adjustment mechanism of the right rear vision system is located near the rear of the vehicle.
The left side detection component includes a left front ranging sensor 36, a left front sensor front-back adjustment device 37, a left front sensor up-down adjustment device 38, a left front detection camera 39, a first left front light source 40, a second left front light source 41, a left rear ranging sensor 42, a left rear sensor up-down adjustment device 43, a left rear sensor front-back adjustment device 44, a left rear detection camera 45, a first left rear light source 46 and a second left rear light source 47. The left front detection camera 39 is mounted to the left of the joint between the left front door and the vehicle body, at an effective measuring distance of about 100 cm and a height of more than 80 cm above the ground, with the camera face pointing right, parallel to the front-back direction of the vehicle. The first left front light source 40 and the second left front light source 41 are mutually perpendicular: the first left front light source 40 is parallel to the front-back direction of the vehicle and the second left front light source 41 is parallel to the up-down direction; both are mounted parallel to the side of the body, point toward the centre of the camera field of view, and sit about 30 cm from the left side of the vehicle. The left front ranging sensor 36 is about 20 cm from the sheet metal of the left front panel of the vehicle; the left front sensor up-down adjustment device 38 moves the left front sensor front-back adjustment device 37 up and down, and that device in turn moves the left front ranging sensor 36 forward and backward, so the sensor can be positioned anywhere in the plane of the left side of the vehicle. The whole sensor adjustment mechanism of the left front vision system is located near the head of the vehicle. The left rear detection camera 45 is mounted to the left of the joint between the left rear fender and the left rear bumper, at an effective measuring distance of about 100 cm and a height of more than 80 cm above the ground, with the camera face pointing right, parallel to the front-back direction of the vehicle. The first left rear light source 46 and the second left rear light source 47 are mutually perpendicular: the first left rear light source 46 is parallel to the front-back direction and the second left rear light source 47 is parallel to the up-down direction; both are mounted parallel to the side of the vehicle, point toward the centre of the camera field of view, and sit about 30 cm from the left side of the vehicle.
The left rear ranging sensor 42 is about 20 cm from the left-side panel sheet metal of the vehicle; the left rear sensor up-down adjustment device 43 moves the left rear sensor front-back adjustment device 44 up and down, and that device in turn moves the left rear ranging sensor 42 forward and backward, so the sensor can be positioned anywhere in the plane of the left side of the vehicle. The whole sensor adjustment mechanism of the left rear vision system is located near the tail of the vehicle.
The detection and positioning system for automobile assembly is operated as follows. The machine-vision-based curved-surface positioning and punching system provides the positioning function for the relevant surfaces: when an object to be detected arrives at the positioning system, the positioning and punching of the positions on the relevant surfaces are achieved through laser-sensor positioning, machine-vision positioning and the known positional relationship between the relevant surfaces. Referring to fig. 4, the method comprises the following steps:
after the positioning system is powered on, a start-up initialization flow is executed first. It comprises: checking the hardware state, including the connection state of the punching robots on both sides, the cameras of all stations and the laser sensors; if all hardware is connected normally, the next operation proceeds, otherwise a hardware-connection alarm is raised and the user is prompted to service the hardware; confirming the position of the punching robot and, if it is not at the home position, returning it to zero along the homing trajectory; and reading the detection parameters used during system operation;
after the system is started and initialized, the positioning system executes different processes according to different operations of a user:
if the user selects the single-station operation, executing a single-station manual operation process; wherein single-station manual operation flow can accomplish the preliminary treatment operation before the real-time location of appointed station and the location of single-station punches, mainly includes: the method comprises a station selection process, a camera distortion calibration process, a punching robot N point calibration process, a magnification acquisition process, a template manufacturing process and a real-time positioning process.
If the user selects full-station operation, the positioning system executes the full-station automatic operation flow, which completes the positioning and punching of all stations to be punched in a set order. After the positioning and punching of all stations are finished, the punching robot is moved back to the zero point along the homing trajectory and waits for the next object to be detected. During the full-station automatic flow, if the operation needs to be interrupted, the current punching operation is stopped through the emergency pause and the punching robot is returned to zero.
In one embodiment of the invention, single-station manual operation must be carried out before the system performs full-station automatic operation. Through single-station manual operation the system obtains, for each station, the image distortion correction files, the fitted magnification-height function, the coordinate conversion file between the image and the punching robot, and the positional relationship between the image feature surface and the punching points on the located punching surface; with these files and data, automatic positioning and punching of every station can then be performed. Referring to fig. 5, which shows the single-station manual operation flow, the single-station manual operation interface provides buttons for distortion calibration, punching-robot N-point calibration, magnification acquisition, template making and real-time positioning, and the corresponding flows are executed according to the user's actions. The specific implementation is as follows:
the single-station operation flow firstly executes the station selection flow. Before entering the single-station manual operation interface, a user needs to select a station to be operated first, and then enters the manual operation interface after the selection is completed.
If the user clicks the distortion calibration button, the camera distortion calibration process is executed. By executing the camera distortion calibration process, distortion calibration files under different angles of the characteristic surface are obtained, and the files can be used for image distortion correction when the characteristic surface and the camera surface form different angles.
And if the user clicks a button of 'calibrating N points of the punching robot', executing a calibrating process of the N points of the punching robot. And obtaining an N-point calibration file by executing an N-point calibration process of the punching robot, and realizing the conversion of the image coordinate into a physical coordinate under a coordinate system of the punching robot.
If the user clicks the "magnification acquisition" button, the magnification acquisition procedure will be executed. By executing the magnification acquisition process, a first-order polynomial relation function of the height of the characteristic surface and the image magnification is obtained, and according to the function, the image coordinate conversion of the characteristic surface at different heights can be realized, so that the accurate physical coordinates of the characteristic points are obtained.
If the user clicks the 'template making' button, the template making process is executed. By executing the template making process, the position information of the image processing characteristic points acquired by the camera and the position information of the punching points on the punching surface can be obtained, and the position relation between the characteristic points and the punching points can be obtained according to the position information.
If the user clicks the real-time positioning button, the single-station real-time positioning process is executed. Through the single-station real-time positioning process, the current selected station is positioned in real time to obtain the position information of the punching point, and the punching robot is further controlled to perform punching operation. In the real-time positioning process, an emergency pause operation can be performed, and after a user presses an emergency pause button, an emergency pause processing process is executed.
When the system is in the single-station operation standby state, the user can quit the single-station operation after pressing the quit button.
In one embodiment of the invention, cameras must be installed at every station of the detected object for visual positioning. Because the detected object is not always placed identically, the detected feature surface presents different angles to the camera plane, so the camera distortion calibration flow of the system calibrates the distortion between the feature surface and the camera plane at each of these angles. During template making, the distortion calibration file corresponding to the measured angle between the feature surface and the camera plane is loaded to correct image distortion. The camera distortion calibration flow generates a distortion calibration file for each deflection angle between the calibration plate and the camera plane; each file consists of the perspective distortion of the image at that deflection angle together with the radial distortion parameters of the image, and these files are used for image distortion correction. The camera distortion calibration flow comprises the following steps:
and judging whether the user clicks a distortion calibration button or not, if so, carrying out distortion calibration, and otherwise, waiting for user operation.
And controlling the checkerboard calibration plate to the initial position. And placing the calibration plate at the object to be detected, namely the position close to the detection height of the detection characteristic point on the automobile, and controlling the laser sensor to acquire the height information of each position after the calibration plate is placed.
The offset angle between the calibration plate and the camera plane is calculated from the acquired laser-sensor heights. Three laser sensors are used in the distortion calibration flow: one is installed at each of the left rear and right rear sides of the calibration plate, namely the top left ranging sensor and the top right ranging sensor, and one is installed at the right front, namely the top front ranging sensor. The distance between the top left and top right ranging sensors is denoted H1, the distance between the top right and top front ranging sensors is denoted H2, and the heights measured by the top left, top right and top front ranging sensors are denoted h1, h2 and h3, respectively. From these three heights the offset angles relative to the camera plane are obtained: a left-right offset angle, denoted θ1, and a front-back offset angle, denoted θ2. The calculation formulas are as follows:
θ1 = arctan((h1 - h2) / H1)
θ2 = arctan((h2 - h3) / H2)
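The published formulas are embedded as images in the original document; the sketch below simply applies the geometry implied by the definitions above (angle from the height difference over the corresponding sensor baseline) and adds the stability test described in the next step. Function names and the tolerance value are illustrative.

    import math

    def plate_angles(h1, h2, h3, H1, H2):
        """Left-right and front-back offset angles of the calibration plate.
        h1, h2, h3: heights from the top-left, top-right and top-front ranging sensors.
        H1: baseline between top-left and top-right sensors; H2: between top-right and top-front."""
        theta1 = math.degrees(math.atan2(h1 - h2, H1))   # left-right offset angle
        theta2 = math.degrees(math.atan2(h2 - h3, H2))   # front-back offset angle
        return theta1, theta2

    def plate_is_stable(samples, tol_deg=0.05):
        """Plate considered stationary when consecutive (theta1, theta2) samples stay within tol_deg."""
        t1s, t2s = zip(*samples)
        return (max(t1s) - min(t1s) <= tol_deg) and (max(t2s) - min(t2s) <= tol_deg)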
and judging whether the acquired theta 1 and theta 2 are stable in a set threshold range, if so, determining that the chessboard pattern calibration plate is in a stable state, otherwise, determining that the chessboard pattern calibration plate is still in a moving state, judging whether the angle measurement frequency exceeds a set value, if so, ending the distortion calibration operation, and prompting that the calibration plate is not stable and the calibration plate is required to be placed again, otherwise, continuing to measure the theta 1 and the theta 2.
Distortion calibration is performed for the left-right and front-back deflection angles at 1-degree intervals over a range of +/-10 degrees. While the left-right deflection angle is being calibrated, the front-back deflection angle is kept at 0 degrees and the checkerboard calibration plate is driven step by step from -10 degrees to +10 degrees; the front-back deflection angle is then calibrated in the same way. Whether the currently measured angle lies in the required range is judged according to the current calibration step: if, for example, distortion calibration is being performed at a left-right deflection angle of -10 degrees, θ1 must lie within the system's threshold range around -10 degrees while θ2 remains at 0 degrees. If these conditions are met the angle is considered in range and the next processing step is carried out; otherwise the checkerboard calibration plate is driven to adjust its angle until the requirement is satisfied.
And if the angle meets the requirement, triggering the camera to take a picture and acquire the image.
And carrying out distortion calibration on the acquired image to generate a distortion calibration file.
And after the distortion calibration file is generated, the distortion calibration file is loaded on the image acquired by the camera for distortion correction.
The horizontal pixel pitch and the vertical pixel pitch of the checkerboard at the upper, lower, left, right and middle positions in the checkerboard calibration plate are respectively obtained.
It is judged whether the horizontal and vertical pixel pitches at the five positions are within a threshold range around the standard value. If so, the distortion calibration has succeeded, the image at the current angle is correctly corrected, and the generated distortion calibration file is stored together with the left-right and front-back deflection angles of the corresponding calibration plate. If not, the distortion correction has failed: the distances from the laser sensors to the calibration plate are re-acquired, the plate angle is recalculated, and the distortion calibration at the current angle is repeated.
If the distortion calibration at the current angle is successful, it is judged whether all angles have completed distortion calibration; if so, the distortion calibration ends and the system waits for user operation, otherwise the calibration plate angle is adjusted and distortion calibration continues at the next angle.
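A hedged sketch of one calibration entry, assuming OpenCV and assuming the radial distortion parameters have already been estimated once from multiple checkerboard views; the per-angle file is represented here as that radial data plus a homography capturing the perspective part. The pattern size, square size and nominal pixel pitch are placeholders, not values from the patent.

    import cv2
    import numpy as np

    def calibrate_angle(gray, pattern=(11, 8), camera_matrix=None, dist_coeffs=None):
        """One distortion-calibration entry for the current plate angle:
        radial correction (precomputed intrinsics assumed) plus a homography that
        maps the tilted checkerboard onto an ideal fronto-parallel grid."""
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if not found:
            return None
        corners = cv2.cornerSubPix(
            gray, corners, (5, 5), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01))
        if camera_matrix is not None:
            # remove radial (lens) distortion before estimating the perspective part
            corners = cv2.undistortPoints(corners, camera_matrix, dist_coeffs,
                                          P=camera_matrix)
        # ideal grid of the board at a nominal pixel pitch (assumed 10 px per square)
        ideal = np.array([[c % pattern[0], c // pattern[0]]
                          for c in range(pattern[0] * pattern[1])], np.float32) * 10.0
        H, _ = cv2.findHomography(corners.reshape(-1, 2), ideal)
        return {"homography": H, "camera_matrix": camera_matrix, "dist_coeffs": dist_coeffs}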
In an embodiment of the present invention, the vision system locates features to obtain the pixel coordinates of the feature points, while the punching robot operates in its own coordinate system; since the robot must position the punching points from the visually detected feature-point positions, the vision coordinates have to be converted into punching-robot coordinates. Once the two coordinate systems are unified, the robot can be driven by the relative position deviations measured by vision. The system generates an N-point calibration file through the punching-robot N-point calibration flow, and during real-time positioning this file converts vision coordinates into punching-robot coordinates. The punching-robot N-point calibration flow comprises the following steps:
and after entering the N-point calibration bounding surface, waiting for the user to carry out N-point calibration operation, carrying out N-point calibration if the user clicks an N-point calibration button, and otherwise, waiting for the user to operate.
The punching robot establishes a world coordinate system based on the vehicle-body feature surface. The robot uses a world coordinate system during operation, and this coordinate system can be re-created according to the actual working conditions. The system takes the roof camera plane of the vehicle body as the XY plane of the coordinate system and the vertical direction as the Z axis; re-creating the world coordinate system in this way keeps the directions of the vision coordinate system and the robot XY coordinate system consistent.
The punching robot then sets its tool TCP (tool centre point): the TCP is set to the position of the N-point-calibration marking head, and the robot's attitude calculation reference point is also set to the marking head. With the attitude reference at the marking head, attitude changes of the robot do not affect the marking-head coordinates, and the marking-head coordinate is the actual robot position used throughout the N-point calibration.
And acquiring the distance from the upper laser sensor to the marking plate, and calculating the front-back deflection angle and the left-right deflection angle of the marking plate.
It is judged whether the angle is within the angle range covered by distortion correction; if so, the punching robot is controlled to mark the N calibration dots within the camera field of view, otherwise the user is prompted to adjust the calibration plate angle and the system waits for user operation.
After the punching robot has finished marking the N dots, it is controlled to leave the marking area and move out of the camera field of view.
And controlling a camera to collect images, and calling distortion calibration files under corresponding angles respectively according to the measured left and right deflection angles and front and back deflection angles of the marking plate and the camera surface to perform distortion correction.
And finding the center pixel coordinates of the N marked points by a center finding algorithm.
And performing N-point calibration according to the circle center pixel coordinates of the N calibration points and the physical coordinates corresponding to the punching robot to obtain an N-point calibration file, wherein the N-point calibration file is a conversion matrix of a camera visual coordinate system and a punching robot coordinate system on an XY two-dimensional plane, and the pixel coordinate value of the camera can be converted into the physical coordinate value of the punching robot through the conversion matrix.
And after the N-point calibration file is obtained, carrying out N-point coordinate conversion on the current image so as to verify whether the generated calibration file is correct or not.
And performing circle search on the marking points in the image after the distortion correction is finished at present, and finding respective circle center pixel coordinates of the N points. And calling the generated N point calibration file to convert the circle center pixel coordinates of the N points into physical coordinates of the punching robot.
The obtained physical coordinates of the N points are compared with the physical coordinates recorded while the punching robot marked them, and the differences are collected as statistics. If the differences are within a set threshold range, the N-point calibration file and the corresponding laser-sensor height value are saved; otherwise the user is notified that the coordinate conversion after N-point calibration is abnormal and asked to repeat the N-point calibration, and the system waits for user operation.
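A minimal sketch of generating and verifying the N-point calibration file, with OpenCV's affine estimator standing in for whatever solver the authors actually use; the tolerance value is illustrative.

    import numpy as np
    import cv2

    def n_point_calibration(pixel_pts, robot_pts, tol_mm=0.5):
        """Estimate the 2x3 affine transform taking the circle-centre pixel coordinates
        of the N marked dots to the punching robot's XY coordinates, then verify it by
        converting the pixels and comparing with the recorded robot positions."""
        pixel_pts = np.asarray(pixel_pts, np.float32).reshape(-1, 1, 2)
        robot_pts = np.asarray(robot_pts, np.float32).reshape(-1, 1, 2)
        A, inliers = cv2.estimateAffine2D(pixel_pts, robot_pts)
        if A is None:
            raise RuntimeError("N-point calibration failed")
        # verification: residual between converted and recorded robot coordinates
        ones = np.ones((len(pixel_pts), 1), np.float32)
        converted = np.hstack([pixel_pts.reshape(-1, 2), ones]) @ A.T
        residual = np.linalg.norm(converted - robot_pts.reshape(-1, 2), axis=1)
        if residual.max() > tol_mm:
            raise RuntimeError("coordinate conversion abnormal, re-run N-point calibration")
        return A   # the "N-point calibration file": pixel -> robot XY matrix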
In one embodiment of the invention, because of the physical characteristics of machine-vision imaging, the measured object appears larger or smaller in the image as the measured surface changes height. When converting feature-point coordinates to physical coordinates, the distance from the feature surface to the laser sensor differs between the template making flow and the real-time positioning and punching flow, so identical images acquired in the two flows have different coordinate scales, and converting coordinates directly would introduce a positional error. The system therefore converts the feature-point coordinates of the images acquired in both flows to the image coordinates at the time of N-point calibration, and only then converts them to physical coordinates. Since the image magnification quantitatively describes the apparent size of the measured object at different heights, this conversion is carried out through the magnification at each height. The image magnification acquisition flow comprises the following steps:
after entering a magnification acquisition interface, waiting for a user to perform magnification acquisition operation, if the user clicks a magnification acquisition button, acquiring magnification, and otherwise, waiting for user operation;
by combining the tool precision at the vehicle body positioning and punching station and the condition of the vehicle body, the system obtains the amplification rate of +/-10 cm of the standard distance h between the measured surface of the vehicle body and the laser sensor, and fits to obtain a fitting function of the amplification rate and the distance. The specific measurement process is from (standard distance h-10 cm) to (standard distance h +10 cm), and the measurement is carried out at intervals of 5 mm. Therefore, the calibration plate is controlled to reach the position (with the standard distance h-10 cm), the distance from the upper laser sensor to the calibration plate is obtained, and the left-right deflection angle theta 1 'and the front-back deflection angle theta 2' of the calibration plate are obtained by adopting an angle calculation method in the distortion calibration process.
And judging whether the theta 1 'and the theta 2' are stable or not, if not, indicating that the calibration plate is not completely static, judging whether the measurement times exceed a set value or not, if so, ending the amplification rate acquisition, waiting for the operation of a user, and if not, continuously acquiring the distance from the laser sensor to the calibration plate and calculating the angle.
If theta 1 'and theta 2' are kept stable, whether theta 1 'and theta 2' are within the required range or not is judged. When the camera and the measured surface are kept completely vertical, the distortion influence of the lens is removed through distortion correction of the collected image, and no distortion exists in each position of the image, so that in order to remove the influence of the distortion introduced by the angle on the magnification, the theta 1 'and the theta 2' are required to be about 0 degrees and within +/-0.1 degrees, and if the conditions are met, the camera is triggered to shoot to obtain the image and store the image. If theta 1 'and theta 2' are not in the required range, the angle of the calibration plate is adjusted, and then theta 1 'and theta 2' are obtained again.
The system judges whether images at all height positions have been acquired. If so, all images at the different heights are processed to obtain the pixel spacing of the checkerboard squares at the same position on the calibration plate; the squares in the middle of the image are selected, since distortion is smallest at the image center and the pixel-spacing error is therefore smallest. The ratio of the pixel spacing to the actual physical size of the squares then gives the image magnification at that height. If images at all height positions have not yet been acquired, the calibration plate height is adjusted and the image at the next height is acquired.
And after the amplification factor of each height is obtained, performing first-order polynomial fitting according to the corresponding height to obtain fitting coefficients k and b, namely m = k x h + b, wherein m is the amplification factor, h is a distance value measured by the laser sensor, and the obtained fitting coefficients k and b are stored.
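As a rough illustration of the fit described above, the sketch below assumes the per-height magnifications have already been computed from the checkerboard pixel spacing; the variable names and numeric values are made up for the example:

```python
import numpy as np

# Heights measured by the laser sensor (mm) and the magnification obtained
# at each height (pixel spacing / physical size); illustrative values only.
heights = np.array([690.0, 695.0, 700.0, 705.0, 710.0])
magnifications = np.array([0.1052, 0.1048, 0.1044, 0.1040, 0.1036])

# First-order polynomial fit m = k * h + b.
k, b = np.polyfit(heights, magnifications, deg=1)

def magnification_at(h):
    """Magnification predicted for a laser-sensor distance h."""
    return k * h + b
```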
In one embodiment of the present invention, the template manufacturing process includes the following steps:
clicking a template making button, and entering a template making flow interface;
and judging whether a button for starting to make the template is clicked or not.
If not, judging whether to click the template making exit button, if so, exiting the template making interface, otherwise, not operating the system and waiting for the user to click the template making start button.
If yes, the system judges whether the punching robot has switched the tool TCP (tool center point), i.e., whether the robot TCP has been switched from the marking-head TCP to the punching-head TCP; after the switch is completed, the punching robot performs position movement and posture adjustment in its workpiece coordinate system with the punching-head TCP as the reference.
If the TCP has not been switched, the system first controls the punching robot to switch the tool TCP and, once the switch is completed, controls the laser sensors (3 on the top and 4 on the sides) to acquire the distance to the vehicle body. If it has been switched, each laser sensor (3 on the top and 4 on the sides) acquires the distance to the vehicle body directly.
The system judges whether the distance acquired by each laser sensor is stable, namely, each distance value is smaller than a set threshold value.
If not, the system counts the number of measurements and judges whether it exceeds a set value. If it does not, the system continues to judge whether the distance acquired by each laser sensor is stable, i.e., whether each distance value is smaller than the set threshold; if it does, the system prompts that the vehicle body is not placed stably and returns to judging whether the start-template-making button has been clicked.
If so, the yaw angle of the vehicle body in each direction (the rotation angle α of the vehicle body about the Y-axis direction, the rotation angle β of the vehicle body about the X-axis direction, and the rotation angle γ of the vehicle body about the Z-axis direction) is calculated from the distances from the vehicle body obtained by the respective laser sensors (three sensors at the top and two sensors on the right or left side).
And judging whether the obtained deflection angle of each direction is in a set range.
If not, the system prompts that the vehicle body position is abnormal, the user needs to reposition the vehicle body, and execution returns to judging whether the start-template-making button has been clicked.
And if so, executing a vehicle body feature acquisition process.
The system judges whether the vehicle body characteristic acquisition is abnormal.
If not, the system controls the punching robot to enable the punching head to move to the punching position; the system respectively records the coordinates of the intersection point of the characteristic points, the straight line angle and the coordinates of the punching point position; and storing the position data of the corresponding laser sensor, and returning to judge whether to click a button for starting to manufacture the template for execution.
If yes, the user is prompted, according to the abnormality type (vehicle body feature not found, or seam at the vehicle body feature position not found), to confirm the vehicle body position, and the template making flow ends.
Further, in an embodiment of the present invention, a vehicle body feature obtaining process in template manufacturing is as follows:
the system controls the camera to acquire the characteristic surface image of the vehicle body.
And according to the vehicle body deflection angle, the system calls a corresponding distortion calibration file to carry out image distortion correction.
According to the characteristics of the metal plates of the vehicle body, the system matches characteristic templates at seams between the metal plates.
The system determines whether the feature lookup, i.e., the template matching, succeeded. If not, the system prompts that the vehicle body feature was not found, asks the user to confirm the vehicle body position, and the flow ends; if so, position correction is performed on the found feature, and the two straight lines at the seam are then searched for respectively.
The system determines whether the two lines were found successfully. If not, it prompts that the seam at the vehicle body feature position was not found, asks the user to confirm the vehicle body position, and the flow ends; if so, the intersection point of the two found lines is solved, and coordinate conversion is performed on the intersection point to obtain its physical coordinates.
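A minimal sketch of the intersection step, assuming the two seam lines found by the vision routine are each returned as a point plus a direction vector in pixel coordinates; this representation and the names are assumptions, and the line-search routine itself is not shown:

```python
import numpy as np

def line_intersection(p1, d1, p2, d2):
    """Intersection of two 2D lines given as point + direction vectors.

    Solves p1 + t1*d1 = p2 + t2*d2 for the crossing pixel coordinate.
    Returns None if the lines are (nearly) parallel.
    """
    A = np.array([[d1[0], -d2[0]],
                  [d1[1], -d2[1]]], dtype=float)
    rhs = np.array([p2[0] - p1[0], p2[1] - p1[1]], dtype=float)
    if abs(np.linalg.det(A)) < 1e-9:
        return None
    t1, _ = np.linalg.solve(A, rhs)
    return np.array(p1, dtype=float) + t1 * np.array(d1, dtype=float)

# The pixel intersection can then be mapped to robot coordinates with the
# 2x3 matrix from the N-point calibration, e.g. xy_robot = T @ [u, v, 1].
```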
In an embodiment of the present invention, the real-time positioning and punching process includes the following steps:
clicking a real-time positioning and punching button to enter a real-time positioning and punching flow interface;
and judging whether to press a real-time positioning starting button.
If not, the system judges whether the exit-single-station-operation button has been clicked; if so, the real-time positioning and punching interface is exited, otherwise the system idles and waits for the user to click the start-real-time-positioning button.
And if so, carrying out a vehicle body angle acquisition process.
The system judges whether acquisition of the vehicle body angle succeeded; if not, it prompts a positioning abnormality and returns to judging whether the start-real-time-positioning button has been pressed; if so, the vehicle body feature acquisition flow is executed.
The system judges whether vehicle body feature acquisition succeeded. If not, it prompts that the vehicle body feature was not found, or that the seam at the vehicle body feature position was not found, asks the user to confirm the vehicle body position, and returns to judging whether the start-real-time-positioning button has been clicked. If so, the position of the new punching point is calculated from the feature point and punching point recorded during template making, the vehicle body deflection angles in each direction, and the feature point coordinates located in real time.
The system judges whether the vehicle body offset is within the set range. If not, it prompts that the vehicle body position is abnormal, the user needs to reposition the vehicle body, and execution returns to judging whether the start-real-time-positioning button has been clicked; if yes, the punching robot is controlled, after its posture is adjusted, to move above the new punching point, and the laser sensor acquires the distance to the new punching point in real time.
And judging whether the distance reaches a set range. If not, the punching robot continues to approach the new punching point for a certain distance, and then whether the distance reaches the set range is judged; if yes, the system controls the punching robot to punch. And after the real-time positioning is finished, returning to continuously judging whether the real-time positioning starting button is clicked or not.
Further, the vehicle body angle obtaining process in the real-time positioning punching process is as follows:
and executing a sensor approaching template point process.
The system judges whether the approach is completed; if not, the sensor approaching template point process is continuously executed. If yes, the system calculates and obtains the non-camera surface deflection angle according to the distance measured by the laser sensor.
And executing a vehicle body characteristic acquisition process to obtain a camera surface deflection angle. Thus, the rotation angles of the vehicle body in three directions are obtained.
The above vehicle body angle acquisition flow differs between the two cases of real-time positioning and punching on the vehicle body top and real-time positioning and punching on the vehicle body side:
taking positioning and punching on the top of the vehicle body as an example, the workpiece surface coordinate system established by the punching robot is as follows:
[Diagram: workpiece surface coordinate system for positioning and punching at the vehicle body top]
wherein, the positive direction of the Y axis is the direction of the vehicle head.
The non-camera surface deflection angle obtained by real-time positioning and punching on the top of the vehicle body comprises a rotation angle alpha (a left-right inclination angle of the vehicle body) of the vehicle body around the Y-axis direction and a rotation angle beta (a front-back inclination angle of the vehicle body) of the vehicle body around the X-axis direction, and the top camera surface deflection angle refers to a rotation angle gamma (a rotation angle in a vehicle top plane parallel to the top camera surface) of the vehicle body around the Z-axis direction.
The camera face deflection angle acquired by real-time positioning and punching on the side face of the car body refers to a rotation angle beta of the car body around the X-axis direction acquired by a side camera, and the non-camera face deflection angle comprises a rotation angle alpha around the Y-axis direction and a rotation angle gamma of the car body around the Z-axis direction acquired by a top camera.
In one embodiment of the invention, because angle changes exist at different positions of the curved surface space, the point laser sensor may have position deviation during distance acquisition, so that the acquired distance is inaccurate. The sensor approaching template point flow in the vehicle body angle acquisition flow is as follows:
and each laser sensor acquires the distance from the vehicle body and performs rough positioning.
The system judges whether the distance acquired by each laser sensor is stable, namely, each distance value is smaller than a set threshold value. If not, the system counts the measurement times and judges whether the times exceed a set value, if not, the system returns to continue coarse positioning, and if so, the system prompts that the vehicle body is not stably placed, and the process is ended.
If yes, calculating the deflection angle of each direction of the vehicle body according to the distance from the vehicle body obtained by each laser sensor.
The system judges whether the deflection angles are all within the set range; if not, it prompts that the vehicle body position is abnormal, the user needs to reposition the vehicle body, and the flow ends; if so, the vehicle body feature acquisition flow is executed.
And solving the position difference between the target point and the template point, calculating the deviation value of the sensor moving to the template point and controlling the sensor to move.
After moving, the laser sensor measures the distance from the vehicle body, and calculates the deflection angle of the vehicle body in each direction according to the distance from the vehicle body.
The sensor approaching template point flow is different for the two conditions of real-time positioning and punching on the top of the vehicle body and real-time positioning and punching on the side surface of the vehicle body:
each laser sensor in the real-time positioning and punching of the top of the vehicle body is a top sensor (6, 12, 15), and the calculated deflection angle of each direction of the vehicle body comprises a rotation angle alpha of the vehicle body around the Y-axis direction and a rotation angle beta of the vehicle body around the X-axis direction.
Each laser sensor in the real-time positioning and punching of the side surface of the vehicle body comprises a top sensor (6, 12) and a sensor which corresponds to the positioning and punching position and is used for measuring the working distance of a camera.
The vehicle body feature acquisition flow in real-time positioning and punching is consistent with the vehicle body feature acquisition flow in template making, except for the final step: instead of only solving the intersection point of the two found straight lines and converting it directly to physical coordinates, the intersection point of the two lines is first converted into template-image pixel coordinates according to the height of the real-time feature point and the height of the template feature point, and coordinate conversion is then performed on that point to obtain the physical coordinates.
The formula for converting the intersection point (namely the characteristic point) of two straight lines into the pixel coordinate of the template image in the process of acquiring the vehicle body characteristics during real-time positioning and punching is as follows:
X' = (X − Xcenter)·(m0/m1) + Xcenter
Y' = (Y − Ycenter)·(m0/m1) + Ycenter
wherein m0 denotes the magnification of the feature point during template making, obtained by substituting the distance value measured by the corresponding station sensor during template making into the first-order fitting function from the magnification acquisition flow; m1 denotes the magnification of the feature point during real-time positioning, obtained by substituting the distance value measured after the corresponding station sensor position has been corrected during real-time positioning into the same fitting function; point (X, Y) denotes the pixel coordinates of the feature point in real-time positioning, point (Xcenter, Ycenter) denotes the pixel coordinates of the image center, and point (X', Y') denotes the pixel coordinates of the feature point in real-time positioning after magnification compensation.
Coordinate conversion is then performed on point (X', Y') to obtain the physical coordinate point (X1, Y1).
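A small sketch of this magnification compensation, assuming the magnification is defined as pixel spacing divided by physical size, so that the live pixel offset from the image center is rescaled by m0/m1 before mapping into the template image; the function name and argument order are illustrative:

```python
def compensate_magnification(x, y, x_center, y_center, m_template, m_live):
    """Map a feature point's live-image pixel coords to template-image pixel coords.

    m_template (m0) and m_live (m1) come from the fitted magnification function
    evaluated at the template-time and real-time laser distances respectively.
    """
    scale = m_template / m_live
    x_comp = (x - x_center) * scale + x_center
    y_comp = (y - y_center) * scale + y_center
    return x_comp, y_comp
```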
The position of the new punching point, denoted (Xnew, Ynew) below, is calculated in the real-time positioning and punching flow as follows:
case 1: the deflection angles of the three directions of the vehicle body measured in the real-time positioning process are the same as those measured in the template manufacturing process;
Xnew = X1 + (Xp0 − X0)
Ynew = Y1 + (Yp0 − Y0)
wherein point (X0, Y0) denotes the physical coordinates obtained by coordinate conversion of the feature point pixel coordinates during template making, and point (Xp0, Yp0) denotes the physical coordinates of the punching point during template making.
Case 2: the deflection angles (alpha and beta) of the vehicle body measured in the real-time positioning process are the same as those in the template manufacturing process;
Xnew = X1 + (Xp0 − X0)·cos(γΔ) − (Yp0 − Y0)·sin(γΔ)
Ynew = Y1 + (Xp0 − X0)·sin(γΔ) + (Yp0 − Y0)·cos(γΔ)
wherein the angle γΔ equals the rotation angle γ1 of the vehicle body at real-time positioning minus the rotation angle γ0 of the vehicle body at template making; the term (Xp0 − X0)·cos(γΔ) − (Yp0 − Y0)·sin(γΔ) is the X-axis offset of the punching point produced by the vehicle body having rotated by γΔ at real-time positioning relative to template making, and (Xp0 − X0)·sin(γΔ) + (Yp0 − Y0)·cos(γΔ) is the corresponding Y-axis offset.
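A minimal sketch covering cases 1 and 2, assuming a counter-clockwise sign convention for γΔ; the function name and the degree-based interface are illustrative assumptions:

```python
import math

def new_punch_point(feat_live, feat_tpl, punch_tpl, gamma_delta_deg=0.0):
    """New punching point for cases 1 and 2.

    feat_live : (X1, Y1) physical coords of the feature point at real-time positioning
    feat_tpl  : (X0, Y0) physical coords of the feature point at template making
    punch_tpl : (Xp0, Yp0) physical coords of the punching point at template making
    gamma_delta_deg : extra in-plane rotation of the body relative to template making
                      (0 reproduces case 1; a non-zero value gives case 2)
    """
    dx = punch_tpl[0] - feat_tpl[0]
    dy = punch_tpl[1] - feat_tpl[1]
    g = math.radians(gamma_delta_deg)
    # Rotate the template offset by the extra in-plane rotation of the body.
    dx_rot = dx * math.cos(g) - dy * math.sin(g)
    dy_rot = dx * math.sin(g) + dy * math.cos(g)
    return feat_live[0] + dx_rot, feat_live[1] + dy_rot
```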
Case 3: the vehicle body deflection angle beta measured in real-time positioning is the same as that in template manufacturing;
(1) If the line segment formed by the punching point and the feature point during template making has no included angle with the template plane:
Xnew = X1 + [(Xp0 − X0)·cos(γΔ) − (Yp0 − Y0)·sin(γΔ)]·cos(αΔ)
Ynew = Y1 + (Xp0 − X0)·sin(γΔ) + (Yp0 − Y0)·cos(γΔ)
wherein the angle αΔ equals the rotation angle α1 of the vehicle body at real-time positioning minus the rotation angle α0 at template making; the X-axis offset of case 2 is multiplied by cos(αΔ) because the left-right inclination of the vehicle body changes that offset, i.e., the offset becomes its projection onto the template plane.
(2) If, during template making, the line segment formed by the punching point and the feature point has an included angle η with the template plane, with the punching point lower than the feature point and inclined downward relative to it (or higher than the feature point and inclined upward relative to it):
[Formula images: Xnew and Ynew for this subcase, including the X-axis compensation term described below]
wherein the angle ηX equals the projection, in the plane formed by the X and Z axes, of the included angle between the template plane and the line segment formed by the punching point and the feature point, and the compensation term is the X-axis offset that must be compensated when, relative to template making, the left-right inclination differs at real-time positioning and the included angle η is present.
(3) If, during template making, the line segment formed by the punching point and the feature point has an included angle η with the template plane, with the punching point higher than the feature point and inclined downward relative to it (or lower than the feature point and inclined upward relative to it):
[Formula images: Xnew and Ynew for this subcase]
case 4: the vehicle body deflection angle alpha measured in real-time positioning is the same as that in template manufacturing;
(1) If the line segment formed by the punching point and the feature point during template making has no included angle with the template plane:
Xnew = X1 + (Xp0 − X0)·cos(γΔ) − (Yp0 − Y0)·sin(γΔ)
Ynew = Y1 + [(Xp0 − X0)·sin(γΔ) + (Yp0 − Y0)·cos(γΔ)]·cos(βΔ)
wherein the angle βΔ equals the rotation angle β1 of the vehicle body at real-time positioning minus the rotation angle β0 at template making; the Y-axis offset of case 2 is multiplied by cos(βΔ) because the front-back inclination of the vehicle body changes that offset, i.e., the offset becomes its projection onto the template plane.
(2) If, during template making, the line segment formed by the punching point and the feature point has an included angle η with the template plane, with the punching point lower than the feature point and inclined downward relative to it (or higher than the feature point and inclined upward relative to it):
[Formula images: Xnew and Ynew for this subcase, including the Y-axis compensation term described below]
wherein the angle ηY equals the projection, in the plane formed by the Y and Z axes, of the included angle between the template plane and the line segment formed by the punching point and the feature point, and the compensation term is the Y-axis offset that must be compensated when, relative to template making, the front-back inclination differs at real-time positioning and the included angle η is present.
(3) If, during template making, the line segment formed by the punching point and the feature point has an included angle η with the template plane, with the punching point higher than the feature point and inclined downward relative to it (or lower than the feature point and inclined upward relative to it):
[Formula images: Xnew and Ynew for this subcase]
case 5: the measured inclination angles alpha and beta of the vehicle body during real-time positioning are different from those during template manufacturing;
(1) If the line segment formed by the punching point and the feature point during template making has no included angle with the template plane:
Xnew = X1 + [(Xp0 − X0)·cos(γΔ) − (Yp0 − Y0)·sin(γΔ)]·cos(αΔ)
Ynew = Y1 + [(Xp0 − X0)·sin(γΔ) + (Yp0 − Y0)·cos(γΔ)]·cos(βΔ)
(2) If, during template making, the line segment formed by the punching point and the feature point has an included angle η with the template plane, with the punching point lower than the feature point and inclined downward relative to it (or higher than the feature point and inclined upward relative to it):
[Formula images: Xnew and Ynew for this subcase]
(3) If, during template making, the line segment formed by the punching point and the feature point has an included angle η with the template plane, with the punching point higher than the feature point and inclined downward relative to it (or lower than the feature point and inclined upward relative to it):
[Formula images: Xnew and Ynew for this subcase]
in one embodiment of the invention, the full-station automatic operation flow finishes the positioning and punching operation of all stations to be punched according to a set sequence, and after the positioning and punching operation of all stations is finished, the punching robot is operated to a zeroing point according to a zeroing track and waits for the arrival of the next detected object. As shown in FIG. 6, the fully-automatic operation process includes the following steps:
and after entering the all-station operation interface, waiting for a user to click an all-station operation button, executing a vehicle in-place judgment process if the user clicks the all-station operation, and otherwise, waiting for the user to operate.
The vehicle body conveying line consists of several large flat plates on which front and rear tire limiting devices and tire side limiting devices are mounted. After a vehicle body is placed at the limiting positions, the plates move forward in sequence; when the vehicle body reaches the visual positioning station, the limiting device under the plate triggers a limit signal. The vehicle-in-place judgment flow therefore first judges whether the limit signal has been received. If it has, the laser sensor at the visual positioning station is controlled to acquire the height to the roof in real time; if that height remains stable for a set time, the vehicle body is considered in place and stable and the single-station real-time positioning flow is executed, otherwise the laser sensor height continues to be acquired and the stability check is repeated.
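A rough sketch of this in-place check, where limit_signal and read_roof_height stand in for the PLC limit-signal input and the laser-sensor read-out (both hypothetical callables), and the hold time and tolerance are example values:

```python
import time
from collections import deque

def wait_for_body_in_place(limit_signal, read_roof_height,
                           hold_s=3.0, tol_mm=1.0, poll_s=0.2):
    """Return once the roof-height reading has stayed within tol_mm for
    hold_s seconds after the flat-plate limit signal is received."""
    while not limit_signal():
        time.sleep(poll_s)
    n = max(2, int(hold_s / poll_s))
    window = deque(maxlen=n)           # sliding window covering hold_s seconds
    while True:
        window.append(read_roof_height())
        if len(window) == n and max(window) - min(window) <= tol_mm:
            return
        time.sleep(poll_s)
```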
The system executes the single-station real-time positioning flow in sequence on the vehicle body top and on the front and rear stations of its left and right sides, 6 stations in total. The stations on the left and right sides of the roof are positioned first, because the front-back and left-right deflection angles of the top relative to the camera surface are smallest there, so the rotation angle of the top within the camera plane can be measured accurately by vision, and acquiring the three-dimensional angular deviation of the vehicle body relative to template making in this way is the most accurate. After the single-station real-time positioning of the two roof stations is completed, the three angles of the vehicle body in the three-dimensional directions relative to the template-making vehicle body are known, and the left-right deflection angle and the rotation angle within the camera plane can be used as the rotation angles required by the side single-station real-time positioning flow. The system is provided with 7 laser sensors in total: 3 on the roof for acquiring the front-back and left-right deflection angles of the vehicle body camera surface, with the left and right sensors below the roof cameras also used to acquire the actual vehicle body height, and 1 at each of the front and rear stations on the left and right sides of the vehicle body for acquiring the distance to the vehicle body.
And judging whether all stations complete punching, if so, controlling the punching robot to return to a standby point to wait for the arrival of the next vehicle body, and otherwise, continuing to position the vehicle body of the next station in real time.
The surface of the detected object, i.e., the automobile, is not a complete plane: the sheet metal of the vehicle surface is curved. If the sensor positions were kept unchanged, the distances acquired by the laser sensors during real-time measurement would deviate from those acquired at template making, and the vehicle angle derived from those distances would not be correct. Therefore the cameras are fixed while the laser sensors are movable.
After the laser sensor approach flow is executed, the front-back and left-right deflection angles of the roof can be calculated from the laser sensor readings. The solution is illustrated as follows. Referring to FIG. 7, circle 1 is the initial position; the distances measured at the left and right rear are HG and AC respectively, and the distance between the two sensors is AH. Circle 2 is the position after the vehicle stops again; if the sensors did not follow the movement, the measured distances would be HD and AB. In actual use the sensors approach new positions according to the vision positioning result, so only the distances after the sensors have followed the movement need to be calculated. I and J are the new sensor positions after following the movement, and the measured distances are IM and JK; as shown, the vehicle angle does not change much. During template making the sensors do not move, and the deflection angle of the vehicle body in a given direction can be calculated from the sensor distance relation; in real-time detection, after the sensors have moved by the approach method, the deflection angle can likewise be calculated from the sensor distance relation. The calculation formulas are:
θ = tan⁻¹((HG − AC)/AH), where θ is the included angle between AH and GC, i.e., the vehicle body angle at template making;
θ' = tan⁻¹((IM − JK)/IJ), where θ' is the included angle between the sensor line and MK, i.e., the vehicle body angle at real-time positioning, and IJ is the spacing of the sensors after the follow-up movement.
With these formulas the inclination angle of the vehicle body in a given direction can be calculated both at template making and at real-time positioning. According to the definitions above, the vehicle body α angle is the left-right inclination and the β angle is the front-back inclination; when the left-right inclination is calculated, θ in the formula corresponds to the vehicle body α angle at template making and θ' to the vehicle body α angle at real-time positioning, and the deflection at real-time positioning is obtained as (θ' − θ), giving specifically αΔ, βΔ and γΔ.
Example: assume the station is the camera station on the right side of the roof. From the image display, the camera calculates the change of the roof feature position in the Y-axis direction (the left-right direction of the vehicle body) to be K, and the control center controls the left-right adjusting device 4 of the top-right sensor to move by the displacement K.
Again assume the station is the camera station on the right side of the roof. From the image display, the camera calculates the change of the roof feature position in the X-axis direction (the front-back direction of the vehicle body) to be L, and the control center controls the left-right adjusting device 4 of the top-right sensor to move by the displacement L. The other stations work in the same way.
The 6 fixed cameras respectively collect characteristic point information, and the 7 ranging sensors measure distances.
The spacing between roof sensors 12 and 6 is known to be H1.
The spacing between roof sensors 6 and 15 is H2.
The right-side sensors 30 and 24 are at the same height, spaced H3 apart.
The left-side sensors 36 and 42 are at the same height, spaced H4 apart.
The sensor template measurement values are set as H12, H6, H15, H30, H24, H36, H42.
The values measured by the sensors during template making are set as h12, h6, h15, h30, h24, h36, h42.
The values measured after the sensors correct their positions during real-time measurement are h12-1, h6-1, h15-1, h30-1, h24-1, h36-1, h42-1.
When the vehicle body is in place, template making starts: the distance values measured by the 7 sensors are fed back to the control center and the magnification is calculated.
And photographing by the camera to record the characteristic point information.
The vehicle body is put in place again, and real-time measurement is started;
the overhead sensor measures a distance value h 12-1 ,h 6-1 And calculating the amplification ratio and sending the amplification ratio to a control center.
The top two cameras calculate the offset angle in the Z-axis direction and the in-plane offset of the XY coordinate system and send data to the control center.
The control center adjusts the position of the sensor according to the camera parameters.
The offset angles of the vehicle body around the X-axis direction and the Y-axis direction are calculated by means of the top three sensors, and the rotation angles of the vehicle body in the XYZ three coordinate axis directions are obtained.
The angle calculation formula during template manufacturing is as follows:
Rotation angle around the X-axis direction: α0 = tan⁻¹((h15 − h6)/H2)
Rotation angle around the Y-axis direction: β0 = tan⁻¹((h12 − h6)/H1)
The angle calculation formula during real-time positioning is as follows:
Rotation angle around the X-axis direction: α1 = tan⁻¹((h15-1 − h6-1)/H2)
Rotation angle around the Y-axis direction: β1 = tan⁻¹((h12-1 − h6-1)/H1)
βΔ is obtained from (β1 − β0), and αΔ is obtained from (α1 − α0).
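As a compact restatement of the formulas above, the sketch below assumes the sensor spacings H1 and H2 and the measured heights are in consistent units and that angles are wanted in degrees; the function and argument names are illustrative:

```python
import math

def roof_angles(h6, h12, h15, spacing_h1, spacing_h2):
    """Rotation of the body about the X and Y axes from the three roof sensors.

    h6, h12, h15 : distances measured by roof sensors 6, 12 and 15
    spacing_h1   : spacing H1 between sensors 12 and 6
    spacing_h2   : spacing H2 between sensors 6 and 15
    Returns (alpha, beta) in degrees, following the formulas above.
    """
    alpha = math.degrees(math.atan2(h15 - h6, spacing_h2))
    beta = math.degrees(math.atan2(h12 - h6, spacing_h1))
    return alpha, beta

# Deflection relative to template making:
#   alpha_delta = alpha_realtime - alpha_template
#   beta_delta  = beta_realtime  - beta_template
```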
The corrected value of the sensor position is transmitted to the control center to be used as a magnification correction value for the camera.
It should be understood that the exemplary embodiments described herein are illustrative and not restrictive. While one or more embodiments of the present invention have been described with reference to the accompanying drawings, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the following claims.

Claims (10)

1. A detecting and positioning system for automobile assembly is characterized by comprising a top detecting part, a limiting device, a left detecting part and a right detecting part, wherein a vehicle body is conveyed to a production line flat plate firstly, the vehicle body is conveyed to a flat plate production line with the limiting device in sequence through a crane, when the vehicle body runs to a visual detection position through the production line, the flat plate is controlled by the limiting device arranged below to stop, a signal is acquired by a visual system to further perform user operation selection, the visual system performs single-station operation or full-station operation according to the selection of a user, a preparation flow of positioning and punching of each station is required to be performed before the single-station and full-station real-time positioning operation, namely a distortion calibration flow, an N-point calibration flow, an amplification rate acquisition flow and a template manufacturing flow, in the single-station real-time positioning flow and the full-station real-time positioning flow, the method comprises the steps that the position of a laser sensor is adjusted through the position of a feature point in template manufacturing to enable the position of the laser sensor to approach the position of the template during manufacturing, so that an accurate laser sensor value is obtained, detection parts corresponding to stations are used for obtaining and processing images and laser sensor measurement values in a preparation process of positioning and punching of each station, the laser sensor in a top detection part is controlled to approach the position in a single-station real-time positioning process and a full-station real-time positioning process to obtain an accurate laser sensor value, the deflection angle of a vehicle body on a non-camera surface is calculated, the vehicle body deflection angle of a camera surface is calculated through the characteristics of the images obtained by a camera and the characteristics of the images during template manufacturing, and distortion correction and image coordinate conversion of each station are processed by using the corresponding deflection angle; the physical positions of the feature points are further calculated through images acquired by the cameras of the station detection components, and according to the feature information of the images and the acquired three-dimensional deflection angle information of the vehicle body during template manufacturing and real-time positioning, the position information and the punching posture of the punching point are accurately calculated, so that accurate positioning and punching are realized.
2. The system of claim 1, wherein the top detection component comprises a top right detection camera, a first top right light source, a second top right light source, a top right sensor left and right adjustment device, a top right sensor front and back adjustment device, a top right ranging sensor, a top left detection camera, a first top left light source, a second top left light source, a top left sensor left and right adjustment device, a top left sensor front and back adjustment device, a top left ranging sensor, a top front sensor left and right adjustment device, a top front sensor front and back adjustment device, and a top front ranging sensor.
3. The system of claim 1, wherein the limiting means comprises a front right wheel front-rear limiting means, a front right wheel left-right limiting means, a front left wheel front-rear limiting means, a front left wheel left-right limiting means, a rear right wheel front-rear limiting means, a rear left wheel front-rear limiting means, and a rear left wheel left-right limiting means.
4. The automotive equipped test positioning system of claim 1, wherein the right side test component comprises a right front side distance measuring sensor, a right front side sensor front and rear adjusting device, a right front side sensor up and down adjusting device, a right front side test camera, a first right front side light source, a second right front side light source, a right rear side distance measuring sensor, a right rear side sensor front and rear adjusting device, a right rear side sensor up and down adjusting device, a right rear side test camera, a first right rear side light source, a second right rear side light source.
5. The system of claim 1, wherein the left detection component comprises a left front distance measurement sensor, a left front sensor front-back adjustment device, a left front sensor up-down adjustment device, a left front detection camera, a first left front light source, a second left front light source, a left rear distance measurement sensor, a left rear sensor up-down adjustment device, a left rear sensor front-back adjustment device, a left rear detection camera, a first left rear light source and a second left rear light source.
6. The system for detecting and positioning automobile assembly according to claim 1, wherein the camera distortion calibration process is used for obtaining distortion calibration files at different angles of the feature plane, and the distortion calibration files are used for correcting image distortion when the feature plane and the camera plane form different angles.
7. The system for detecting and positioning automobile assembly according to claim 1, wherein the N-point calibration process of the punching robot is used for obtaining an N-point calibration file, so as to realize the conversion of image coordinates into physical coordinates in a coordinate system of the punching robot.
8. The system for detecting and positioning automobile assembly according to claim 1, wherein the magnification acquisition process is used for obtaining a first-order polynomial relation function of the height of the feature plane and the magnification, and image coordinate conversion of the feature plane at different heights is realized according to the relation function, so that accurate physical coordinates of the feature points are obtained.
9. The system for detecting and positioning automobile assembly according to claim 1, wherein the template making process is used for obtaining the position information of the image processing feature points acquired by the camera and the position information of the punching points on the punching surface, so as to obtain the position relationship between the feature points and the punching points.
10. The system as claimed in claim 1, wherein the full-station automatic operation flow sequentially performs the single-station real-time positioning process for a total of 6 stations on the top, front and rear sides of the vehicle body, wherein the stations on the left and right sides of the roof perform the single-station real-time positioning process first, and after the single-station real-time positioning process on the two sides of the roof is completed, three angles of the vehicle body in the three-dimensional direction relative to the template manufacturing vehicle body can be obtained, and wherein the left and right deflection angles and the rotation angle in the camera plane can be used for the rotation angle required in the side single-station real-time positioning process.