CN114536156B - Shoe upper polishing track generation method - Google Patents

Shoe upper polishing track generation method

Info

Publication number
CN114536156B
CN114536156B (application CN202011336516.5A)
Authority
CN
China
Prior art keywords
polishing
camera
calibration plate
posture
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011336516.5A
Other languages
Chinese (zh)
Other versions
CN114536156A (en)
Inventor
何飞
曹超
郑之增
刘炎南
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Tianji Industrial Intelligent System Co Ltd
Original Assignee
Guangdong Tianji Industrial Intelligent System Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Tianji Industrial Intelligent System Co Ltd filed Critical Guangdong Tianji Industrial Intelligent System Co Ltd
Priority to CN202011336516.5A
Publication of CN114536156A
Application granted
Publication of CN114536156B
Legal status: Active
Anticipated expiration

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B24 - GRINDING; POLISHING
    • B24B - MACHINES, DEVICES, OR PROCESSES FOR GRINDING OR POLISHING; DRESSING OR CONDITIONING OF ABRADING SURFACES; FEEDING OF GRINDING, POLISHING, OR LAPPING AGENTS
    • B24B19/00 - Single-purpose machines or devices for particular grinding operations not covered by any other main group
    • A - HUMAN NECESSITIES
    • A43 - FOOTWEAR
    • A43D - MACHINES, TOOLS, EQUIPMENT OR METHODS FOR MANUFACTURING OR REPAIRING FOOTWEAR
    • A43D8/00 - Machines for cutting, ornamenting, marking or otherwise working up shoe part blanks
    • B24B27/00 - Other grinding machines or devices
    • B24B27/0084 - Other grinding machines or devices with the grinding wheel support angularly adjustable
    • B24B27/02 - Bench grinders
    • B24B49/00 - Measuring or gauging equipment for controlling the feed movement of the grinding tool or work; arrangements of indicating or measuring equipment, e.g. for indicating the start of the grinding operation
    • B24B49/12 - Measuring or gauging equipment involving optical means
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 - Manipulators not otherwise provided for
    • B25J11/005 - Manipulators for mechanical processing tasks
    • B25J11/0065 - Polishing or grinding
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P70/00 - Climate change mitigation technologies in the production process for final industrial or consumer products
    • Y02P70/10 - Greenhouse gas [GHG] capture, material saving, heat recovery or other energy efficient measures, e.g. motor control, characterised by manufacturing processes, e.g. for rolling metal or metal working

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Constituent Portions Of Grinding Lathes, Driving, Sensing And Control (AREA)
  • Finish Polishing, Edge Sharpening, And Grinding By Specific Grinding Devices (AREA)

Abstract

The invention relates to a shoe upper polishing track generation method comprising the following steps: providing an upper polishing device that includes two oppositely arranged machine vision modules; calibrating each machine vision module to determine a workpiece coordinate system; unifying the workpiece coordinate systems determined by the two machine vision modules to obtain a position relation matrix of the two machine vision modules; marking the position to be polished on the upper; causing the two machine vision modules to respectively scan the marks on the upper and obtain two tracks to be polished; and splicing and aligning the two tracks to be polished according to the position relation matrix to determine the polishing track of the upper. Once the machine vision modules have been calibrated, the upper only needs to be scanned by the two machine vision modules and the two scanned tracks spliced and aligned to obtain the polishing track of the upper; the operation is convenient and quick, and even when the shoe model is changed, the polishing track of the new model can be obtained very quickly.

Description

Shoe upper polishing track generation method
Technical Field
The invention relates to the technical field of shoemaking, in particular to a shoe upper polishing track generation method.
Background
In the shoe-making industry, uppers are polished so that the upper and the sole bond firmly. Two upper-polishing approaches are currently used: 1. manual polishing, which cannot guarantee consistent polishing quality, is labour-intensive, and has low working efficiency; 2. polishing with an industrial robot, which requires an operator to manually guide the polishing head and edit the polishing track offline for every shoe model, a time-consuming process that lowers production efficiency and places high demands on operator skill.
Disclosure of Invention
Based on this, it is necessary to provide a shoe upper polishing track generation method that can obtain the polishing track of an upper quickly.
A shoe upper polishing track generation method comprises the following steps:
providing an upper polishing device, wherein the upper polishing device comprises two oppositely arranged machine vision modules;
calibrating the machine vision module to determine a workpiece coordinate system;
unifying the workpiece coordinate systems determined by the two machine vision modules respectively to obtain a position relation matrix of the two machine vision modules;
marking the position to be polished on the upper;
enabling the two machine vision modules to respectively scan marks on the upper and obtain two tracks to be polished;
and splicing and aligning the two tracks to be polished according to the position relation matrix to determine the polishing track of the upper.
With the above shoe upper polishing track generation method, once the machine vision modules have been calibrated, the upper only needs to be scanned by the two machine vision modules and the two scanned tracks spliced and aligned to obtain the polishing track of the upper; the operation is convenient, quick, and efficient. Moreover, when the shoe model is changed, only the position to be polished on the new model needs to be marked, and the polishing track of the new model can be obtained quickly through scanning by the two machine vision modules.
In one embodiment, the machine vision module includes a camera and a line laser generator; the upper grinding apparatus also includes a base for mounting the upper.
In one embodiment, the step of calibrating the machine vision module to determine the workpiece coordinate system includes the following steps:
analyzing the calibration plate picture shot by the camera to obtain a camera correction calibration matrix;
enabling the line laser generator to project laser lines onto the calibration plate, and analyzing the calibration plate picture with the laser lines shot by the camera to obtain a laser surface calibration matrix;
mounting the calibration plate on the base so that the calibration plate and the camera move relative to each other, and analyzing a plurality of calibration plate pictures shot by the camera to obtain a camera posture calibration matrix;
and combining the camera correction calibration matrix, the laser surface calibration matrix and the camera posture calibration matrix to obtain the workpiece coordinate system.
In one embodiment, the step of unifying the workpiece coordinate systems determined by the two machine vision modules to obtain the position relation matrix of the two machine vision modules includes the following steps:
moving the base to a standby position, and mounting the calibration plate on the base;
photographing the calibration plate by the camera to obtain a plane picture;
converting the plane picture into a space point cloud;
and splicing and aligning the space point clouds obtained by the two cameras to obtain a position relation matrix capable of aligning the space point clouds.
In one embodiment, the step of enabling the two machine vision modules to scan the marks on the upper and obtain two tracks to be polished includes:
mounting the upper to the base;
causing the two cameras to respectively scan the marks on the upper;
and converting the pictures shot by the two cameras into space point clouds according to the respective workpiece coordinate systems, and respectively obtaining two tracks to be polished formed by the space point clouds.
In one embodiment, after the step of splicing and aligning the two tracks to be polished according to the position relation matrix to determine the polishing track of the upper, the method further includes the following steps:
respectively obtaining polishing tracks of the upper in a first posture and a second posture, wherein a fixed rotation angle exists between the first posture and the second posture;
converting the polishing track in the second posture according to a rotation translation matrix between the first posture and the second posture;
and splicing the converted polishing track with the polishing track in the first posture.
In one embodiment, in the step of converting the grinding track in the second posture according to a rotational translation matrix between the first posture and the second posture, the rotational translation matrix is obtained by:
mounting the calibration plate on the base in the first posture;
converting the first posture picture shot by the camera into a first posture point cloud of the calibration plate;
mounting the calibration plate on the base in the second posture;
converting the second posture picture shot by the camera into a second posture point cloud of the calibration plate;
and splicing and aligning the second posture point cloud of the calibration plate with the first posture point cloud of the calibration plate to obtain a rotation translation matrix capable of aligning the two.
In one embodiment, after the step of splicing and aligning the two tracks to be polished according to the position relation matrix to determine the polishing track of the upper, the method further includes the following steps:
performing trial polishing on the upper along the polishing track;
and carrying out position correction on the point position deviating from the actual position in the polishing track.
In one embodiment, the step of performing position correction on the point position deviating from the actual position in the polishing track includes the following steps:
selecting the number of the mark point deviating from the actual position in the polishing track;
selecting a degree of freedom to be adjusted for the marker point;
selecting a step size to be adjusted each time;
and pressing the adjustment key until the mark point coincides with the actual position.
In one embodiment, in the step of marking the position to be polished on the upper, the color of the mark is different from the color of the upper.
Drawings
FIG. 1 is a flowchart of a method for generating an upper grinding track according to an embodiment of the present invention;
FIG. 2 is a schematic structural view of an embodiment of the upper polishing device provided in step S11 shown in FIG. 1.
Detailed Description
In order that the above objects, features and advantages of the invention will be readily understood, a more particular description of the invention is given with reference to the appended drawings. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, the present invention may be embodied in many forms other than those described herein, and those skilled in the art can make similar modifications without departing from the spirit of the invention; the invention is therefore not limited to the specific embodiments disclosed below.
In the description of the present invention, it should be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", "axial", "radial", "circumferential", etc. indicate orientations or positional relationships based on those shown in the drawings; they are used merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the present invention.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
In the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," "secured," and the like are to be construed broadly; for example, a connection may be a fixed connection, a detachable connection, or an integral formation; it may be mechanical or electrical; it may be direct, indirect through an intermediary, or an internal communication or interaction between two elements. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In the present invention, unless expressly stated or limited otherwise, a first feature being "on" or "under" a second feature may mean that the two features are in direct contact, or that they are in indirect contact through an intervening medium. Moreover, a first feature being "above," "over," or "on" a second feature may mean that the first feature is directly above or obliquely above the second feature, or simply that the first feature is at a higher level than the second feature. A first feature being "under," "below," or "beneath" a second feature may mean that the first feature is directly below or obliquely below the second feature, or simply that the first feature is at a lower level than the second feature.
It will be understood that when an element is referred to as being "fixed" or "disposed" on another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. The terms "vertical," "horizontal," "upper," "lower," "left," "right," and the like are used herein for illustrative purposes only and are not meant to be the only embodiment.
Referring to fig. 1 and 2, a method for generating an upper polishing track according to an embodiment of the present invention includes the following steps.
S11, providing an upper polishing device, wherein the upper polishing device comprises two oppositely arranged machine vision modules.
Fig. 2 is a schematic diagram of an embodiment of an upper polishing device 900. The upper polishing device 900 includes a table 910 and a beam 920 mounted on the table 910; two machine vision modules 930 are fixedly mounted on the beam 920 at an interval. Specifically, each machine vision module 930 includes a camera 931 and a line laser generator 932. The line laser generator 932 emits a laser plane, that is, the emitted laser forms a laser line where it strikes the target object. The camera 931 takes pictures and is specifically an industrial camera with high resolution. In particular, the upper polishing device 900 further includes a base 940 for mounting the upper; the base 940 serves as a mounting platform for the upper. Further, the base 940 is disposed on the table 910 and can move relative to the table 910, and thus relative to the machine vision modules 930. Further, the machine vision modules 930 are disposed on the two sides of the moving track of the base 940 and are suspended above it, so that they can photograph an object mounted on the base 940 as the base 940 moves.
In one embodiment, the upper polishing device 900 includes a manipulator 950 and a polishing head 960 mounted at the distal end of the manipulator 950; the manipulator 950 can move the polishing head 960 to polish an upper mounted on the base 940. In addition, the upper polishing device 900 further includes a control center (not shown) connected to the machine vision modules 930 to receive the pictures taken by the cameras 931 and analyze them; the control center is also connected to the manipulator 950 to control it to move along a specified trajectory.
S12, calibrating the machine vision module to determine a workpiece coordinate system.
By calibrating the machine vision module, a workpiece coordinate system that reflects the actual coordinates of the target object can be determined; through this workpiece coordinate system, the feature information in the two-dimensional picture taken by the camera can be converted into real feature information in a three-dimensional coordinate system. It can be appreciated that a photograph with multiple mark points taken by the camera can thus be converted into three-dimensional point cloud data under the workpiece coordinate system.
Specifically, S12 includes the steps of:
s121, analyzing the calibration plate picture shot by the camera to obtain a camera correction calibration matrix.
The camera correction calibration matrix serves to calibrate the camera and characterize the distortion present when the camera shoots. For the image information of a target object captured by the camera, the distortion-free image information can be recovered through the camera correction calibration matrix, thereby correcting the picture.
Specifically, a calibration plate of a size suited to the field of view of the camera is prepared; the calibration plate is a plate printed with a standard black-and-white checkerboard. Within the camera's field of view, the calibration plate is placed at different positions and in different postures, the camera takes a corresponding picture each time, and the pictures are computed and analyzed with an algorithm to obtain the camera correction calibration matrix.
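To make the checkerboard step concrete, the following is a minimal Python sketch using OpenCV, assuming a 9×6 inner-corner board, a 10 mm square size and placeholder image paths; none of these values come from the patent, and the patent does not prescribe any particular library.

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)          # inner corners per row and column (assumed board layout)
SQUARE_MM = 10.0          # checker square edge length (assumed)

# ideal 3D corner positions of the board, with z = 0
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):         # board photographed in varied positions/postures
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

# K (camera matrix) and dist (distortion coefficients) together play the role of
# the "camera correction calibration matrix" described above.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
undistorted = cv2.undistort(cv2.imread("calib/sample.png"), K, dist)   # corrected picture
```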
S122, enabling the line laser generator to project laser lines onto the calibration plate, and analyzing the calibration plate picture with the laser lines shot by the camera to obtain a laser surface calibration matrix.
Specifically, the calibration plate is placed at a plurality of height positions, and the laser generated by the line laser generator is projected onto the calibration plate at each height; the camera then captures pictures of the calibration plate, which now carries a light bar illuminated by the laser. The plurality of calibration plate pictures with light bars are analyzed and processed with an algorithm to obtain the laser surface calibration matrix. With the laser surface calibration matrix, two-dimensional point coordinates on a picture can be converted into three-dimensional point coordinates.
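As one illustration of how a laser pixel becomes a three-dimensional point, the sketch below fits a plane to laser points gathered at several board heights and then intersects a pixel's viewing ray with that plane. The intrinsics, the point data and the pixel value are placeholders and assumptions, not values or procedures taken from the patent.

```python
import numpy as np
import cv2

def fit_plane(points_3d):
    """Least-squares plane n.x + d = 0 through Nx3 points (SVD on centered data)."""
    centroid = points_3d.mean(axis=0)
    _, _, vt = np.linalg.svd(points_3d - centroid)
    normal = vt[-1]                               # direction of least variance
    return normal, -normal.dot(centroid)

def laser_pixel_to_3d(uv, K, dist, normal, d):
    """Intersect the viewing ray of an undistorted pixel with the laser plane."""
    xy = cv2.undistortPoints(np.array([[uv]], np.float32), K, dist).reshape(2)
    ray = np.array([xy[0], xy[1], 1.0])           # ray direction in the camera frame
    t = -d / normal.dot(ray)                      # scale placing the point on the plane
    return t * ray

# placeholder intrinsics (would come from the camera-correction step)
K = np.array([[2500.0, 0.0, 1024.0], [0.0, 2500.0, 768.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# synthetic stand-in for laser points reconstructed on the board at several heights
rng = np.random.default_rng(1)
xy = rng.random((300, 2)) * 50.0
points_3d = np.column_stack([xy, 0.2 * xy[:, 0] + 0.1 * xy[:, 1] + 80.0])

n, d = fit_plane(points_3d)
p_cam = laser_pixel_to_3d((1100.0, 700.0), K, dist, n, d)   # placeholder laser pixel
```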
S123, mounting the calibration plate on the base, enabling the calibration plate to move relative to the camera, and analyzing a plurality of calibration plate pictures shot by the camera to obtain a camera gesture calibration matrix.
Specifically, the base is moved so as to carry the calibration plate gradually toward the camera; during this movement the camera shoots continuously to obtain a plurality of calibration plate pictures, and algorithmic analysis of these pictures yields the camera posture calibration matrix. Because the mounting posture of the camera is not ideal (for example, it is mounted at a tilt), the pictures it takes differ from those that would be taken in the standard posture; with the camera posture calibration matrix, the actual picture information in the standard posture can be recovered, thereby correcting the picture.
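One common way to recover the board's pose from a single picture is `cv2.solvePnP`; the sketch below uses it purely for illustration, with the same assumed board layout as before and placeholder intrinsics and image path. The patent only states that the pictures are analyzed algorithmically, so treat this as an assumption about one possible implementation.

```python
import cv2
import numpy as np

PATTERN = (9, 6)                                  # same assumed board as the intrinsics sketch
SQUARE_MM = 10.0
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

# K and dist would come from the camera-correction step; placeholders here
K = np.array([[2500.0, 0.0, 1024.0], [0.0, 2500.0, 768.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)

gray = cv2.imread("pose/board_000.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
found, corners = cv2.findChessboardCorners(gray, PATTERN)
ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)

R, _ = cv2.Rodrigues(rvec)                        # board rotation in the camera frame
T_board_in_cam = np.eye(4)
T_board_in_cam[:3, :3] = R
T_board_in_cam[:3, 3] = tvec.ravel()
# Chaining such poses across the pictures taken while the base moves gives the
# camera posture calibration matrix that removes the effect of the mounting tilt.
```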
S124, combining the camera correction calibration matrix, the laser surface calibration matrix and the camera posture calibration matrix to determine a workpiece coordinate system.
Combining the camera correction calibration matrix, the laser surface calibration matrix and the camera posture calibration matrix corrects the distortion and tilt deformation present in the captured two-dimensional pictures and yields a workpiece coordinate system that reflects the actual coordinates of the target object. It can be understood that the workpiece coordinate system is a coordinate system determined on the basis of the calibrated camera: photos with several mark points taken by the camera can be converted, through analysis and calculation, into a three-dimensional point cloud reflecting real coordinates in the workpiece coordinate system.
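In terms of the earlier sketches, the camera correction and laser-surface calibrations take a laser pixel to a 3D point in the camera frame; the posture calibration then moves that point into the workpiece frame. A minimal sketch of that last step follows, with an assumed placeholder transform, since the patent does not spell out the exact factorization of the combined matrices.

```python
import numpy as np

# T_cam_to_work: 4x4 transform from the camera frame to the workpiece frame, built
# from the camera posture calibration (placeholder value here).
T_cam_to_work = np.eye(4)
T_cam_to_work[:3, 3] = [0.0, 0.0, -350.0]         # e.g. camera suspended 350 mm above (assumed)

def camera_to_workpiece(p_cam, T_cam_to_work):
    """Move a 3D point produced by the laser-plane step into workpiece coordinates."""
    return (T_cam_to_work @ np.append(p_cam, 1.0))[:3]

p_work = camera_to_workpiece(np.array([12.0, -4.0, 310.0]), T_cam_to_work)   # placeholder point
```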
S13, unifying the workpiece coordinate systems determined by the two machine vision modules respectively to obtain a position relation matrix of the two machine vision modules.
The two cameras each shoot pictures containing several mark points; after analysis and conversion, corresponding three-dimensional point clouds are obtained in their respective workpiece coordinate systems. To unify the three-dimensional point clouds of the two workpiece coordinate systems into one global coordinate system, the positional relationship of the two cameras must be calibrated. Through the transformation given by the position relation matrix, two groups of point clouds based on two independent coordinate systems can then be unified into one coordinate system.
Specifically, S13 includes the steps of:
s131, the base is moved to a standby position, and the calibration plate is mounted on the base.
The calibration plate can be fixed on the base either by pressing it down with cylindrical blocks or by clamping it with small magnet pieces that attract to the base. The base is then moved to a standby position on the table.
S132, enabling the camera to photograph the calibration plate, and obtaining a plane picture.
And S133, converting the plane picture into a space point cloud.
And S134, splicing and aligning the space point clouds obtained by the two cameras to obtain a position relation matrix capable of aligning the space point clouds.
The space point clouds obtained by the two cameras at the same time are processed with an algorithm to extract their contours. The contours of the two space point clouds are first coarsely matched to obtain a position relation matrix that roughly aligns them, and then iteratively fine-matched to obtain a position relation matrix that aligns the two space point clouds completely. The resulting position relation matrix enables high-precision splicing alignment of the space point clouds, that is, the point clouds in the two independent coordinate systems can be unified into a global coordinate system.
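The closed-form rigid fit underlying such coarse and fine matching can be written in a few lines (Kabsch/SVD); an ICP-style fine match simply repeats this fit after re-establishing correspondences. The demo data below are synthetic and only illustrate the idea, not the patent's specific algorithm.

```python
import numpy as np

def rigid_transform(src, dst):
    """4x4 transform mapping Nx3 src points onto their matched dst points (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = cd - R @ cs
    return T

# synthetic demo: cloud_b is cloud_a rotated and translated, so the recovered
# transform plays the role of the position relation matrix between the two cameras
rng = np.random.default_rng(0)
cloud_a = rng.random((200, 3)) * 100.0
ang = np.deg2rad(30.0)
Rz = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0, 0.0, 1.0]])
cloud_b = cloud_a @ Rz.T + np.array([5.0, -3.0, 12.0])
T_b_to_a = rigid_transform(cloud_b, cloud_a)     # maps camera-B points into camera-A's frame
```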
S14, marking the position to be polished on the upper.
The position to be polished on the upper is marked so that the camera can identify the track, formed by the marks, of the position to be polished. In particular, the color of the mark needs to differ from the color of the upper so that the camera can identify it easily.
Specifically, in one embodiment, the mark is a paper tape. The color of the paper tape is chosen according to the color of the upper so that the two colors differ: for example, white tape is used for a black upper and black tape for a white upper. The position to be polished on the upper is then determined, and the paper tape is stuck along it. In other embodiments, the upper may be scribed at the position to be polished; since the color of the scribed line differs from the color of the upper, the line marks the position to be polished.
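For illustration, a contrasting mark such as white tape on a dark upper can be isolated with simple thresholding; the threshold, kernel size and the synthetic test image below are arbitrary assumptions rather than anything specified in the patent.

```python
import cv2
import numpy as np

# synthetic stand-in for a camera frame: dark upper with a light tape mark
frame = np.full((200, 300, 3), 40, np.uint8)
cv2.line(frame, (20, 150), (280, 60), (230, 230, 230), 6)    # the "tape"

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)   # keep only the light mark
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
ys, xs = np.nonzero(mask)                                    # pixels of the marked polishing line
```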
S15, enabling the two machine vision modules to scan marks on the shoe upper respectively, and obtaining two tracks to be polished.
The two machine vision modules respectively scan the marks on the upper; two space point clouds in the respective workpiece coordinate systems are obtained through analysis, and these space point clouds form the tracks to be polished in the respective workpiece coordinate systems.
Specifically, S15 includes the steps of:
s151, attaching the upper to the base.
S152 causes the two cameras to scan the marks on the upper, respectively.
S153, converting the pictures shot by the two cameras into space point clouds according to the respective workpiece coordinate systems of the two cameras, and respectively obtaining two tracks to be polished formed by the space point clouds.
S16, splicing and aligning the two tracks to be polished according to the position relation matrix to determine the polishing track of the upper.
Using the position relation matrix obtained in step S13, the tracks to be polished captured by the two cameras can be spliced and aligned, that is, unified into a global coordinate system, to form a polishing track point cloud; this point cloud then reflects the real polishing track of the upper.
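Applying the position relation matrix amounts to a homogeneous transform of one camera's track followed by concatenation, as in the sketch below; the track arrays and the identity matrix standing in for the calibrated position relation matrix are placeholders.

```python
import numpy as np

def transform_cloud(T, cloud):
    """Apply a 4x4 homogeneous transform to an Nx3 point cloud."""
    homog = np.hstack([cloud, np.ones((len(cloud), 1))])
    return (homog @ T.T)[:, :3]

T_b_to_a = np.eye(4)                 # placeholder for the calibrated position relation matrix
track_cam_a = np.array([[110.0, 35.0, 4.0], [114.0, 36.0, 4.0]])   # placeholder track, camera A
track_cam_b = np.array([[112.0, 64.0, 4.0], [116.0, 65.0, 4.0]])   # placeholder track, camera B

# bring camera B's track into camera A's frame and concatenate into one polishing track
track_global = np.vstack([track_cam_a, transform_cloud(T_b_to_a, track_cam_b)])
```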
With the shoe upper polishing track generation method described above, once the machine vision modules have been calibrated, the upper only needs to be scanned by the two machine vision modules and the two scanned tracks spliced and aligned to obtain the polishing track of the upper; the operation is convenient, quick, and efficient. Moreover, when the shoe model is changed, only the position to be polished on the new model needs to be marked, and the polishing track of the new model can then be obtained quickly through scanning by the two machine vision modules.
In actual operation, after a single scan by the two machine vision modules, blind areas tend to remain at the toe and heel of the upper and parts of them may go unscanned, so a polishing track point cloud of good quality and integrity cannot be obtained. It is therefore necessary to scan the upper in two postures. In this case, the upper polishing track generation method further includes the following steps.
S17, polishing tracks of the upper in a first posture and a second posture are respectively obtained, and a fixed rotation angle exists between the first posture and the second posture.
Define the first posture as the one in which the length direction of the upper coincides with the moving direction of the base; in this posture the two cameras mainly scan the two opposite sides of the upper. Define the second posture as the one in which the width direction of the upper coincides with the moving direction of the base; in this posture the two cameras mainly scan the toe and the heel of the upper. It will be appreciated that in this case there is a 90-degree rotation angle between the first and second postures.
It can be understood that when the upper is in the first posture, the two cameras scan the marks on the upper; after the space point clouds in the respective workpiece coordinate systems are obtained and unified through the position relation matrix as in step S16, the polishing track of the upper in the first posture is obtained. The polishing track of the upper in the second posture can be obtained in the same way.
S18, converting the polishing track in the second posture according to a rotation translation matrix between the first posture and the second posture.
Specifically, step S18 includes the steps of:
s181, the calibration plate is mounted on the base in a first posture.
The calibration plate is mounted on the base, and its current posture is regarded as corresponding to the first posture of the upper.
S182, converting the first posture picture shot by the camera into a first posture point cloud of the calibration plate.
The camera scans and photographs the calibration plate, and the first posture point cloud of the calibration plate in the first posture is obtained after data processing. It can be understood that the first posture point cloud is the point cloud information obtained by processing the pictures shot by the two cameras and unifying them into the global coordinate system through stitching alignment.
S183, mounting the calibration plate on the base in the second posture.
The calibration plate is rotated by the predetermined angle from the first posture and mounted on the base; this posture of the calibration plate is regarded as corresponding to the second posture of the upper.
S184, converting the second posture picture shot by the camera into a second posture point cloud of the calibration plate.
The camera scans and photographs the calibration plate, and the second posture point cloud of the calibration plate in the second posture is obtained after data processing. It can be understood that the second posture point cloud is the point cloud information obtained by processing the pictures shot by the two cameras and unifying them into the global coordinate system through stitching alignment.
S185, splicing and aligning the second posture point cloud of the calibration plate with the first posture point cloud of the calibration plate to obtain a rotation translation matrix capable of aligning the second posture point cloud with the first posture point cloud.
In the same global coordinate system, the first posture point cloud and the second posture point cloud overlap each other, and the second posture point cloud must be rotated and translated so that it returns to the position it occupied in the first posture and can be spliced with the first posture point cloud. In the process of rotating and translating the second posture point cloud into splicing alignment with the first posture point cloud, a rotation translation matrix that aligns the second posture point cloud with the first posture point cloud can be obtained through several iterative computations.
It can be understood that, because the rotation translation matrix between the first posture and the second posture cannot be known directly when scanning the upper, the calibration plate is mounted on the base in the first posture and in the second posture respectively, and the rotation translation matrix of the calibration plate between the two postures is solved to obtain the rotation translation matrix of the upper between the two postures. Once the rotation translation matrix between the first posture and the second posture is obtained, the polishing track point cloud in the second posture can be converted into the first posture.
And S19, splicing the converted polishing track with the polishing track in the first posture.
The polishing track converted from the second posture is spliced with the polishing track in the first posture to obtain the complete polishing track of the toe, the heel and the two sides of the upper.
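A minimal sketch of this merging step follows, assuming the 90-degree case described earlier and placeholder track points; in practice the rotation translation matrix comes from aligning the two calibration-board point clouds, not from a hard-coded angle.

```python
import numpy as np

ang = np.deg2rad(90.0)                        # the fixed rotation between the two postures (example)
T_pose2_to_pose1 = np.eye(4)
T_pose2_to_pose1[:3, :3] = [[np.cos(ang), -np.sin(ang), 0.0],
                            [np.sin(ang),  np.cos(ang), 0.0],
                            [0.0, 0.0, 1.0]]

track_pose1 = np.array([[120.0, 40.0, 5.0], [125.0, 42.0, 5.0]])   # placeholder side track
track_pose2 = np.array([[60.0, 10.0, 5.0], [62.0, 12.0, 5.0]])     # placeholder toe/heel track

homog = np.hstack([track_pose2, np.ones((len(track_pose2), 1))])
track_pose2_in_pose1 = (homog @ T_pose2_to_pose1.T)[:, :3]          # re-expressed in posture 1
full_track = np.vstack([track_pose1, track_pose2_in_pose1])         # complete upper track
```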
S21, performing trial polishing on the upper along the polishing track.
Based on the point cloud data of the polishing track obtained by splicing, the control center controls the manipulator to move, driving the polishing head to trial-polish the upper along the track defined by the point cloud.
S22, performing position correction on the point position deviating from the actual position in the polishing track.
In the process of trial polishing, if the actual motion trail of the polishing head deviates from the position to be polished on the upper, the point cloud data can be optimized through correction.
Specifically, S22 includes the steps of:
s221, selecting the number of the mark point deviating from the actual position in the polishing track.
The point cloud data stored in the control center can be displayed on a display screen connected to it. The mark points in the track are numbered automatically, that is, each mark point has a number. For a mark point that needs correction, its number can be selected on the display screen and the adjustment performed on that point.
S222, selecting the degree of freedom to be adjusted for the mark point.
The adjustment is made according to the 6 degrees of freedom of the global coordinate system in which the mark points lie: translation along the x-, y- and z-axes of the global coordinate system, and rotation about the x-, y- and z-axes.
In another embodiment, a local coordinate system can be established based on the polishing posture of the polishing head; adjusting the 6 degrees of freedom of the mark points in this local coordinate system makes it more convenient for the polishing head to polish along the polishing track in an optimal posture. In this case, the X, Y, Z coordinates of the selected mark point in the global coordinate system are displayed on the screen, and the Euler angles of the local coordinate system relative to the global coordinate system are denoted by α, β, γ respectively.
S223, selecting the step length to be adjusted each time.
The step box on the display screen offers five step-size options for each adjustment: 0.001, 0.01, 0.1, 1 and 10 (in mm). For example, after selecting step 1, each operation adjusts the distance by exactly 1 mm. A target total adjustment can be reached quickly by combining different step options; for example, 2 presses at step 10, 5 presses at step 1 and 3 presses at step 0.1 give a total adjustment of 25.3 mm.
S224, pressing the adjustment key until the mark point coincides with the actual position.
The display screen provides a W key and a D key, and the adjustment is performed manually: pressing the W key increases the selected value by one step, and pressing the D key decreases it by one step. In this way the angle and position of the mark point are corrected, and the whole polishing track is optimized.
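A toy sketch of this correction loop is given below: one numbered track point is nudged along a chosen axis by a chosen step per key press. The key semantics, step options and data layout are assumptions for illustration only, not the patent's interface.

```python
import numpy as np

STEPS = (0.001, 0.01, 0.1, 1.0, 10.0)        # selectable step sizes, in mm

def nudge(track, index, axis, step, direction):
    """Move one numbered track point; direction is +1 for 'W', -1 for 'D', axis 0/1/2 = x/y/z."""
    track = track.copy()
    track[index, axis] += direction * step
    return track

track = np.zeros((100, 3))                   # placeholder polishing-track points, mm
# e.g. 2 presses at step 10, 5 at step 1 and 3 at step 0.1 move a point by 25.3 mm in total
for step, presses in ((10.0, 2), (1.0, 5), (0.1, 3)):
    for _ in range(presses):
        track = nudge(track, index=42, axis=0, step=step, direction=+1)
```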
With the shoe upper polishing track generation method described above, the machine vision modules are calibrated in advance and the rotation translation matrix between the two scanning postures of the upper is obtained, so the machine vision modules can scan the upper in the two postures respectively; after data conversion and integration, an upper polishing track of good quality and integrity is obtained, and the operation is convenient and quick. Further, after the upper polishing track is obtained, it can be optimized through trial polishing and correction, so that the generated polishing track better matches the real polishing track.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered to fall within the scope of this description.
The above examples illustrate only a few embodiments of the invention; they are described in detail, but are not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and modifications without departing from the spirit of the invention, all of which fall within the scope of protection of the invention. Accordingly, the scope of protection of the present invention is defined by the appended claims.

Claims (7)

1. A shoe upper polishing track generation method, characterized by comprising the following steps:
providing an upper polishing device, wherein the upper polishing device comprises two oppositely arranged machine vision modules and a base for mounting the upper, and the machine vision modules comprise a camera and a line laser generator;
analyzing the calibration plate picture shot by the camera to obtain a camera correction calibration matrix;
enabling the line laser generator to project laser lines onto the calibration plate, and analyzing the calibration plate picture with the laser lines shot by the camera to obtain a laser surface calibration matrix;
mounting the calibration plate on the base so that the calibration plate and the camera move relative to each other, and analyzing a plurality of calibration plate pictures shot by the camera to obtain a camera posture calibration matrix;
combining the camera correction calibration matrix, the laser surface calibration matrix and the camera posture calibration matrix to determine a workpiece coordinate system;
moving the base to a standby position, installing the calibration plate on the base, enabling the camera to shoot the calibration plate to obtain a plane picture, converting the plane picture into space point clouds, and splicing and aligning the space point clouds obtained by the two cameras to obtain a position relation matrix capable of enabling the space point clouds to be aligned;
marking the position to be polished on the upper;
enabling the two machine vision modules to respectively scan marks on the upper and obtain two tracks to be polished;
and splicing and aligning the two tracks to be polished according to the position relation matrix to determine the polishing track of the upper.
2. The method for generating an upper grinding track according to claim 1, wherein the step of causing the two machine vision modules to scan the marks on the upper and obtain two tracks to be ground respectively includes:
mounting the upper to the base;
causing both of said cameras to scan indicia on said upper, respectively;
and converting the pictures shot by the two cameras into space point clouds according to the respective workpiece coordinate systems, and respectively obtaining two tracks to be polished formed by the space point clouds.
3. The upper grinding track generation method according to claim 1, further comprising, after the step of splicing and aligning the two tracks to be ground according to the position relation matrix to determine the grinding track of the upper, the steps of:
respectively obtaining polishing tracks of the upper in a first posture and a second posture, wherein a fixed rotation angle exists between the first posture and the second posture;
converting the polishing track in the second posture according to a rotation translation matrix between the first posture and the second posture;
and splicing the converted polishing track with the polishing track in the first posture.
4. A method of generating a grinding track for uppers according to claim 3, wherein in the step of converting the grinding track in the second posture based on a rotational-translational matrix between the first posture and the second posture, the rotational-translational matrix is obtained by:
mounting the calibration plate on the base in the first posture;
converting the first posture picture shot by the camera into a first posture point cloud of the calibration plate;
mounting the calibration plate on the base in the second posture;
converting the second posture picture shot by the camera into a second posture point cloud of the calibration plate;
and splicing and aligning the second posture point cloud of the calibration plate with the first posture point cloud of the calibration plate to obtain a rotation translation matrix capable of aligning the two.
5. The upper grinding track generation method according to claim 1, further comprising, after the step of splicing and aligning the two tracks to be ground according to the position relation matrix to determine the grinding track of the upper, the steps of:
performing trial polishing on the upper along the polishing track;
and carrying out position correction on the point position deviating from the actual position in the polishing track.
6. The upper grinding track generation method according to claim 5, wherein the step of performing position correction on the point position deviating from the actual position in the grinding track includes the steps of:
selecting the number of the mark point deviating from the actual position in the polishing track;
selecting a degree of freedom to be adjusted for the marker point;
selecting a step size to be adjusted each time;
and pressing the adjustment key until the mark point coincides with the actual position.
7. The method for generating a polishing track for an upper according to claim 1, wherein in the step of marking the position to be polished on the upper, a color of the mark is different from a color of the upper.
CN202011336516.5A 2020-11-25 2020-11-25 Shoe upper polishing track generation method Active CN114536156B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011336516.5A CN114536156B (en) 2020-11-25 2020-11-25 Shoe upper polishing track generation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011336516.5A CN114536156B (en) 2020-11-25 2020-11-25 Shoe upper polishing track generation method

Publications (2)

Publication Number Publication Date
CN114536156A (en) 2022-05-27
CN114536156B (en) 2023-06-16

Family

ID=81660521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011336516.5A Active CN114536156B (en) 2020-11-25 2020-11-25 Shoe upper polishing track generation method

Country Status (1)

Country Link
CN (1) CN114536156B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116713820B (en) * 2023-05-29 2024-03-26 东莞市捷圣智能科技有限公司 Polishing method, polishing system, polishing medium and polishing device for shoe upper processing

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN208259185U * 2018-05-14 2018-12-21 黑金刚(福建)自动化科技股份公司 A vision-based sole polishing apparatus
CN210158123U * 2019-04-25 2020-03-20 广东弓叶科技有限公司 EVA foam sole polishing equipment

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ITMI981211A1 (en) * 1998-06-01 1999-12-01 Cerim S P A Off Mec FOOTWEAR ASSEMBLING MACHINE WITH PROGRAMMABLE MEANS OF VISUAL REFERENCE
US8849620B2 (en) * 2011-11-18 2014-09-30 Nike, Inc. Automated 3-D modeling of shoe parts
JP6462986B2 (en) * 2014-02-07 2019-01-30 キヤノン株式会社 Robot control method, article manufacturing method, and control apparatus
CN105666274B * 2016-02-03 2018-03-09 华中科技大学 A plate edge grinding method based on vision control
CN208573118U * 2018-06-01 2019-03-05 深圳市智能机器人研究院 Automatic sole polishing equipment
CN110051083B * 2019-04-25 2020-11-24 广东弓叶科技有限公司 EVA foam sole polishing method and device
CN110245599A * 2019-06-10 2019-09-17 深圳市超准视觉科技有限公司 An intelligent three-dimensional weld seam automatic track-finding method
CN110150793B (en) * 2019-06-28 2024-02-06 泉州轻工职业学院 System and method for sole polishing process based on industrial robot
CN111109766B (en) * 2019-12-16 2021-10-22 广东天机工业智能系统有限公司 Shoe upper grinding device


Also Published As

Publication number Publication date
CN114536156A (en) 2022-05-27

Similar Documents

Publication Publication Date Title
US20230028351A1 (en) Laser patterning skew correction
TWI290363B (en) Method and system for marking a workpiece such as a semiconductor wafer and laser marker for use therein
US8223208B2 (en) Device and method for calibrating an imaging device for generating three dimensional surface models of moving objects
US20130063563A1 (en) Transprojection of geometry data
US20130060369A1 (en) Method and system for generating instructions for an automated machine
CN113305849B (en) Intelligent flat groove cutting system and method based on composite vision
CN106625713A (en) Method of improving gumming accuracy of gumming industrial robot
CN111460955A (en) Image recognition and processing system on automatic tracking dispensing equipment
CN113601333B (en) Intelligent flexible polishing method, device and equipment
CN114536156B (en) Shoe upper polishing track generation method
KR100502560B1 (en) Apparatus and Method for Registering Multiple Three Dimensional Scan Data by using Optical Marker
US20200338919A1 (en) Laser marking through the lens of an image scanning system
CN111983896B (en) High-precision alignment method for 3D exposure machine
CN111355894B (en) Novel self-calibration laser scanning projection system
CN113330487A (en) Parameter calibration method and device
CN112304250B (en) Three-dimensional matching equipment and method between moving objects
TWM561212U (en) Calibration equipment
CN111294527B (en) Infrared lens active imaging correction device and method
KR20180040316A (en) 3D optical scanner
KR20050071424A (en) Adjustment file making apparatus for laser marking system and the method thereof
CN209021585U (en) A kind of turntable workpiece assembly guiding structure
CN214558380U (en) Laser processing system capable of quickly positioning mechanical arm to three-dimensional coordinate system
JP2018031745A (en) Correction tool manufacturing method and three-dimensional measuring device
CN115908588A (en) Binocular camera positioning method of satellite antenna operation robot in tunnel
JP2023125925A (en) Method for correcting operation program, welding system, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant