CN114596261A - Wear detection method, device, terminal and medium based on three-dimensional reconstruction of tool nose - Google Patents
- Publication number
- CN114596261A (application number CN202210096485.3A)
- Authority
- CN
- China
- Prior art keywords
- dimensional
- tool nose
- nose
- tool
- volume
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/0008—Industrial image inspection checking presence/absence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30164—Workpiece; Machine component
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T90/00—Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Quality & Reliability (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention discloses a wear detection method, device, terminal and medium based on three-dimensional reconstruction of the tool nose. The detection method collects a sequence of images of the tool nose; obtains depth information of the tool nose from the sequence images; constructs a closed three-dimensional point cloud model of the tool nose from the depth information; performs surface reconstruction on the closed point cloud model to obtain a three-dimensional mesh model of the tool nose; computes the volume of the tool nose from the mesh model; and compares that volume with a pre-acquired standard volume (the volume of the unworn tool nose) to obtain the wear result. By directly collecting sequence images of the tool nose, deriving its depth information from them, building a three-dimensional model through surface reconstruction, and comparing the computed volume with the unworn volume, the wear condition of the tool nose is obtained directly and effectively.
Description
Technical Field
The invention relates to the technical field of cutter detection, in particular to a wear detection method, a wear detection device, a wear detection terminal and a wear detection medium based on three-dimensional reconstruction of a cutter tip.
Background
With the rapid development of manufacturing, dependence on machine tools keeps growing, machine-tool accuracy keeps rising, and the requirements on the cutting tools used for machining grow accordingly. During machining, wear and edge chipping of a milling cutter can leave the surface quality and precision of a product below standard, forcing rework or even scrapping and causing unnecessary loss and expense.
At present, tool wear detection mainly measures the wear width and extracts the wear area. These means lose the important information of wear depth and cannot reliably characterize the tool wear condition. The few existing three-dimensional reconstruction methods detect wear by reconstructing the overall appearance of the tool and cannot directly and effectively detect the wear of the tool nose. Yet the tool nose is the main wear site of a milling cutter during machining, and wear detection based on three-dimensional reconstruction of the tool nose is the most direct and effective way to evaluate tool life.
Thus, there is still a need for improvement and development of the prior art.
Disclosure of Invention
The invention mainly aims to provide a wear detection method, a wear detection device, a wear detection terminal and a wear detection medium based on three-dimensional reconstruction of a tool nose, and aims to solve the problem that the wear condition of the tool nose cannot be directly and effectively detected in the prior art.
In order to achieve the above object, the present invention provides a wear detection method based on three-dimensional reconstruction of the tool nose, wherein the method comprises:
collecting a sequence image of the tool nose;
obtaining depth information of the tool nose based on the sequence images;
constructing a three-dimensional point cloud closed model of the tool nose based on the depth information;
performing curved surface reconstruction on the three-dimensional point cloud closed model to obtain a three-dimensional grid model of the tool nose;
obtaining the volume of the tool nose based on the three-dimensional grid model;
and comparing the volume with a pre-acquired standard volume to obtain a wear result of the tool nose, wherein the standard volume is the volume when the tool nose is not worn.
Optionally, the obtaining depth information of the tool tip based on the sequence of images includes:
sequentially acquiring the definition value of each pixel point of each image frame in the sequence image;
based on the positions of the pixel points, sequentially obtaining the pixel points with the maximum definition value at each position in all image frames of the sequence image;
obtaining the depth value of the pixel point with the maximum definition value based on the position of the image frame to which the pixel point with the maximum definition value belongs in the sequence image;
and obtaining the depth information of the tool nose based on the depth values of all the positions.
Optionally, the sequentially obtaining the sharpness value of each pixel point of each image frame in the sequence image includes:
obtaining high-frequency information of the image frame by a non-subsampled shearlet transform (NSST);
based on the high-frequency information, obtaining gradient values of each pixel point of the image frame at different scales and in different shear directions according to a Laplacian operator;
and carrying out a weighted average of the gradient values over the scales and shear directions to obtain the sharpness value.
Optionally, the collecting of the sequence images of the tool nose includes:
setting the inclination angle of the shooting device;
and continuously shooting the images of the tool nose to obtain the sequence images based on the inclination angle.
Optionally, constructing a three-dimensional point cloud closed model of the tool nose based on the depth information includes:
constructing a three-dimensional point cloud model of the tool nose based on the depth information;
and adding point cloud information at the bottom and the side of the three-dimensional point cloud model to obtain the closed three-dimensional point cloud model.
Optionally, constructing a three-dimensional point cloud model of the tool nose based on the depth information includes:
generating a point cloud three-dimensional coordinate based on the depth information and the position of a pixel point corresponding to the depth information;
and obtaining a three-dimensional point cloud model of the tool nose based on the point cloud three-dimensional coordinates.
Optionally, the obtaining the volume of the tool tip based on the three-dimensional mesh model includes:
and based on the three-dimensional grid model, obtaining the volume of the tool nose according to a point cloud minimum envelope polyhedron method.
In order to achieve the above object, a second aspect of the present invention further provides a wear detection apparatus based on three-dimensional reconstruction of the tool nose, including:
the image acquisition module is used for acquiring a sequence image of the tool nose;
the depth information acquisition module is used for acquiring the depth information of the tool nose based on the sequence images;
the three-dimensional model acquisition module is used for constructing a three-dimensional point cloud closed model of the tool nose based on the depth information;
the curved surface reconstruction module is used for performing curved surface reconstruction on the three-dimensional point cloud closed model to obtain a three-dimensional mesh model of the tool nose;
the tool nose volume acquisition module is used for acquiring the volume of the tool nose based on the three-dimensional grid model;
and the wear result acquisition module is used for comparing the volume with a pre-acquired standard volume to acquire the wear result of the tool nose, wherein the standard volume is the volume when the tool nose is not worn.
A third aspect of the present invention provides an intelligent terminal, including a memory, a processor, and a wear detection program based on three-dimensional reconstruction of the tool nose, stored in the memory and executable on the processor; when executed by the processor, the program implements the steps of any of the wear detection methods based on three-dimensional reconstruction of the tool nose described above.
A fourth aspect of the present invention provides a computer-readable storage medium storing a wear detection program based on three-dimensional reconstruction of the tool nose; when executed by a processor, the program implements the steps of any of the wear detection methods based on three-dimensional reconstruction of the tool nose described above.
Therefore, in the scheme of the invention, sequence images of the tool nose are collected; depth information of the tool nose is obtained from the sequence images; a closed three-dimensional point cloud model of the tool nose is constructed from the depth information; surface reconstruction is performed on the closed model to obtain a three-dimensional mesh model of the tool nose; the volume of the tool nose is computed from the mesh model; and that volume is compared with a pre-acquired standard volume (the volume of the unworn tool nose) to obtain the wear result. Compared with the prior art, the scheme directly collects sequence images of the tool nose, derives its depth information from them, builds a three-dimensional model through surface reconstruction, and compares the computed volume with the unworn volume, so the wear condition of the tool nose is obtained directly and effectively.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the embodiments or the prior-art description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a wear detection method based on three-dimensional reconstruction of a tool nose according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the implementation scenario of FIG. 1 in accordance with the present invention;
FIG. 3 is a schematic flow chart illustrating the implementation of step S200 in FIG. 1;
FIG. 4 is a schematic structural diagram of a wear detection device based on three-dimensional reconstruction of a tool nose according to an embodiment of the present invention;
fig. 5 is a schematic block diagram of an internal structure of an intelligent terminal according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on context, to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings of the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described and will be readily apparent to those of ordinary skill in the art without departing from the spirit of the present invention, and therefore the present invention is not limited to the specific embodiments disclosed below.
With the rapid development of manufacturing, dependence on machine tools keeps growing, machine-tool accuracy keeps rising, and the requirements on the cutting tools used for machining grow accordingly. During machining, wear and edge chipping of a milling cutter can leave the surface quality and precision of a product below standard, forcing rework or even scrapping and causing unnecessary loss and expense.
At present, tool wear detection mainly measures the wear width and extracts the wear area. These means lose the important information of wear depth and cannot reliably characterize the tool wear condition. The few existing three-dimensional reconstruction methods detect wear by reconstructing the overall appearance of the tool and cannot directly and effectively detect the wear of the tool nose. Yet the tool nose is the main wear site of a milling cutter during machining, and wear detection based on three-dimensional reconstruction of the tool nose is the most direct and effective way to evaluate tool life.
The scheme of the invention directly acquires the sequence images of the tool tip, acquires the depth information of the tool tip according to the sequence images, constructs a three-dimensional model of the tool tip through curved surface reconstruction, and compares the calculated volume of the tool tip with the volume of the tool tip when the tool tip is not worn, thereby directly and effectively acquiring the wear condition of the tool tip.
Exemplary method
As shown in fig. 1, an embodiment of the present invention provides a wear detection method based on three-dimensional reconstruction of a nose, specifically, the method includes the following steps:
step S100: collecting a sequence image of the tool nose;
specifically, the tool nose refers to the part of the cutting edge with the worst working condition and the weakest structure, the strength and the heat dissipation condition are poor, the part is the most main worn-out part of the tool in the machining process, and the service life of the tool can be judged most directly and effectively by analyzing the condition of the tool nose. After the sequence images of the tool nose are collected by a focusing method, the tool nose can be subjected to three-dimensional reconstruction based on the sequence images, and a tool nose grinding damage result can be accurately obtained.
The distance between the shooting device and the tool nose is varied while the device is controlled to shoot continuously, and the captured images are assembled in shooting order into the sequence images of the tool nose.
Preferably, when the sequence images of the tool nose are collected, the shooting device photographs the tool nose at an angle to the tool axis; compared with shooting from directly below the tool nose, this displays the depth information of the tool nose more clearly in the sequence images.
As shown in fig. 2, tests showed that when the shooting device of this embodiment (comprising a microscope and an industrial camera, not shown in the figure) is inclined at 45° to the horizontal plane, the captured sequence images of the milling-cutter nose best reflect its depth information. Of course, the inclination angle of the shooting device can be set according to the actual machining scene. After the inclination angle is set, the machine-tool spindle is controlled to move toward the shooting device along the dotted line shown in fig. 2 while the shooting device continuously captures images of the tool nose, which compose the sequence images. On a five-axis numerically controlled machine tool, the cutter axis can move directly along the dotted line; otherwise, a numerical-control program driving the Z and Y axes of the spindle can be written so that the Z-axis and Y-axis step distances stay equal and constant, keeping the spindle on the dotted line throughout.
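The equal Z- and Y-axis stepping that keeps the spindle on the 45° dotted line can be sketched as follows (an illustrative Python sketch; `spindle_path`, the step size, and the frame count are our own hypothetical names and values, not from the patent):

```python
import numpy as np

def spindle_path(z0, y0, step, n_frames):
    """Generate spindle (Z, Y) positions with equal Z and Y increments,
    which keeps the spindle moving along the 45-degree dotted line."""
    k = np.arange(n_frames)
    z = z0 + k * step
    y = y0 + k * step
    return np.stack([z, y], axis=1)   # (n_frames, 2)

# illustrative values: 5 frames, 0.05 mm per axis per step
path = spindle_path(0.0, 0.0, 0.05, 5)
```

Because both axes step by the same amount, each captured frame corresponds to a known, evenly spaced focal position along the camera axis.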
Step S200: obtaining depth information of the tool nose based on the sequence image;
specifically, while the shooting device captures images of the tool nose, the distance between the tool nose and the device changes continuously. As the focal position of the shooting device shifts, each position on the tool nose passes between an in-focus and an out-of-focus state, so the pixels corresponding to a given position on the tool nose are sharp in one frame of the sequence and blurred in another. In other words, for each position on the tool nose there is a sharpest pixel in some frame of the sequence. The depth of the tool position corresponding to that pixel can therefore be calculated from the index, within the sequence, of the frame containing the sharpest pixel. Repeating this for every position yields the depth information of the whole tool nose.
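The frame-index-to-depth idea described above can be sketched as follows (a hedged Python sketch; it assumes the per-pixel sharpness values of step S210 are already available as a stack, and `depth_from_focus` is our own hypothetical name):

```python
import numpy as np

def depth_from_focus(sharpness_stack, z_positions):
    """sharpness_stack: (N, H, W) per-pixel sharpness of each of N frames.
    z_positions: focal height of each frame.
    Depth at (x, y) = focal height of the frame where that pixel is sharpest."""
    idx = np.argmax(sharpness_stack, axis=0)   # (H, W): index of sharpest frame
    return np.asarray(z_positions)[idx]        # (H, W) depth map

# toy stack: pixel (0, 0) is sharpest in frame 1, pixel (1, 1) in frame 2
stack = np.zeros((3, 2, 2))
stack[1, 0, 0] = 1.0
stack[2, 1, 1] = 1.0
depth = depth_from_focus(stack, [0.0, 0.1, 0.2])
```

With evenly stepped focal positions, the frame index maps linearly to physical height, which is exactly how the patent recovers depth from the sequence.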
Step S300: constructing a three-dimensional point cloud closed model of the tool nose based on the depth information;
in particular, the depth information reflects the depth at every position of the tool nose. For each position, the depth (the Z coordinate) is taken from the depth information and the planar coordinates (the X and Y values) are taken from the tool coordinate system; combined, they give the three-dimensional coordinates of one point of the point cloud. From the three-dimensional coordinates of all points, a three-dimensional point cloud model reflecting the current state of the tool nose is constructed. The method then needs to compute the volume of the tool nose from this model, but the model as obtained is not closed, so it must be closed first: point cloud data corresponding to the missing surfaces are supplemented so that the model encloses a volume. In this embodiment the shooting device photographs the tool nose obliquely, so the bottom and sides of the constructed model are open, and point cloud data are added there to close it.
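A minimal sketch of building and closing the point cloud, assuming a regular pixel grid, a flat added base, and a few interpolated side points (all our own simplifications; the patent does not prescribe how the supplementary points are generated):

```python
import numpy as np

def closed_point_cloud(depth, pixel_size, base_z):
    """Build (x, y, z) points from a depth map, then append a flat bottom
    and boundary side points so the cloud encloses a volume."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    top = np.stack([xs * pixel_size, ys * pixel_size, depth],
                   axis=-1).reshape(-1, 3)
    bottom = top.copy()
    bottom[:, 2] = base_z                       # flat base under every point
    edge = np.ones((h, w), bool)                # boundary-pixel mask
    edge[1:-1, 1:-1] = False
    sides = []
    for t in np.linspace(0.25, 0.75, 3):        # a few wall levels
        s = top[edge.ravel()].copy()
        s[:, 2] = base_z + t * (s[:, 2] - base_z)
        sides.append(s)
    return np.vstack([top, bottom] + sides)

pc = closed_point_cloud(np.full((3, 3), 1.0), pixel_size=1.0, base_z=0.0)
```

The added bottom and side points give the subsequent surface reconstruction a watertight boundary to mesh against.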
Step S400: performing curved surface reconstruction on the three-dimensional point cloud closed model to obtain a three-dimensional grid model of the tool nose;
step S500: obtaining the volume of the tool nose based on the three-dimensional grid model;
specifically, a three-dimensional mesh model of the tool nose is obtained by applying an existing surface reconstruction algorithm to the point coordinates of the closed three-dimensional point cloud model; the volume of the mesh model is then computed as the volume of the tool nose. Various curved-surface volume calculation methods can be used; this embodiment computes the volume of the tool nose by the point cloud minimum enveloping polyhedron method.
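One standard way to compute the volume of a closed triangle mesh, shown here in place of the patent's minimum enveloping polyhedron method (which the text does not spell out), is the divergence-theorem sum of signed tetrahedra:

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume enclosed by a closed, consistently oriented triangle mesh,
    as the absolute sum of signed tetrahedra (origin, v0, v1, v2)."""
    v = vertices[faces]                                   # (F, 3, 3)
    signed = np.einsum('ij,ij->i', v[:, 0], np.cross(v[:, 1], v[:, 2]))
    return abs(signed.sum()) / 6.0

# sanity check on a unit cube (8 vertices, 12 outward-oriented triangles)
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                  [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], float)
faces = np.array([[0, 2, 1], [0, 3, 2], [4, 5, 6], [4, 6, 7],
                  [0, 1, 5], [0, 5, 4], [1, 2, 6], [1, 6, 5],
                  [2, 3, 7], [2, 7, 6], [3, 0, 4], [3, 4, 7]])
cube_volume = mesh_volume(verts, faces)
```

This is why the point cloud must be closed in step S300: the signed-tetrahedron sum is only a true volume when the mesh is watertight and consistently oriented.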
Step S600: and comparing the volume with a pre-acquired standard volume to obtain a wear result of the tool nose, wherein the standard volume is the volume of the tool nose when the tool nose is not worn.
Specifically, the wear degree of the tool nose is obtained by computing its volume in the worn state and comparing it with the pre-acquired volume of the unworn tool nose. The unworn volume can be obtained in advance by collecting sequence images of the unworn tool nose following steps S100 to S500 above and carrying out the same three-dimensional modelling.
In summary, this embodiment first collects the sequence images of the tool nose; analyzes them to determine the sharpest pixel at each position and computes the depth information of the tool nose from the index of the frame containing that pixel; constructs a closed three-dimensional point cloud model from the depth information and the positional coordinates of the tool nose; performs surface reconstruction on the closed model to obtain a three-dimensional mesh model; computes the volume of the mesh model as the worn-tip volume; and compares it with the pre-wear volume to determine the wear result, for example locating the worn region, measuring the volume lost, and grading the wear degree by the ratio of the lost volume to the unworn volume.
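The final comparison of step S600 can be sketched as follows (the lost-volume ratio is one plausible wear criterion consistent with the paragraph above; the patent does not fix an exact formula, and `wear_result` is our own name):

```python
def wear_result(volume, standard_volume):
    """Compare the measured tip volume with the unworn (standard) volume.
    Returns the lost volume and the wear ratio (fraction worn away)."""
    lost = standard_volume - volume
    return lost, lost / standard_volume

# illustrative numbers: a tip measured at 0.85 units vs. 1.0 when unworn
lost, ratio = wear_result(0.85, 1.0)
```

A threshold on the ratio could then decide whether the tool still meets machining requirements.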
In this embodiment, as shown in fig. 3, the step S200: obtaining depth information of the tool nose specifically comprises:
step S210: sequentially acquiring the definition value of each pixel point of each frame of image in the sequence image;
specifically, the sharpness value is computed in turn for every pixel of every frame of the sequence. In this embodiment, the high-frequency information of each frame is obtained by a non-subsampled shearlet transform (NSST): decomposing frame i yields a low-frequency coefficient f_L^i(x, y) and high-frequency coefficients f_{j,l}^i(x, y) for the pixel at coordinates (x, y), where N is the total number of images in the sequence, j is the NSST decomposition scale, and l is the shear direction.
Then, from the high-frequency information, the gradient value G_{j,l}^i(x, y) of each pixel of frame i at each scale j and shear direction l is computed with the Laplacian operator.
The gradient values over the different scales j and shear directions l are then fused by a simple or weighted average, giving the sharpness value D^i(x, y) of each pixel in each frame of the sequence.
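As a hedged stand-in for the NSST sub-band gradients (implementing a full non-subsampled shearlet transform is beyond a short sketch), a plain multi-scale Laplacian focus measure illustrates the same weighted-average idea; all names and weights here are our own:

```python
import numpy as np

def laplacian(img):
    """3x3 Laplacian response with edge-replicated borders."""
    p = np.pad(img, 1, mode='edge')
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4.0 * p[1:-1, 1:-1])

def sharpness(img, scales=(1, 2), weights=(0.5, 0.5)):
    """Per-pixel sharpness: weighted average of |Laplacian| responses at a
    few dyadic scales (a plain-Laplacian substitute for NSST sub-bands)."""
    out = np.zeros_like(img, dtype=float)
    for s, w in zip(scales, weights):
        small = img[::s, ::s].astype(float)       # coarser scale by subsampling
        resp = np.abs(laplacian(small))
        # upsample the response back to full size by pixel repetition
        resp = np.repeat(np.repeat(resp, s, 0), s, 1)[:img.shape[0],
                                                      :img.shape[1]]
        out += w * resp
    return out

img = np.zeros((8, 8))
img[:, 4:] = 1.0                                  # a sharp vertical edge
s = sharpness(img)
```

In-focus regions concentrate high-frequency energy, so this measure peaks in the frame where a given tool-nose position is in focus, which is exactly what step S220 selects on.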
Step S220: based on the positions of the pixel points, sequentially obtaining the pixel points with the maximum definition value at each position in all images of the sequence image;
step S230: obtaining the depth value of the pixel point with the maximum definition value based on the position of the image to which the pixel point with the maximum definition value belongs in the sequence image;
specifically, the sequence images are captured consecutively with the same shooting device and the same subject, so every frame has the same size and the same pixel coordinates (x, y) in all frames correspond to the same position on the tool.
The sharpness values of the pixels at the same coordinates (x, y) are compared across all frames of the sequence, and the focal height of the frame containing the sharpest pixel is taken as the depth value z(x, y) of that pixel.
Step S240: obtaining the depth information of the tool nose based on the depth values of all the positions;
specifically, enumerating each position of the tool nose, and obtaining depth values of pixel points under coordinates corresponding to all the positions, wherein a set formed by the depth values is the depth information of the tool nose.
In conclusion, computing gradient values from the pixels' high-frequency information and deriving the sharpness value from those gradients locates the depth of each pixel accurately. The subsequently obtained three-dimensional coordinates are therefore accurate, the constructed point cloud model faithfully reflects the shape of the tool nose, the computed volume is more precise, and so, naturally, is the resulting wear result.
It should be noted that the method of the present invention can analyze the wear of the tool nose not only for large tools but also for miniature tools (e.g. smaller than 2 mm). Moreover, wear detection needs only a sequence of images acquired at a single fixed angle; the shooting device does not have to photograph from multiple angles to reconstruct the whole appearance of the tool, which would complicate the device and introduce redundancy into the reconstructed region.
Exemplary device
As shown in fig. 4, an embodiment of the present invention further provides a wear detection apparatus based on three-dimensional reconstruction of a tool nose, corresponding to a wear detection method based on three-dimensional reconstruction of a tool nose, where the wear detection apparatus based on three-dimensional reconstruction of a tool nose includes:
the image acquisition module 600 is used for acquiring a sequence image of the tool nose;
Specifically, the tool nose is the portion of the cutting edge that works under the harshest conditions and has the weakest structure; its strength and heat dissipation are poor, making it the part of the tool most prone to wear during machining. Analyzing the condition of the tool nose is therefore the most direct and effective way to judge the remaining service life of the tool. After sequence images of the tool nose are collected by the focusing method, the tool nose can be three-dimensionally reconstructed from those images and the wear result obtained accurately.
The distance between the shooting device and the tool nose is varied while the shooting device is controlled to shoot continuously; the captured images, combined in shooting order, form the sequence images of the tool nose.
Preferably, when the sequence images of the tool nose are collected, the shooting device photographs the tool nose at an angle to the tool axis; compared with shooting from directly below the tool nose, this presents the depth information of the tool nose more clearly in the sequence images.
As shown in fig. 2, when the shooting device (comprising a microscope and an industrial camera, not shown in the figure) in this embodiment is inclined at 45° to the horizontal plane, the sequence images of the milling-cutter nose reflect its depth information well. Of course, the inclination angle of the shooting device can be set according to the actual machining scene. After the inclination angle is set, the machine-tool spindle is controlled to move toward the shooting device along the dotted line shown in fig. 2 while the shooting device shoots continuously; the captured images compose the sequence images of the tool nose.
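The acquisition loop just described can be sketched as follows. `stage` and `camera` are hypothetical stand-ins for the spindle drive and the tilted microscope/industrial-camera assembly; their `move_to` and `grab` methods are assumed interfaces, not APIs from the patent.

```python
def acquire_focal_stack(stage, camera, z_start, z_stop, step):
    """Move the spindle toward the camera in fixed increments and grab
    one frame per height.  Returns the frames (the sequence images of
    the tool nose) and the height at which each frame was taken, as
    needed later for depth-from-focus.
    """
    frames, heights = [], []
    z = z_start
    while z <= z_stop + 1e-9:      # small tolerance for float stepping
        stage.move_to(z)           # assumed hardware-interface call
        frames.append(camera.grab())
        heights.append(z)
        z += step
    return frames, heights
```

The step size trades acquisition time against depth resolution: the recovered z(x, y) is quantized to these heights.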
A depth information obtaining module 610, configured to obtain depth information of the tool nose based on the sequence images;
Specifically, the distance between the tool nose and the shooting device changes continuously while images are captured, so the focal position of the shooting device changes and each position on the tool nose passes between the in-focus and out-of-focus states. As a result, the pixel points for a given tool-nose position appear very sharp in one frame of the sequence images and blurred in the others. That is, for every position on the tool nose there is one frame in which the corresponding pixel point is sharpest. The depth of that position can therefore be calculated from where this sharpest frame lies within the sequence. Repeating this operation for every position yields the depth information of the whole tool nose.
A three-dimensional model obtaining module 620, configured to construct a three-dimensional point cloud closed model of the tool nose based on the depth information;
In particular, the depth information reflects the depth of the tool nose at each position. The depth of each position (giving the Z coordinate) is read from the depth information, the planar coordinates of that position (giving the X and Y coordinates) are obtained from the tool coordinate system, and the two are combined into the three-dimensional coordinate of each cloud point. A three-dimensional point cloud model reflecting the current state of the tool nose is then constructed from all these coordinates. Because the volume of the tool nose must be calculated on this model, and the model as built is not closed, it must first be closed: corresponding point cloud data are supplemented wherever the model lacks them. In this embodiment the shooting device photographs the tool nose obliquely, so the bottom and side surfaces of the three-dimensional point cloud model are open, and point cloud data are added at the bottom and sides to close the model.
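The construction and closing of the point cloud can be sketched as follows. This is a simplified illustration under stated assumptions: `pixel_size` (the metric size of one pixel in the XY plane), the flat reference plane `floor_z`, and the five-level side-wall sampling are choices made here, not values from the patent.

```python
import numpy as np

def closed_point_cloud(depth, pixel_size, floor_z=0.0):
    """Build a closed point set for a tool nose from its depth map.

    depth: (H, W) array of depth values z(x, y).  The top surface
    comes from the depth map; a flat bottom at floor_z and vertical
    side walls along the boundary pixels are added so the cloud
    encloses a volume, mirroring the embodiment's supplementation of
    missing bottom and side point cloud data.
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float) * pixel_size
    top = np.column_stack([xs.ravel(), ys.ravel(), depth.ravel()])
    bottom = np.column_stack([xs.ravel(), ys.ravel(),
                              np.full(h * w, floor_z)])
    # Boundary pixels of the depth map become the side walls.
    edge = np.zeros((h, w), dtype=bool)
    edge[[0, -1], :] = True
    edge[:, [0, -1]] = True
    walls = [np.column_stack([xs[edge], ys[edge],
                              floor_z + t * (depth[edge] - floor_z)])
             for t in np.linspace(0.0, 1.0, 5)]
    return np.vstack([top, bottom] + walls)
```

A real implementation would close the surfaces actually left open by the oblique view; the flat floor here is only a convenient reference.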
A curved surface reconstruction module 630, configured to perform curved surface reconstruction on the three-dimensional point cloud closed model to obtain a three-dimensional mesh model of the tool nose;
a tool nose volume obtaining module 640, configured to obtain the volume of the tool nose based on the three-dimensional mesh model;
Specifically, a curved surface is fitted to the three-dimensional point cloud coordinates in the closed model to obtain the three-dimensional mesh model of the tool nose; the volume of this mesh model is then calculated as the volume of the tool nose. Various curved-surface volume calculation methods may be used for this step; in this embodiment the volume of the tool nose is obtained by the point cloud minimum envelope polyhedron method.
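Reading the "point cloud minimum envelope polyhedron" as the convex hull of the point cloud — one plausible interpretation, not confirmed by the patent — the volume step might look like:

```python
import numpy as np
from scipy.spatial import ConvexHull

def tip_volume(points):
    """Volume of the minimum enveloping polyhedron (here: convex hull)
    of a tool-nose point cloud.  points: (N, 3) array-like of XYZ
    coordinates from the closed point cloud model."""
    return ConvexHull(np.asarray(points, dtype=float)).volume
```

`ConvexHull` also exposes `simplices`, a triangulation of the hull surface, which loosely parallels the mesh-model step; note that a concave wear scar would call for a tighter envelope (e.g. alpha shapes) than a convex hull.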
The wear result acquisition module 650 is configured to compare the volume with a pre-acquired standard volume to obtain the wear result of the tool nose, where the standard volume is the volume of the tool nose when unworn.
Specifically, the degree of wear of the tool nose is obtained by measuring the volume of the worn tool nose and comparing it with the volume obtained in advance for the unworn tool nose. The unworn volume can be obtained by collecting sequence images of the unworn tool nose and applying steps S100 to S500 of the wear detection method based on three-dimensional reconstruction of the tool nose to reconstruct and measure it.
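The comparison itself is straightforward; the hypothetical helper below returns both the absolute worn-away volume and a wear ratio (the patent specifies only that the two volumes are "compared", so the ratio is an added convenience).

```python
def wear_result(worn_volume, standard_volume):
    """Compare the measured tool-nose volume against the unworn
    standard volume.  Returns (volume lost to wear, fraction of the
    standard volume lost)."""
    lost = standard_volume - worn_volume
    return lost, lost / standard_volume
```

The ratio gives a scale-independent wear indicator, useful when comparing tools of different nominal sizes.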
In this embodiment, the specific functions of each module of the wear detection apparatus based on three-dimensional reconstruction of the tool nose may refer to the corresponding descriptions in the wear detection method based on three-dimensional reconstruction of the tool nose, and are not repeated here.
Based on the above embodiments, the present invention further provides an intelligent terminal, a schematic block diagram of which may be as shown in fig. 5. The intelligent terminal comprises a processor, a memory, a network interface and a display screen connected through a system bus. The processor of the intelligent terminal provides computing and control capability. The memory of the intelligent terminal comprises a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a wear detection program based on three-dimensional reconstruction of the tool nose, and the internal memory provides an environment for running them. The network interface of the intelligent terminal is used to connect and communicate with external terminals through a network. When executed by the processor, the wear detection program based on three-dimensional reconstruction of the tool nose implements the steps of any one of the wear detection methods based on three-dimensional reconstruction of the tool nose. The display screen of the intelligent terminal may be a liquid crystal display or an electronic ink display.
It will be understood by those skilled in the art that the block diagram shown in fig. 5 is only a block diagram of the part of the structure related to the solution of the present invention and does not limit the intelligent terminal to which the solution is applied; a specific intelligent terminal may include more or fewer components than shown in the figure, combine certain components, or arrange the components differently.
In one embodiment, an intelligent terminal is provided, comprising a memory, a processor, and a wear detection program based on three-dimensional reconstruction of the tool nose that is stored in the memory and executable on the processor; when executed by the processor, the program performs the following operations:
collecting a sequence image of the tool nose;
obtaining depth information of the tool nose based on the sequence images;
constructing a three-dimensional point cloud closed model of the tool nose based on the depth information;
performing curved surface reconstruction on the three-dimensional point cloud closed model to obtain a three-dimensional grid model of the tool nose;
obtaining the volume of the tool nose based on the three-dimensional grid model;
and comparing the volume with a pre-acquired standard volume to obtain a wear result of the tool nose, wherein the standard volume is the volume when the tool nose is not worn.
The embodiment of the invention also provides a computer-readable storage medium, wherein the computer-readable storage medium stores a wear detection program based on three-dimensional reconstruction of the tool nose, and the wear detection program based on three-dimensional reconstruction of the tool nose is executed by a processor to realize the steps of any one of the wear detection methods based on three-dimensional reconstruction of the tool nose provided by the embodiment of the invention.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art would appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the above modules or units is only one logical division, and the actual implementation may be implemented by another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
The integrated modules/units described above, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the methods of the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the contents of the computer-readable storage medium may be increased or decreased as required by legislation and patent practice in the jurisdiction.
The above-mentioned embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the embodiments of the present invention and should be construed as falling within its scope.
Claims (10)
1. A wear detection method based on three-dimensional reconstruction of a tool nose, characterized by comprising the following steps:
collecting a sequence image of the tool nose;
obtaining depth information of the tool nose based on the sequence images;
constructing a three-dimensional point cloud closed model of the tool nose based on the depth information;
performing curved surface reconstruction on the three-dimensional point cloud closed model to obtain a three-dimensional grid model of the tool nose;
obtaining the volume of the tool nose based on the three-dimensional grid model;
and comparing the volume with a pre-acquired standard volume to obtain a wear result of the tool nose, wherein the standard volume is the volume when the tool nose is not worn.
2. The wear detection method based on three-dimensional reconstruction of a tool nose according to claim 1, wherein the obtaining depth information of the tool nose based on the sequence images comprises:
sequentially acquiring the sharpness value of each pixel point of each image frame in the sequence images;
based on the positions of the pixel points, sequentially obtaining the pixel point with the maximum sharpness value at each position across all image frames of the sequence images;
obtaining the depth value of the pixel point with the maximum sharpness value based on the position, within the sequence images, of the image frame to which that pixel point belongs;
and obtaining the depth information of the tool nose based on the depth values of all the positions.
3. The wear detection method based on three-dimensional reconstruction of a tool nose according to claim 2, wherein the sequentially obtaining the sharpness value of each pixel point of each image frame in the sequence images comprises:
obtaining high-frequency information of the image frame according to a non-subsampled shearlet transform;
based on the high-frequency information, obtaining gradient values of each pixel point of the image frame at different scales and in different shearing directions according to the Laplace algorithm;
and performing a weighted average of the gradient values over the different scales and shearing directions to obtain the sharpness value.
4. The wear detection method based on three-dimensional reconstruction of a tool nose according to claim 1, wherein the acquiring of the sequence images of the tool nose comprises:
setting the inclination angle of the shooting device;
and continuously shooting the images of the tool nose to obtain the sequence images based on the inclination angle.
5. The wear detection method based on three-dimensional reconstruction of a tool nose according to claim 1, wherein the constructing of the three-dimensional point cloud closed model of the tool nose based on the depth information comprises:
constructing a three-dimensional point cloud model of the tool nose based on the depth information;
and adding point cloud information at the bottom and the side of the three-dimensional point cloud model to obtain the closed three-dimensional point cloud model.
6. The wear detection method based on three-dimensional reconstruction of a tool nose according to claim 5, wherein the constructing a three-dimensional point cloud model of the tool nose based on the depth information comprises:
generating a point cloud three-dimensional coordinate based on the depth information and the position of a pixel point corresponding to the depth information;
and obtaining a three-dimensional point cloud model of the tool nose based on the point cloud three-dimensional coordinates.
7. The wear detection method based on three-dimensional reconstruction of a tool nose according to claim 1, wherein the obtaining the volume of the tool nose based on the three-dimensional mesh model comprises:
obtaining the volume of the tool nose from the three-dimensional mesh model according to a point cloud minimum envelope polyhedron method.
8. A wear detection apparatus based on three-dimensional reconstruction of a tool nose, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring a sequence image of the tool nose;
the depth information acquisition module is used for acquiring the depth information of the tool nose based on the sequence images;
the three-dimensional model acquisition module is used for constructing a three-dimensional point cloud closed model of the tool nose based on the depth information;
the curved surface reconstruction module is used for performing curved surface reconstruction on the three-dimensional point cloud closed model to obtain a three-dimensional mesh model of the tool nose;
the tool nose volume acquisition module is used for acquiring the volume of the tool nose based on the three-dimensional mesh model;
and the wear result acquisition module is used for comparing the volume with a pre-acquired standard volume to acquire the wear result of the tool nose, wherein the standard volume is the volume when the tool nose is not worn.
9. An intelligent terminal, characterized in that the intelligent terminal comprises a memory, a processor and a wear detection program based on three-dimensional reconstruction of a tool nose, which is stored on the memory and can run on the processor, and when the wear detection program based on three-dimensional reconstruction of a tool nose is executed by the processor, the steps of the wear detection method based on three-dimensional reconstruction of a tool nose according to any one of claims 1 to 7 are realized.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a wear detection program based on three-dimensional reconstruction of a tool nose, and when the program is executed by a processor, the steps of the wear detection method based on three-dimensional reconstruction of a tool nose according to any one of claims 1 to 7 are implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210096485.3A CN114596261A (en) | 2022-01-26 | 2022-01-26 | Wear detection method, device, terminal and medium based on three-dimensional reconstruction of tool nose |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210096485.3A CN114596261A (en) | 2022-01-26 | 2022-01-26 | Wear detection method, device, terminal and medium based on three-dimensional reconstruction of tool nose |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114596261A true CN114596261A (en) | 2022-06-07 |
Family
ID=81806544
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210096485.3A Pending CN114596261A (en) | 2022-01-26 | 2022-01-26 | Wear detection method, device, terminal and medium based on three-dimensional reconstruction of tool nose |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114596261A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115222744A (en) * | 2022-09-21 | 2022-10-21 | 江苏艾格莱德智能技术有限公司 | Cutter wear degree judgment method based on depth estimation |
CN115222744B (en) * | 2022-09-21 | 2022-11-25 | 江苏艾格莱德智能技术有限公司 | Cutter wear degree judgment method based on depth estimation |
CN115365890A (en) * | 2022-09-23 | 2022-11-22 | 深圳职业技术学院 | Method and device for online predicting tool wear value, intelligent terminal and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||