CN116797553A - Image processing method, device, equipment and storage medium - Google Patents

Image processing method, device, equipment and storage medium

Info

Publication number: CN116797553A
Application number: CN202310628134.7A
Authority: CN (China)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Original language: Chinese (zh)
Inventors: 杨牧, 杨辉华, 李建福
Original and current assignee: Techmach Corp (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Prior art keywords: target, determining, point, image, points
Application filed by Techmach Corp
Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection > G06T7/0004 Industrial image inspection
    • G06T7/10 Segmentation; Edge detection > G06T7/13 Edge detection
    • G06T7/10 Segmentation; Edge detection > G06T7/187 involving region growing; involving region merging; involving connected component labelling

Abstract

The embodiments of this specification provide an image processing method, apparatus, device, and storage medium. The image processing method comprises the following steps: acquiring a target edge image of a cutter; determining at least two target points based on the target edge image; determining corresponding matching points for the at least two target points according to a preset matching rule; determining a point group according to the at least two target points and the corresponding matching points; establishing a connected region based on the point group; and removing the image within the connected region. The precision of the cutter gap detection system is improved, and the repaired edge is smoother.

Description

Image processing method, device, equipment and storage medium
Technical Field
The embodiment of the specification relates to the technical field of image processing, in particular to an image processing method.
Background
In recent years, China's manufacturing industry has undergone significant transformation: traditional labor-intensive manufacturing tasks are being phased out, and with the help of artificial intelligence, advanced computer technology and powerful chips, manufacturing is rapidly developing toward intelligent, efficient and data-driven production. The requirements for industrial quality assurance and machining precision have risen accordingly. The slitting tool, as one of the direct executors in manufacturing and machining, is often used in numerical-control machining to cut a workpiece. High-precision machining places high demands on the slitting tool, and generally microscopic images of the slitting tool's cutting edge must be acquired to detect whether the relevant defects are within an allowable range. Because the cutting edge of the slitting tool adheres easily, and the cleanliness of the use, transportation and on-site inspection environments is poor, attachments of varying positions and forms, such as metal shavings, hair and cotton wool, easily adhere to the cutting edge. On one hand, attachments interfere with automatic focusing and clear imaging during microscopic acquisition; on the other hand, they seriously affect the accuracy of subsequent defect detection algorithms. Therefore, accurate detection and removal of attachments is a difficult problem that must be solved in order to realize a high-precision automatic detection system for cutter openings.
Disclosure of Invention
In view of this, the present embodiment provides an image processing method. One or more embodiments of the present specification relate to an image processing apparatus, a computing device, a computer-readable storage medium, and a computer program that solve the technical drawbacks existing in the prior art.
According to a first aspect of embodiments of the present specification, there is provided an image processing method including:
acquiring a target edge image of a cutter, and determining at least two target points based on the target edge image;
determining corresponding matching point positions for the at least two target point positions according to a preset matching rule;
determining a point location group according to the at least two target point locations and the corresponding matching point locations;
and establishing a connected region based on the point location group, and performing image removal on the connected region.
In one possible implementation manner, the acquiring the target edge image of the tool and determining at least two target points based on the target edge image includes:
collecting a target edge image of the cutter through a collecting device;
determining a preset position of the target edge image, and determining a window function corresponding to the preset position; wherein the preset position is any position in the target edge image;
acquiring an initial gray value of the preset position, and acquiring a target gray value of the window under the condition of window movement based on the window function;
determining gray level variation according to the initial gray level value and the target gray level value;
and determining the score of the preset position according to the gray level variation, and determining the preset position as a target point position according to the score.
In one possible implementation manner, the determining, according to a preset matching rule, a corresponding matching point for the at least two target points includes:
selecting a first target point position from the at least two target point positions, and determining Euclidean distances between other point positions and the first target point position; wherein the other point location is any one of the at least two target point locations;
and determining the matching point positions of the first target point position from the other point positions according to Euclidean distances between the other point positions and the first target point position.
In one possible implementation manner, the determining the point location group according to the at least two target point locations and the corresponding matching point locations includes:
establishing an association relation between the first target point location and a matching point location of the first target point location;
and determining a point location group according to the association relation.
In one possible implementation manner, the establishing a connected region based on the point location group, and performing image removal on the connected region, includes:
determining the area of a concave region based on the point group, and connecting the target points in the point group;
and determining a connected region based on the area of the concave region, and removing the image corresponding to the connected region from the target edge image.
In one possible implementation, the method further includes:
and determining a distance threshold, and screening the point location group according to the distance threshold to obtain a screened point location group.
In one possible implementation manner, determining a score of the preset position according to the gray level variation, and determining the preset position as a target point according to the score includes:
and determining a score threshold value, and determining the preset position as a target point position according to the score of the preset position and the score threshold value.
According to a second aspect of embodiments of the present specification, there is provided an image processing apparatus comprising:
the point position determining module is configured to acquire a target edge image of the cutter and determine at least two target point positions based on the target edge image;
the point location matching module is configured to determine corresponding matching point locations for the at least two target point locations according to a preset matching rule;
the point position collection module is configured to determine a point position group according to the at least two target point positions and the corresponding matching point positions;
and the image removing module is configured to establish a connected region based on the point location group and remove the image in the connected region.
According to a third aspect of embodiments of the present specification, there is provided a computing device comprising:
a memory and a processor;
the memory is configured to store computer executable instructions, and the processor is configured to execute the computer executable instructions, which when executed by the processor, implement the steps of the image processing method described above.
According to a fourth aspect of embodiments of the present specification, there is provided a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the steps of the above-described image processing method.
According to a fifth aspect of embodiments of the present specification, there is provided a computer program, wherein the computer program, when executed in a computer, causes the computer to perform the steps of the above-described image processing method.
The embodiments of this specification provide an image processing method, apparatus, device, and storage medium. The image processing method comprises the following steps: acquiring a target edge image of a cutter; determining at least two target points based on the target edge image; determining corresponding matching points for the at least two target points according to a preset matching rule; determining a point group according to the at least two target points and the corresponding matching points; establishing a connected region based on the point group; and removing the image within the connected region. The precision of the cutter gap detection system is improved, and the repaired edge is smoother.
Drawings
FIG. 1 is a schematic diagram of an application scenario of an image processing method provided in one embodiment of the present disclosure;
FIG. 2 is a flow chart of an image processing method provided in one embodiment of the present disclosure;
FIG. 3 is a schematic view of a watershed algorithm segmentation effect of an image processing method according to an embodiment of the present disclosure;
FIG. 4 is a schematic view of a tool edge image of an image processing method according to one embodiment of the present disclosure;
fig. 5 is a schematic diagram of an experimental result of the segmentation of attachments in an image processing method according to an embodiment of the present disclosure;
fig. 6 is a schematic structural view of an image processing apparatus according to an embodiment of the present specification;
FIG. 7 is a block diagram of a computing device provided in one embodiment of the present description.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present description. However, this description may be embodied in many forms other than those described herein, and those skilled in the art can make similar generalizations without departing from its spirit; therefore, this description is not limited to the specific implementations disclosed below.
The terminology used in the one or more embodiments of the specification is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the specification. As used in this specification, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, etc. may be used in one or more embodiments of this specification to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first may also be referred to as a second, and similarly a second may also be referred to as a first, without departing from the scope of one or more embodiments of the present description. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
The traditional machine vision field mainly distinguishes attachments from workpiece parts based on obvious differences in features such as texture, gray value, shape and color between the attachment and the background. Many researchers have used a variety of classical image segmentation methods to separate attachments from workpieces, including: the maximum between-class variance method, the maximum-entropy threshold segmentation method, the Markov random field segmentation method, the iterative threshold segmentation method and the edge detection segmentation method. However, traditional machine vision algorithms have weak feature-extraction capability and cannot effectively distinguish the attachment from the cutter.
Based on this, in the present specification, an image processing method is provided, and the present specification relates to an image processing apparatus, a computing device, and a computer-readable storage medium, one by one, in the following embodiments.
Referring to fig. 1, fig. 1 shows a schematic view of a scene of an image processing method according to an embodiment of the present specification.
In the application scenario of fig. 1, the computing device 101 may acquire a target edge image 102 of a tool. The computing device 101 may then determine at least two target points 103 based on the target edge image 102. Thereafter, the computing device 101 may determine corresponding matching points 104 for the at least two target points 103 according to a preset matching rule. A set of points 105 is determined from the at least two target points 103 and the corresponding matching points 104, and finally, the computing device 101 establishes a connected region 106 based on the set of points 105 and performs image removal in the connected region 106.
The computing device 101 may be hardware or software. When the computing device 101 is hardware, it may be implemented as a distributed cluster of multiple servers or terminal devices, or as a single server or single terminal device. When the computing device 101 is embodied as software, it may be installed in the hardware devices listed above. It may be implemented as a plurality of software or software modules, for example, for providing distributed services, or as a single software or software module. The present invention is not particularly limited herein.
Referring to fig. 2, fig. 2 shows a flowchart of an image processing method according to an embodiment of the present disclosure, which specifically includes the following steps.
Step 201: and acquiring a target edge image of the cutter, and determining at least two target points based on the target edge image.
In practical application, segmenting objects in a cutter-opening image is an irregular, overlapping or adhering image segmentation task, suited to a region-based image segmentation algorithm. The watershed algorithm is a typical region-based segmentation method: it responds well to weak edges, but it is extremely sensitive to noise in the image and prone to over-segmentation, so its segmentation effect on attachments in cutter-opening images is poor, as shown in figure 3. In the deep learning field, the features extracted by a network are not discriminated and the target is not learned in a targeted way, so computing resources are wasted on irrelevant features such as the background; the deep network also contains a large number of redundant parameters, overfitting is more likely to occur, and a larger amount of computation and storage space is consumed, which reduces model efficiency.
In a possible implementation manner, acquiring a target edge image of the tool and determining at least two target points based on the target edge image includes: acquiring a target edge image of the cutter through an acquisition device; determining a preset position of the target edge image and determining a window function corresponding to the preset position, wherein the preset position is any position in the target edge image; acquiring an initial gray value of the preset position, and acquiring a target gray value of the window when the window moves, based on the window function; determining a gray-level variation according to the initial gray value and the target gray value; and determining a score of the preset position according to the gray-level variation, and determining the preset position as a target point according to the score.
Specifically, the edge image of the slitting tool is acquired by an acquisition device, as shown in fig. 4. The attachment removal task for a cutter microscopic image is an image segmentation task with target adhesion and irregular shapes. To solve this task, the invention designs an attachment removal method based on pit matching, which uses pits (concave points) to describe the concavity of the image edge, and separates and removes the attachment according to the pits between the attachment and the cutter edge.
For example, pits in the concave areas of the edge image of the slitting tool are detected. Let $w(x, y)$ be the window function at edge-image position $(x, y)$, which represents the weight of each pixel in the detection window and is a bivariate normal distribution with the pixel at the window's center as the origin. Let $I(x, y)$ be the gray value at edge-image position $(x, y)$. When the detection window is shifted by $(u, v)$, the gray-value variation $E(u, v)$ is:

$$E(u, v) = \sum_{x, y} w(x, y)\,[I(x + u, y + v) - I(x, y)]^2 \tag{1}$$

At a pit point, the value of $E$ is much larger than at the other pixels of the image.
In one possible implementation manner, determining a score of the preset position according to the gray level variation, and determining the preset position as the target point according to the score includes: and determining a score threshold value, and determining the preset position as a target point position according to the score of the preset position and the score threshold value.
Further, to increase the calculation speed, let $I_x$ and $I_y$ be the partial derivatives of $I$ with respect to $x$ and $y$. A first-order Taylor expansion of $I(x + u, y + v)$ then gives:

$$E(u, v) \approx \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}, \qquad M = \sum_{x, y} w(x, y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \tag{2}$$

The score $R$ of each window is calculated from the eigenvalues $\lambda_1, \lambda_2$ of the matrix $M$; positions whose score exceeds the threshold are pits. The score function is:

$$R = \det(M) - k\,(\operatorname{tr} M)^2 = \lambda_1 \lambda_2 - k\,(\lambda_1 + \lambda_2)^2 \tag{3}$$
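As an illustrative sketch only (the patent publishes no code), the pit-detection step described by equations (1) to (3) can be approximated with a Harris-style response in NumPy. The window size, the constant k and the score threshold below are assumed values, and a box window stands in for the bivariate normal window to keep the example self-contained.

```python
import numpy as np

def harris_response(img: np.ndarray, k: float = 0.04, win: int = 5) -> np.ndarray:
    """Harris-style score R = det(M) - k*tr(M)^2 per pixel (eq. 3),
    with M built from windowed products of the image gradients (eq. 2)."""
    img = img.astype(np.float64)
    Ix = np.gradient(img, axis=1)  # partial derivative I_x
    Iy = np.gradient(img, axis=0)  # partial derivative I_y

    def box(a: np.ndarray) -> np.ndarray:
        # Sum the gradient products over the detection window.
        pad = win // 2
        ap = np.pad(a, pad, mode="edge")
        out = np.zeros_like(a)
        for dy in range(win):
            for dx in range(win):
                out += ap[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace

def detect_pits(img: np.ndarray, thresh_ratio: float = 0.1) -> np.ndarray:
    """Return (row, col) coordinates whose score exceeds a fraction of the max."""
    R = harris_response(img)
    ys, xs = np.where(R > thresh_ratio * R.max())
    return np.stack([ys, xs], axis=1)
```

On a synthetic white square, the strongest responses land near the square's corners, which play the role of the concave points on a real edge image.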
step 202: and determining corresponding matching point positions for at least two target point positions according to a preset matching rule.
In one possible implementation manner, determining corresponding matching points for at least two target points according to a preset matching rule includes: and selecting a first target point from at least two target point positions, determining Euclidean distance between other points and the first target point, wherein the other points are any one of the at least two target point positions, and determining the matching point of the first target point from the other points according to the Euclidean distance between the other points and the first target point.
In practical application, according to the pit detection result, the Euclidean distance between pits is calculated iteratively, and the shortest distance pit pair is matched.
For example, there are a target point A, a target point B, and a target point C. Calculate the Euclidean distance L1 between target point A and target point B and the Euclidean distance L2 between target point A and target point C, compare L1 with L2, and select the target point corresponding to the smaller Euclidean distance as the matching point of target point A.
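The nearest-distance matching in this example can be sketched as follows; this is a simple greedy nearest-neighbour search for illustration, not necessarily the patent's exact iterative procedure.

```python
import math

def match_pits(pits):
    """Pair each pit with its nearest other pit by Euclidean distance.

    pits: list of (x, y) tuples. Returns a list of (i, j) index pairs
    where pit j is the closest pit to pit i.
    """
    pairs = []
    for i, (xi, yi) in enumerate(pits):
        best_j, best_d = None, float("inf")
        for j, (xj, yj) in enumerate(pits):
            if i == j:
                continue
            d = math.hypot(xi - xj, yi - yj)
            if d < best_d:
                best_d, best_j = d, j
        if best_j is not None:
            pairs.append((i, best_j))
    return pairs
```

With pits A = (0, 0), B = (3, 4) and C = (1, 1) as in the text, A is matched to C because its distance to C (about 1.41) is smaller than its distance to B (5.0).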
Step 203: and determining the point group according to the at least two target points and the corresponding matching points.
In one possible implementation, the method further includes: and determining a distance threshold value, and screening the point group according to the distance threshold value to obtain a screened point group.
In practical applications, to avoid over-segmentation, a distance threshold may be set; pit pairs whose distance exceeds the threshold are excluded.
For example, there are a target point A, a target point B, and a target point C. Calculate the Euclidean distance L1 between target point A and target point B and the Euclidean distance L2 between target point A and target point C, and compare L1 and L2 with the distance threshold respectively. If both L1 and L2 satisfy the distance-threshold condition, the target point corresponding to the smaller Euclidean distance is selected as the matching point of target point A; if only one Euclidean distance satisfies the distance-threshold condition, the target point corresponding to that distance is taken as the matching point of target point A.
In one possible implementation, determining the set of points according to the at least two target points and the corresponding matching points includes: and establishing an association relation between the first target point location and the matching point location of the first target point location, and determining a point bit group according to the association relation.
For example, there are a target point A, a target point B, and a target point C. Calculate the Euclidean distance L1 between target point A and target point B and the Euclidean distance L2 between target point A and target point C, and compare L1 and L2 with the distance threshold respectively. If both satisfy the distance-threshold condition, the target point corresponding to the smaller Euclidean distance is selected as the matching point of target point A. If the matching point of target point A is target point C, an association relation between target point A and target point C is established.
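Combining the distance-threshold screening with the association step, a minimal sketch follows; the threshold value used in the test is an illustrative assumption.

```python
import math

def build_point_groups(pits, dist_threshold):
    """Associate each pit with its nearest neighbour within dist_threshold.

    pits: list of (x, y) tuples. Returns a list of (i, j) index pairs;
    pits whose nearest neighbour exceeds the threshold are left unmatched.
    """
    groups = []
    for i, (xi, yi) in enumerate(pits):
        # Distances from pit i to every other pit.
        candidates = [
            (math.hypot(xi - xj, yi - yj), j)
            for j, (xj, yj) in enumerate(pits) if j != i
        ]
        # Screen out pairs beyond the distance threshold.
        candidates = [(d, j) for d, j in candidates if d <= dist_threshold]
        if candidates:
            _, j = min(candidates)
            groups.append((i, j))
    return groups
```

With A = (0, 0), B = (3, 4), C = (1, 1) and threshold 2.0, A and C are associated with each other, while B (more than 2.0 away from both) is excluded, matching the over-segmentation screening described above.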
Step 204: and establishing a connected region based on the point location group, and performing image removal on the connected region.
In one possible implementation, establishing a connected region based on the point location group, and performing image removal in the connected region, includes: determining the area of a concave region based on the point group, connecting the target points in the point group, determining a connected region based on the area of the concave region, and removing the image corresponding to the connected region from the target edge image.
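A minimal sketch of the connect-and-remove idea on a binary attachment mask, assuming a straight line closes the contour between a matched pit pair and a 4-connected flood fill stands in for connected-component extraction; a production system would likely use library drawing and connected-component routines instead.

```python
import numpy as np
from collections import deque

def close_contour(mask: np.ndarray, p, q) -> np.ndarray:
    """Draw a straight line between a matched pit pair (row, col) so the
    attachment region becomes a closed connected component."""
    out = mask.copy()
    n = max(abs(q[0] - p[0]), abs(q[1] - p[1])) + 1
    for t in np.linspace(0.0, 1.0, n):
        y = int(round(p[0] + t * (q[0] - p[0])))
        x = int(round(p[1] + t * (q[1] - p[1])))
        out[y, x] = 1
    return out

def remove_connected_region(mask: np.ndarray, seed) -> np.ndarray:
    """Erase the 4-connected foreground region containing `seed` from a
    binary mask (1 = attachment pixel). Returns a cleaned copy."""
    out = mask.copy()
    h, w = out.shape
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if 0 <= y < h and 0 <= x < w and out[y, x]:
            out[y, x] = 0
            queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return out
```

Seeding the fill anywhere inside the closed attachment contour removes the whole connected region while leaving the rest of the edge image untouched.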
Specifically, the attachment connected region on the cutter edge is established from the pit matching information, and attachment identification and removal are realized. The removal effect is evaluated with a structural-similarity (SSIM) index:

$$\mathrm{SSIM} = \frac{(2\mu_T \mu_G + c_1)(2\sigma_{TG} + c_2)}{(\mu_T^2 + \mu_G^2 + c_1)(\sigma_T^2 + \sigma_G^2 + c_2)} \tag{4}$$

where $\mu_T$ and $\sigma_T^2$ denote the gray mean and gray variance of the T-th original slitting-tool image with attachments, $\mu_G$ and $\sigma_G^2$ denote the gray mean and gray variance of the T-th algorithm-generated slitting-tool image, and $\sigma_{TG}$ denotes the gray covariance between the T-th generated image and the original image with attachments. To prevent an evaluation anomaly caused by a zero denominator, the constants $c_1$ and $c_2$ are set to 0.01 and 0.02, respectively.
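The mean/variance/covariance description of equation (4) can be computed globally over a pair of grayscale images as follows. This sketch uses whole-image statistics rather than a sliding window, with the constants 0.01 and 0.02 taken from the text; both choices are simplifying assumptions.

```python
import numpy as np

def ssim(original: np.ndarray, generated: np.ndarray,
         c1: float = 0.01, c2: float = 0.02) -> float:
    """Global structural-similarity score following equation (4)."""
    t = original.astype(np.float64)
    g = generated.astype(np.float64)
    mu_t, mu_g = t.mean(), g.mean()
    var_t, var_g = t.var(), g.var()
    cov = ((t - mu_t) * (g - mu_g)).mean()  # gray covariance
    num = (2.0 * mu_t * mu_g + c1) * (2.0 * cov + c2)
    den = (mu_t ** 2 + mu_g ** 2 + c1) * (var_t + var_g + c2)
    return num / den
```

An image compared against itself scores exactly 1.0; the score drops as the generated image diverges from the original.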
The PSNR calculates the reconstruction quality of an image from the pixel differences between the generated image and the original image. PSNR is measured in dB; the better the reconstruction quality, the larger the value. The calculation formula is:

$$\mathrm{PSNR} = 10 \cdot \log_{10}\!\left(\frac{\mathrm{MAX}^2}{\mathrm{MSE}}\right) \tag{5}$$

where MAX is the maximum possible pixel value and MSE is the mean squared error between the generated image and the original image.
the IOU is used for calculating the spatial position correlation between the attachment dividing region and the labeling dividing region, the attachment dividing region participating in calculation is the smallest circumscribed rectangle of the attachment region identified by the algorithm, the evaluation index calculated value is in the interval of 0 to 1, and the higher the division correlation is, the larger the numerical value is relatively.
The embodiments of this specification provide an image processing method, apparatus, device, and storage medium. The image processing method comprises the following steps: acquiring a target edge image of a cutter; determining at least two target points based on the target edge image; determining corresponding matching points for the at least two target points according to a preset matching rule; determining a point group according to the at least two target points and the corresponding matching points; establishing a connected region based on the point group; and removing the image within the connected region. This solves the problem of attachments in the cutter image reducing the precision of cutter gap detection, with an obvious removal effect, and improves the precision of the cutter gap detection system. It also addresses the difficulty of segmenting the weakly textured regions of slitting-tool images; compared with deep-learning networks, the repaired edge is smoother, meeting the practical requirements of industrial sites. Because a region-segmentation idea is adopted, the method has strong extraction capability for attachment features and low consumption of computing resources, and segmenting the attachment from the cutter edge by pit matching makes the repaired edge smoother.
Furthermore, the original edge of the cutter is completely restored, and the reliability and the stability of the cutter gap high-precision detection system are enhanced on the premise of considering the calculated amount and the precision.
Furthermore, deep-learning models whose precision had stabilized in experiments, namely the DDN, JORDER and RESCAN models, were selected for comparison to verify the effectiveness of the method. The quantitative evaluation results are shown in table 1: the method provided by this scheme has the best attachment-removal effect. The proposed method focuses on the pits between the slitting-tool edge and the attachments, makes full use of the position information between pits to segment the attachments, and at the same time smoothly repairs the cutter edge, so it has advantages in both attachment localization and image reconstruction. Therefore, the pit-matching-based attachment removal algorithm provided by this scheme outperforms the deep-learning algorithms.
TABLE 1
Fig. 5 shows the segmentation comparison results of each algorithm. From top to bottom, the rows are: the cutter image containing attachments, the image binarization result, the DDN network, the JORDER network, the RESCAN network, the method of this scheme, and the final attachment segmentation effect diagram. Analysis shows that in the detection results of the other methods, irregular bulges appear where the attachment joins the cutter, and wrongly removed areas appear. The proposed method can accurately segment the cutter area and the attachment area, smoothly repairs the cutter edge, and does not exhibit the phenomenon of erroneous removal. Experiments show that the pit-matching attachment removal method provided by the invention recognizes and localizes attachments very accurately; by calculating the position information between pits, the cutter edge is restored smoothly, and high accuracy and good robustness are maintained for different attachments.
Corresponding to the above method embodiments, the present disclosure further provides an image processing apparatus embodiment, and fig. 6 shows a schematic structural diagram of an image processing apparatus according to one embodiment of the present disclosure. As shown in fig. 6, the apparatus includes:
according to a second aspect of embodiments of the present specification, there is provided an image processing apparatus comprising:
the point position determining module 601 is configured to acquire a target edge image of a cutter and determine at least two target point positions based on the target edge image;
the point location matching module 602 is configured to determine corresponding matching points for the at least two target points according to a preset matching rule;
a point location aggregation module 603 configured to determine a point location group according to the at least two target points and the corresponding matching points;
the image removing module 604 is configured to establish a connected region based on the point location group and perform image removal in the connected region.
In one possible implementation, the point location determination module 601 is further configured to:
collecting a target edge image of the cutter through a collecting device;
determining a preset position of the target edge image, and determining a window function corresponding to the preset position; wherein the preset position is any position in the target edge image;
acquiring an initial gray value at the preset position, and acquiring the target gray value of the window after the window is shifted according to the window function;
determining gray level variation according to the initial gray level value and the target gray level value;
and determining the score of the preset position according to the gray level variation, and determining the preset position as a target point position according to the score.
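The scoring procedure above (take a window at a candidate position, shift it, and score the position by the gray-level change) reads like a Moravec/Harris-style corner response. The following is a minimal pure-Python sketch under that reading; the eight-direction shift set, the window size and the function names are illustrative assumptions, not taken from the patent:

```python
def moravec_score(img, y, x, win=1):
    """Score position (y, x) by the minimum summed squared gray-level
    change when a (2*win+1) x (2*win+1) window is shifted one pixel in
    each of eight directions; corner-like positions score highest."""
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]
    h, w = len(img), len(img[0])
    best = None
    for dy, dx in shifts:
        ssd = 0
        for oy in range(-win, win + 1):
            for ox in range(-win, win + 1):
                y0, x0 = y + oy, x + ox    # window pixel (initial gray value)
                y1, x1 = y0 + dy, x0 + dx  # same pixel after the shift (target gray value)
                if 0 <= y0 < h and 0 <= x0 < w and 0 <= y1 < h and 0 <= x1 < w:
                    d = img[y1][x1] - img[y0][x0]
                    ssd += d * d
        best = ssd if best is None else min(best, ssd)
    return best


def target_points(img, score_threshold, win=1):
    """Keep every position whose score exceeds the score threshold."""
    return [(y, x)
            for y in range(win, len(img) - win)
            for x in range(win, len(img[0]) - win)
            if moravec_score(img, y, x, win) > score_threshold]
```

On a flat image every shift leaves the gray levels unchanged and the score is 0; along a straight edge the shift parallel to the edge is unchanged, so only corner-like positions — the pits at the cutter edge — score highly.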
In one possible implementation, the point location matching module 602 is further configured to:
selecting a first target point from the at least two target points, and determining the Euclidean distance between each other point and the first target point; wherein the other points are any of the at least two target points other than the first target point;
and determining the matching point of the first target point from the other points according to the Euclidean distances between the other points and the first target point.
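Taken literally, this matching rule is a nearest-neighbour search under the Euclidean metric. A small sketch, assuming points are 2-D (y, x) tuples (the function names are illustrative):

```python
import math


def match_point(first, others):
    """Return the point among `others` with the smallest Euclidean
    distance to `first` -- its matching point."""
    return min(others, key=lambda p: math.dist(first, p))


def build_point_groups(points):
    """Pair every target point with its nearest neighbour and record
    the association as a (point, match) group."""
    groups = []
    for i, p in enumerate(points):
        others = points[:i] + points[i + 1:]
        if others:
            groups.append((p, match_point(p, others)))
    return groups
```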
In one possible implementation, the point location aggregation module 603 is further configured to:
establishing an association relation between the first target point location and a matching point location of the first target point location;
and determining a point group according to the association relation.
In one possible implementation, the image removal module 604 is further configured to:
determining a concave-region area based on the point group, and connecting the target points in the point group;
and determining a connected region based on the concave-region area, and removing the image corresponding to the connected region from the target edge image.
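One plausible reading of this step: connect the points of a point group into a closed polygon, take its interior as the connected region, and blank those pixels in the target edge image. A sketch using an even-odd ray-casting interior test; the fill value and the (y, x) polygon representation are assumptions, not the patent's specification:

```python
def point_in_polygon(y, x, poly):
    """Even-odd ray-casting test: is (y, x) inside the polygon `poly`
    (a list of (y, x) vertices)?"""
    inside = False
    n = len(poly)
    for i in range(n):
        y0, x0 = poly[i]
        y1, x1 = poly[(i + 1) % n]
        if (y0 > y) != (y1 > y):
            # x-coordinate where this polygon edge crosses the horizontal ray
            cross = x0 + (y - y0) * (x1 - x0) / (y1 - y0)
            if x < cross:
                inside = not inside
    return inside


def remove_region(img, poly, fill=0):
    """Blank every pixel inside the connected region bounded by the
    point group `poly`; returns a new image, leaving `img` untouched."""
    return [[fill if point_in_polygon(y, x, poly) else img[y][x]
             for x in range(len(img[0]))]
            for y in range(len(img))]
```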
In one possible implementation, the point location aggregation module 603 is further configured to:
and determining a distance threshold, and screening the point location group according to the distance threshold to obtain a screened point location group.
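The distance-threshold screening can be sketched as dropping the groups whose paired points lie too far apart; whether "too far" means above or below the threshold is not specified here, so the direction chosen below (keep nearby pairs) is an assumption:

```python
import math


def filter_point_groups(groups, dist_threshold):
    """Keep only the point groups whose two points lie within the
    distance threshold; distant pairs are assumed not to bound one pit."""
    return [(a, b) for a, b in groups
            if math.dist(a, b) <= dist_threshold]
```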
In one possible implementation, the point location aggregation module 603 is further configured to:
and determining a score threshold value, and determining the preset position as a target point position according to the score of the preset position and the score threshold value.
The embodiments of the present specification provide an image processing method, apparatus, device and storage medium. The method includes: acquiring a target edge image of a cutter, determining at least two target points based on the target edge image, determining corresponding matching points for the at least two target points according to a preset matching rule, determining a point group according to the at least two target points and the corresponding matching points, establishing a connected region based on the point group, and performing image removal in the connected region. This solves the problem that attachments in the cutter picture reduce cutter-gap detection precision: the removal effect is obvious and the precision of the cutter-gap detection system is improved. It also addresses the difficulty of segmenting a slitting-cutter picture with few texture features; compared with a deep-learning network, the repaired edge is smoother, meeting the practical requirements of an industrial site. Because a region-segmentation idea is adopted, the method extracts attachment features strongly while consuming few computing resources, and segmenting the attachment from the cutter edge by pit matching makes the repaired edge smoother.
Furthermore, the original edge of the cutter is completely restored, and the reliability and stability of the high-precision cutter-gap detection system are enhanced while balancing computation and precision.
The above is a schematic scheme of an image processing apparatus of the present embodiment. It should be noted that, the technical solution of the image processing apparatus and the technical solution of the image processing method belong to the same concept, and details of the technical solution of the image processing apparatus, which are not described in detail, can be referred to the description of the technical solution of the image processing method.
Fig. 7 illustrates a block diagram of a computing device 700 provided in accordance with one embodiment of the present description. The components of computing device 700 include, but are not limited to, memory 710 and processor 720. Processor 720 is coupled to memory 710 via bus 730, and database 750 is used to store data.
Computing device 700 also includes access device 740, which enables computing device 700 to communicate via one or more networks 760. Examples of such networks include the public switched telephone network (PSTN), a local area network (LAN), a wide area network (WAN), a personal area network (PAN), or a combination of communication networks such as the Internet. The access device 740 may include one or more of any type of wired or wireless network interface, such as a network interface card (NIC), e.g. an IEEE 802.11 wireless local area network (WLAN) interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a universal serial bus (USB) interface, a cellular network interface, a Bluetooth interface, or a near field communication (NFC) interface.
In one embodiment of the present description, the above-described components of computing device 700, as well as other components not shown in FIG. 7, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device illustrated in FIG. 7 is for exemplary purposes only and is not intended to limit the scope of the present description. Those skilled in the art may add or replace other components as desired.
Computing device 700 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smart phone), wearable computing device (e.g., smart watch, smart glasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or personal computer (PC, personal Computer). Computing device 700 may also be a mobile or stationary server.
Wherein the processor 720 is configured to execute computer-executable instructions that, when executed by the processor, perform the steps of the image processing method described above. The foregoing is a schematic illustration of a computing device of this embodiment. It should be noted that, the technical solution of the computing device and the technical solution of the image processing method belong to the same concept, and details of the technical solution of the computing device, which are not described in detail, can be referred to the description of the technical solution of the image processing method.
An embodiment of the present disclosure also provides a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the steps of the image processing method described above.
The above is an exemplary version of a computer-readable storage medium of the present embodiment. It should be noted that, the technical solution of the storage medium and the technical solution of the image processing method belong to the same concept, and details of the technical solution of the storage medium which are not described in detail can be referred to the description of the technical solution of the image processing method.
An embodiment of the present specification also provides a computer program, wherein the computer program, when executed in a computer, causes the computer to perform the steps of the image processing method described above.
The above is an exemplary version of a computer program of the present embodiment. It should be noted that, the technical solution of the computer program and the technical solution of the image processing method belong to the same conception, and details of the technical solution of the computer program, which are not described in detail, can be referred to the description of the technical solution of the image processing method.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The computer instructions include computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content contained in the computer readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunications signals.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of combinations of actions, but it should be understood by those skilled in the art that the embodiments are not limited by the order of actions described, as some steps may be performed in other order or simultaneously according to the embodiments of the present disclosure. Further, those skilled in the art will appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily all required for the embodiments described in the specification.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The preferred embodiments of the present specification disclosed above are merely used to help clarify the present specification. Alternative embodiments are not intended to be exhaustive or to limit the invention to the precise form disclosed. Obviously, many modifications and variations are possible in light of the teaching of the embodiments. The embodiments were chosen and described in order to best explain the principles of the embodiments and the practical application, to thereby enable others skilled in the art to best understand and utilize the invention. This specification is to be limited only by the claims and the full scope and equivalents thereof.

Claims (10)

1. An image processing method, comprising:
acquiring a target edge image of a cutter, and determining at least two target points based on the target edge image;
determining corresponding matching point positions for the at least two target point positions according to a preset matching rule;
determining a point group according to the at least two target points and the corresponding matching points;
and establishing a connected region based on the point group, and performing image removal in the connected region.
2. The method of claim 1, wherein the acquiring a target edge image of a tool and determining at least two target points based on the target edge image comprises:
collecting a target edge image of the cutter through a collecting device;
determining a preset position of the target edge image, and determining a window function corresponding to the preset position; wherein the preset position is any position in the target edge image;
acquiring an initial gray value at the preset position, and acquiring the target gray value of the window after the window is shifted according to the window function;
determining gray level variation according to the initial gray level value and the target gray level value;
and determining the score of the preset position according to the gray level variation, and determining the preset position as a target point position according to the score.
3. The method according to claim 1, wherein determining corresponding matching points for the at least two target points according to a preset matching rule comprises:
selecting a first target point from the at least two target points, and determining the Euclidean distance between each other point and the first target point; wherein the other points are any of the at least two target points other than the first target point;
and determining the matching point of the first target point from the other points according to the Euclidean distances between the other points and the first target point.
4. A method according to claim 3, wherein said determining a set of points from said at least two target points and said corresponding matching points comprises:
establishing an association relation between the first target point location and a matching point location of the first target point location;
and determining a point group according to the association relation.
5. The method of claim 1, wherein the establishing a connected region based on the set of points and performing image removal in the connected region comprises:
determining a concave-region area based on the point group, and connecting the target points in the point group;
and determining a connected region based on the concave-region area, and removing the image corresponding to the connected region from the target edge image.
6. The method as recited in claim 1, further comprising:
and determining a distance threshold, and screening the point location group according to the distance threshold to obtain a screened point location group.
7. The method according to claim 2, wherein determining the score of the preset position according to the gray level variation and determining the preset position as a target point according to the score comprises:
and determining a score threshold value, and determining the preset position as a target point position according to the score of the preset position and the score threshold value.
8. An image processing apparatus, comprising:
the point position determining module is configured to acquire a target edge image of the cutter and determine at least two target point positions based on the target edge image;
the point location matching module is configured to determine corresponding matching point locations for the at least two target point locations according to a preset matching rule;
the point location aggregation module is configured to determine a point group according to the at least two target points and the corresponding matching points;
and the image removal module is configured to establish a connected region based on the point group and perform image removal in the connected region.
9. A computing device, comprising:
a memory and a processor;
the memory is configured to store computer executable instructions, and the processor is configured to execute the computer executable instructions, which when executed by the processor, implement the steps of the image processing method of any one of claims 1 to 7.
10. A computer readable storage medium storing computer executable instructions which when executed by a processor implement the steps of the image processing method of any one of claims 1 to 7.
CN202310628134.7A 2023-05-30 2023-05-30 Image processing method, device, equipment and storage medium Pending CN116797553A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310628134.7A CN116797553A (en) 2023-05-30 2023-05-30 Image processing method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116797553A true CN116797553A (en) 2023-09-22

Family

ID=88035399

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310628134.7A Pending CN116797553A (en) 2023-05-30 2023-05-30 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116797553A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117392218A (en) * 2023-10-10 2024-01-12 钛玛科(北京)工业科技有限公司 Method and device for correcting deviation of curled material
CN117541766A (en) * 2023-10-20 2024-02-09 钛玛科(北京)工业科技有限公司 Lens spot inspection method and device


Similar Documents

Publication Publication Date Title
CN116797553A (en) Image processing method, device, equipment and storage medium
CN113822890A (en) Microcrack detection method, device and system and storage medium
CN110135514B (en) Workpiece classification method, device, equipment and medium
CN115272280A (en) Defect detection method, device, equipment and storage medium
CN115690102B (en) Defect detection method, defect detection apparatus, electronic device, storage medium, and program product
CN115131283A (en) Defect detection and model training method, device, equipment and medium for target object
WO2024002187A1 (en) Defect detection method, defect detection device, and storage medium
CN111209958A (en) Transformer substation equipment detection method and device based on deep learning
CN114417993A (en) Scratch detection method based on deep convolutional neural network and image segmentation
CN115471466A (en) Steel surface defect detection method and system based on artificial intelligence
CN115471476A (en) Method, device, equipment and medium for detecting component defects
CN117541766A (en) Lens spot inspection method and device
CN113780040A (en) Lip key point positioning method and device, storage medium and electronic equipment
CN112668365A (en) Material warehousing identification method, device, equipment and storage medium
CN115690101A (en) Defect detection method, defect detection apparatus, electronic device, storage medium, and program product
CN113837255B (en) Method, apparatus and medium for predicting cell-based antibody karyotype class
CN111259974B (en) Surface defect positioning and classifying method for small-sample flexible IC substrate
CN115511815A (en) Cervical fluid-based cell segmentation method and system based on watershed
CN111046878B (en) Data processing method and device, computer storage medium and computer
CN111582358B (en) Training method and device for house type recognition model, and house type weight judging method and device
CN115393847B (en) Method and device for identifying and analyzing function condition of stromal cells
CN117635606B (en) Method, device, equipment and storage medium for detecting chuck defects of laser pipe cutting machine
CN116843892B (en) AOI scene contour recognition method
Shekar Skeleton matching based approach for text localization in scene images
CN115205555B (en) Method for determining similar images, training method, information determining method and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination