CN113269767A - Batch part feature detection method, system, medium and equipment based on machine vision - Google Patents
- Publication number
- CN113269767A CN113269767A CN202110633892.9A CN202110633892A CN113269767A CN 113269767 A CN113269767 A CN 113269767A CN 202110633892 A CN202110633892 A CN 202110633892A CN 113269767 A CN113269767 A CN 113269767A
- Authority
- CN
- China
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
- G06T2207/20061—Hough transform
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
Abstract
The application provides a machine-vision-based method and system for detecting features (such as apertures) of batch parts, a computer-readable medium, and an electronic device. The method comprises the following steps: selecting, in an image of a sample part, a sample region containing the feature to be detected; aligning an image of the part to be detected with the image of the sample part based on an ORB algorithm; acquiring a detection region of the image of the part to be detected based on the sample region in the image of the sample part; performing edge detection on the detection region based on an edge detection algorithm to obtain an edge image of the detection region; and detecting the feature to be detected of the part to be detected from the edge image based on a Hough transform method. In this way, the computation load is greatly reduced and both detection speed and accuracy are improved; at the same time, the parts to be detected are not constrained by placement direction or angle, manual adjustment of part placement is avoided, detection efficiency is greatly improved, and fast feature detection of large batches of parts is realized.
Description
Technical Field
The application relates to the technical field of precision inspection, and in particular to a machine-vision-based method and system for detecting the aperture of batch parts, a computer-readable medium, and an electronic device.
Background
In recent years, as computer vision technology has matured, it has been widely applied in the field of industrial automation. In motor part inspection, traditional manual inspection suffers from low efficiency, low speed, low detection accuracy and high labour intensity. Automatic inspection devices address these problems by enabling high-speed feeding and reducing labour intensity. However, computer vision inspection is computationally demanding: in the dimensional measurement and positioning of large precision parts (such as motor stators and rotors), using ultra-high-resolution hardware increases the computation load and slows down detection, making it difficult to achieve both high detection accuracy and high detection speed. Moreover, the detection result is strongly affected by the placement position and orientation of each part, so parts of the same model may yield different results; for large batches, the position and orientation of every part must be adjusted manually, which inevitably leads to low detection efficiency and a heavy manual workload.
Therefore, there is a need to provide an improved solution to the above-mentioned deficiencies of the prior art.
Disclosure of Invention
An object of the present application is to provide a method, a system, a computer-readable medium and an electronic device for detecting an aperture of a batch of parts based on machine vision, so as to solve or alleviate the above problems in the prior art.
In order to achieve the above purpose, the present application provides the following technical solutions:
the application provides a machine-vision-based batch part feature detection method, comprising the following steps: step S101, selecting a sample region containing the feature to be detected in an image of a sample part, wherein the sample part is any one of the batch parts; step S102, aligning an image of a part to be detected with the image of the sample part based on an ORB algorithm, wherein the part to be detected is any one of the batch parts other than the sample part; step S103, acquiring a detection region of the image of the part to be detected based on the sample region in the image of the sample part; step S104, performing edge detection on the detection region based on an edge detection algorithm to obtain an edge image of the detection region; and step S105, detecting the feature to be detected of the part to be detected from the edge image based on a Hough transform method.
Preferably, in step S101, the acquired image of the sample part is subjected to noise preprocessing, and the sample region including the feature to be detected is selected in the noise-preprocessed image of the sample part.
Preferably, in step S102, based on an ORB rotation matching algorithm, the image of the to-be-measured part is aligned with the image of the sample part according to the alignment identifier in the image of the to-be-measured part and the alignment identifier in the image of the sample part.
Preferably, in step S103, the sample region is projected in the image of the part to be measured, and a detection region of the image of the part to be measured is acquired.
Preferably, in step S104, the edge detection is performed on the detection area based on an edge detection algorithm of deep learning, so as to obtain an edge image of the detection area.
Preferably, in step S105, detecting the feature to be detected in the edge image based on a hough transform method, mapping the detection result to the image of the part to be detected, and calculating the feature to be detected of the part to be detected.
Preferably, the image of the sample part and the image of the part to be measured are acquired by an area-array camera with a double telecentric lens.
The embodiment of the present application further provides a batch part feature detection system based on machine vision, including: a region dividing unit configured to select a sample region containing a feature to be detected in an image of a sample part; wherein the sample part is any one of the batch parts; the image alignment unit is configured to align the image of the part to be detected and the image of the sample part based on an ORB algorithm; the part to be detected is any one of the batch parts different from the sample part; a region projection unit configured to acquire a detection region of the image of the part to be measured based on the sample region in the image of the sample part; the edge detection unit is configured to perform edge detection on the detection area based on an edge detection algorithm to obtain an edge image of the detection area; and the feature detection unit is configured to detect the feature to be detected of the part to be detected according to the edge image based on a Hough transform method.
The embodiment of the present application further provides a computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the machine-vision-based batch part feature detection method according to any one of the above embodiments.
An embodiment of the present application further provides an electronic device, including: the system comprises a memory, a processor and a program stored in the memory and capable of running on the processor, wherein the processor executes the program to realize the batch part feature detection method based on machine vision according to any one of the embodiments.
Beneficial effects:
according to the technical scheme, a sample area containing the characteristics to be detected in the image of the sample part is selected, the image of the part to be detected and the image of the sample part are aligned based on an ORB algorithm, the sample area is projected in the image of the part to be detected, and the detection area of the image of the part to be detected is obtained; and further, detecting the to-be-detected features of the to-be-detected part through an edge detection algorithm and a Hough transform method. Therefore, the calculation amount is greatly reduced, the detection speed and the detection precision are improved, meanwhile, the part to be detected is not limited by the placing direction and the placing angle, manual adjustment of the placing position and the placing direction of the part to be detected is avoided, the detection efficiency is greatly improved, and large-batch rapid part feature detection is realized.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application. Wherein:
FIG. 1 is a schematic flow diagram of a method for machine vision based inspection of features of a batch part according to some embodiments of the present application;
FIG. 2 is a schematic block diagram of a machine vision based batch part feature detection system according to some embodiments of the present application;
FIG. 3 is a schematic structural diagram of an electronic device provided in accordance with some embodiments of the present application;
fig. 4 is a hardware block diagram of an electronic device provided in accordance with some embodiments of the present application.
Detailed Description
The present application will be described in detail below through embodiments with reference to the attached drawings. The various examples are provided by way of explanation of the application and do not limit it. In fact, it will be apparent to those skilled in the art that modifications and variations can be made in the present application without departing from its scope or spirit. For instance, features illustrated or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. It is therefore intended that the present application cover such modifications and variations as come within the scope of the appended claims and their equivalents.
In the description of the present application, terms such as "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top" and "bottom" indicate orientations or positional relationships based on those shown in the drawings. They are used only for convenience of description and do not require that the present application be constructed or operated in a particular orientation; they should therefore not be construed as limiting. The terms "connected" and "disposed" are to be broadly construed: a connection may be fixed or removable, direct or indirect through intermediate components, and may be a wired electrical connection, a wireless electrical connection, or a wireless communication signal connection. A person skilled in the art can understand the specific meaning of these terms according to the specific situation.
Exemplary method
FIG. 1 is a schematic flow diagram of a method for machine vision based inspection of features of a batch part according to some embodiments of the present application; as shown in fig. 1, the method for detecting characteristics of batch parts based on machine vision includes:
s101, selecting a sample area containing a feature to be detected in an image of a sample part; wherein the sample part is any one of the batch parts;
specifically, an image of the sample part is collected by an area-array camera with a double telecentric lens; the acquired image is then noise-preprocessed, and a sample region containing the feature to be detected is selected in the noise-preprocessed image of the sample part.
In the embodiment of the application, an industrial area-array camera with a double telecentric lens is used, and the acquired part image contains many interference factors such as noise. The image is therefore preprocessed with methods such as graying, Gaussian filtering and median filtering to remove noise, and a region containing the feature to be detected is selected as the template image.
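The preprocessing described above can be sketched in plain NumPy. The helpers `to_gray` and `median_filter3` below are illustrative, not the patent's implementation; a production pipeline would typically use library routines (e.g., OpenCV's grayscale conversion and median blur) instead.

```python
import numpy as np

def to_gray(img_rgb):
    """Luminance grayscale conversion using ITU-R BT.601 weights."""
    return img_rgb @ np.array([0.299, 0.587, 0.114])

def median_filter3(img):
    """3x3 median filter built from shifted views; border pixels keep
    their original values."""
    out = img.astype(float).copy()
    stack = np.stack([img[1 + dy:img.shape[0] - 1 + dy,
                          1 + dx:img.shape[1] - 1 + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)])
    out[1:-1, 1:-1] = np.median(stack, axis=0)
    return out

# A single salt-noise pixel on a flat gray patch is removed by the filter.
img = np.full((8, 8), 100.0)
img[4, 4] = 255.0              # isolated noise pixel
den = median_filter3(img)      # den[4, 4] is back to 100.0
```

Gaussian filtering follows the same sliding-window pattern with a weighted kernel instead of the median.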
Step S102, aligning the image of the part to be detected with the image of the sample part based on an ORB (Oriented FAST and Rotated BRIEF) algorithm; the part to be detected is any one of the batch parts other than the sample part;
specifically, based on an ORB rotation matching algorithm, the image of the part to be detected and the image of the sample part are aligned according to the alignment mark in the image of the part to be detected and the alignment mark in the image of the sample part.
In the embodiment of the application, images of the sample part and of the part to be detected are collected by the area-array camera with the telecentric lens, and alignment marks are identified in each image respectively. By aligning the corresponding marks, the direction and angle of the image of the part to be detected are made consistent with those of the image of the sample part, realizing alignment of the two images. As a result, the parts to be detected are not constrained by placement direction or angle, manual adjustment of their placement position and direction is avoided, and detection efficiency is improved.
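Full ORB keypoint matching is beyond a short sketch, but the final step, recovering the in-plane rotation from a pair of matched alignment marks, can be illustrated as follows. `rotation_from_marks` is a hypothetical helper and assumes the mark correspondences have already been found by the feature matcher.

```python
import numpy as np

def rotation_from_marks(sample_marks, test_marks):
    """Estimate the rotation matrix that maps the part-under-test mark
    direction onto the sample-part mark direction. Each argument is a
    pair of matched 2-D points."""
    v_s = np.subtract(sample_marks[1], sample_marks[0])
    v_t = np.subtract(test_marks[1], test_marks[0])
    ang = np.arctan2(v_s[1], v_s[0]) - np.arctan2(v_t[1], v_t[0])
    c, s = np.cos(ang), np.sin(ang)
    return np.array([[c, -s], [s, c]])

# Part under test placed at 90 degrees relative to the sample part.
sample = [(0.0, 0.0), (1.0, 0.0)]
test = [(0.0, 0.0), (0.0, 1.0)]
R = rotation_from_marks(sample, test)
aligned = R @ np.array([0.0, 1.0])   # mark vector maps back onto (1, 0)
```

A real implementation would estimate a full similarity or homography transform (e.g., via RANSAC over many ORB matches) rather than a pure rotation from two points.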
Step S103, acquiring a detection area of the image of the part to be detected based on the sample area in the image of the sample part; specifically, the sample region is projected in the image of the part to be detected, so as to obtain a detection region of the image of the part to be detected;
in the embodiment of the application, since the sample region is a selected region containing the feature to be detected, and the image of the part to be detected is aligned with the image of the sample part, the defined corresponding detection region in the image of the part to be detected also contains the feature to be detected by projecting the sample region in the image of the part to be detected.
Step S104, performing edge detection on the detection area based on an edge detection algorithm to obtain an edge image of the detection area; specifically, edge detection is performed on the detection area using a deep-learning edge detection algorithm (such as BDCN, the Bi-Directional Cascade Network) to obtain the edge image of the detection area;
in the embodiment of the present application, in the process of performing Edge Detection on the Detection area, other Edge Detection algorithms based on machine learning, Edge Detection algorithms based on structure projects, Edge Detection algorithms based on a convolutional neural network (HED) and the like may also be used.
By performing edge detection only on the detection area of the image of the part to be detected, the edge image of the detection area is obtained without processing the whole image, which effectively reduces the computation requirement and improves detection efficiency.
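A deep-learning detector such as BDCN cannot be reproduced in a few lines, so the sketch below substitutes a classical Sobel gradient-magnitude detector to illustrate turning the detection region into a binary edge image. `sobel_edges` and its threshold are illustrative choices, not the patent's method.

```python
import numpy as np

def sobel_edges(img, thresh=1.0):
    """Binary edge map from Sobel gradient magnitude (borders wrap)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T                      # vertical-gradient Sobel kernel
    gx = np.zeros(img.shape, float)
    gy = np.zeros(img.shape, float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            # shifted[r, c] == img[r + dy, c + dx] (cross-correlation)
            shifted = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
            gx += kx[dy + 1, dx + 1] * shifted
            gy += ky[dy + 1, dx + 1] * shifted
    mag = np.hypot(gx, gy)
    return (mag > thresh).astype(np.uint8)

# A vertical intensity step produces edge pixels along the step boundary.
img = np.zeros((6, 6))
img[:, 3:] = 1.0
edges = sobel_edges(img)
```

The output format (a binary edge map over the detection region only) is what the subsequent Hough transform step consumes, whichever detector produces it.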
S105, detecting the to-be-detected features of the to-be-detected part according to the edge image based on a Hough transform method; specifically, based on a Hough transform method, the features to be detected in the edge image are detected, the detection result is mapped to the image of the part to be detected, and the features to be detected of the part to be detected are calculated.
In the embodiment of the application, a Hough transform method is used to obtain the feature information of the feature to be detected in the edge image (for example, the centre position and radius of a hole to be detected); the feature information is then mapped into the image of the part to be detected, and the feature of the part is computed (for example, solving for the position and diameter of the hole, thereby realizing aperture detection and positioning for the part). In this way, fast feature detection of large batches of parts (such as part aperture detection and positioning) is realized.
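The circle-detection step can be illustrated with a minimal Hough voting scheme for a known radius. `hough_circle` is a simplified sketch; practical detectors (e.g., OpenCV's `HoughCircles`) also search over a radius range and suppress duplicate accumulator peaks.

```python
import numpy as np

def hough_circle(edge_img, radius, n_angles=64):
    """Vote for circle centres at a fixed radius: every edge pixel votes
    for all centres that would place it on a circle of that radius. The
    accumulator peak is the most likely centre, returned as (cy, cx)."""
    h, w = edge_img.shape
    acc = np.zeros((h, w), int)
    ys, xs = np.nonzero(edge_img)
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for y, x in zip(ys, xs):
        cy = np.rint(y - radius * np.sin(thetas)).astype(int)
        cx = np.rint(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(np.argmax(acc), acc.shape)

# Synthetic edge image: a circle of radius 5 centred at (10, 12).
edges = np.zeros((24, 24), np.uint8)
t = np.linspace(0, 2 * np.pi, 200)
edges[np.rint(10 + 5 * np.sin(t)).astype(int),
      np.rint(12 + 5 * np.cos(t)).astype(int)] = 1
center = hough_circle(edges, 5)       # approximately (10, 12)
```

Mapping the recovered centre and radius back to the full part image is then just adding the detection region's offset and applying the camera's pixel-to-millimetre calibration.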
In the embodiment of the application, a sample region containing the feature to be detected is selected in the image of the sample part; the image of the part to be detected is aligned with the image of the sample part based on an ORB algorithm; and the sample region is projected into the image of the part to be detected to obtain its detection region. The feature to be detected is then extracted through an edge detection algorithm and a Hough transform method. The computation load is greatly reduced and both detection speed and accuracy are improved; at the same time, the parts to be detected are not constrained by placement direction or angle, manual adjustment of part placement is avoided, detection efficiency is greatly improved, and fast feature detection of large batches of parts is realized.
Exemplary System
FIG. 2 is a schematic block diagram of a machine vision based batch part feature detection system according to some embodiments of the present application; as shown in fig. 2, the machine-vision-based batch part feature detection system includes: a region dividing unit 201 configured to select a sample region containing the feature to be detected in an image of a sample part, wherein the sample part is any one of the batch parts; an image alignment unit 202 configured to align an image of a part to be detected with the image of the sample part based on an ORB algorithm, wherein the part to be detected is any one of the batch parts other than the sample part; a region projection unit 203 configured to acquire a detection region of the image of the part to be detected based on the sample region in the image of the sample part; an edge detection unit 204 configured to perform edge detection on the detection region based on an edge detection algorithm to obtain an edge image of the detection region; and a feature detection unit 205 configured to detect the feature to be detected of the part to be detected from the edge image based on a Hough transform method.
The batch part feature detection system based on machine vision provided by the embodiment of the application can realize the steps and the flows of the batch part feature detection method based on machine vision described in any embodiment above, and achieve the same technical effects, which are not described in detail herein.
Exemplary device
FIG. 3 is a schematic structural diagram of an electronic device provided in accordance with some embodiments of the present application; as shown in fig. 3, the electronic apparatus includes:
one or more processors 301;
a computer-readable medium 302, configured to store one or more programs that, when executed by the one or more processors, cause the following steps to be performed: selecting a sample region containing the feature to be detected in an image of a sample part, wherein the sample part is any one of the batch parts; aligning an image of a part to be detected with the image of the sample part based on an ORB algorithm, wherein the part to be detected is any one of the batch parts other than the sample part; acquiring a detection region of the image of the part to be detected based on the sample region in the image of the sample part; performing edge detection on the detection region based on an edge detection algorithm to obtain an edge image of the detection region; and detecting the feature to be detected of the part to be detected from the edge image based on a Hough transform method.
FIG. 4 is a hardware block diagram of an electronic device provided in accordance with some embodiments of the present application; as shown in fig. 4, the hardware structure of the electronic device may include: a processor 401, a communication interface 402, a computer-readable medium 403, and a communication bus 404;
the processor 401, the communication interface 402, and the computer-readable medium 403 are configured to communicate with each other via a communication bus 404;
alternatively, the communication interface 402 may be an interface of a communication module, such as an interface of a GSM module;
the processor 401 may be specifically configured to: selecting a sample region containing a feature to be detected in an image of a sample part; wherein the sample part is any one of the batch parts; aligning the image of the part to be detected with the image of the sample part based on an ORB algorithm; the part to be detected is any one of the batch parts different from the sample part; acquiring a detection area of the image of the part to be detected based on the sample area in the image of the sample part; based on an edge detection algorithm, carrying out edge detection on the detection area to obtain an edge image of the detection area; and detecting the to-be-detected features of the to-be-detected part according to the edge image based on a Hough transform method.
The Processor 401 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed by it. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The electronic device of the embodiments of the present application exists in various forms, including but not limited to:
(1) Mobile communication devices: such devices are characterized by mobile communication capability and are primarily aimed at providing voice and data communication. Such terminals include smart phones (e.g., iPhone), multimedia phones, feature phones, low-end phones, etc.
(2) Ultra-mobile personal computer devices: such devices belong to the category of personal computers, have computing and processing functions, and generally support mobile internet access. Such terminals include PDA, MID and UMPC devices, e.g., iPad.
(3) Portable entertainment devices: such devices can display and play multimedia content. This type of device includes audio and video players (e.g., iPod), handheld game consoles, e-book readers, smart toys and portable car navigation devices.
(4) A server: the device for providing the computing service comprises a processor, a hard disk, a memory, a system bus and the like, and the server is similar to a general computer architecture, but has higher requirements on processing capacity, stability, reliability, safety, expandability, manageability and the like because of the need of providing high-reliability service.
(5) And other electronic devices with data interaction functions.
It should be noted that, according to the implementation requirement, each component/step described in the embodiment of the present application may be divided into more components/steps, or two or more components/steps or partial operations of the components/steps may be combined into a new component/step to achieve the purpose of the embodiment of the present application.
The above-described methods according to embodiments of the present application may be implemented in hardware or firmware, or as software or computer code that can be stored in a recording medium such as a CD-ROM, RAM, floppy disk, hard disk or magneto-optical disk, or that is originally stored in a remote recording medium or non-transitory machine-readable medium and downloaded over a network for storage in a local recording medium. The methods described herein can thus be carried out by software stored on a recording medium and executed by a general-purpose computer, a dedicated processor, or programmable or dedicated hardware such as an ASIC or FPGA. It will be appreciated that the computer, processor, microprocessor, controller or programmable hardware includes memory components (e.g., RAM, ROM, flash memory, etc.) that can store or receive software or computer code that, when accessed and executed, implements the machine-vision-based batch part feature detection method described herein. Further, when a general-purpose computer accesses code for implementing the methods illustrated herein, execution of the code transforms the general-purpose computer into a special-purpose computer for performing those methods.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application of the solution and the constraints involved. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
It should be noted that the embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the apparatus and system embodiments are described relatively simply because they are substantially similar to the method embodiments, and reference may be made to the corresponding descriptions of the method embodiments. The apparatus and system embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement this without inventive effort.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (10)
1. A batch part feature detection method based on machine vision is characterized by comprising the following steps:
s101, selecting a sample area containing a feature to be detected in an image of a sample part; wherein the sample part is any one of the batch parts;
s102, aligning the image of the part to be detected with the image of the sample part based on an ORB algorithm; the part to be detected is any one of the batch parts different from the sample part;
step S103, acquiring a detection area of the image of the part to be detected based on the sample area in the image of the sample part;
step S104, performing edge detection on the detection area based on an edge detection algorithm to obtain an edge image of the detection area;
and S105, detecting the to-be-detected features of the to-be-detected part according to the edge image based on a Hough transform method.
2. The machine-vision-based method for inspecting characteristics of batch parts according to claim 1, wherein in step S101,
and carrying out noise pretreatment on the acquired image of the sample part, and selecting the sample region containing the feature to be detected in the noise-pretreated image of the sample part.
3. The machine-vision-based method for inspecting characteristics of batch parts according to claim 1, wherein in step S102,
and aligning the image of the part to be detected with the image of the sample part according to the alignment mark in the image of the part to be detected and the alignment mark in the image of the sample part based on an ORB rotation matching algorithm.
4. The machine-vision-based method for inspecting characteristics of batch parts according to claim 1, wherein in step S103,
and projecting the sample region in the image of the part to be detected to obtain a detection region of the image of the part to be detected.
5. The machine-vision-based batch part feature detection method according to claim 1, wherein in step S104, edge detection is performed on the detection region based on a deep-learning edge detection algorithm to obtain the edge image of the detection region.
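The claim specifies a deep-learning edge detector, which cannot be reproduced in a few lines; as a self-contained classical stand-in, a Sobel gradient detector produces the same kind of binary edge image that step S105 consumes. This is an illustrative substitute, not the claimed algorithm.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def correlate2d(img, kernel):
    """Naive 'valid'-mode 2-D correlation, sufficient for small kernels."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_edges(img, thresh=1.0):
    """Binary edge image from thresholded Sobel gradient magnitude."""
    gx = correlate2d(img, SOBEL_X)
    gy = correlate2d(img, SOBEL_Y)
    return (np.hypot(gx, gy) > thresh).astype(np.uint8)

# A vertical step edge at column 8 of a 16x16 test image
img = np.zeros((16, 16))
img[:, 8:] = 1.0
edges = sobel_edges(img)
cols = sorted({int(c) for c in np.argwhere(edges)[:, 1]})
print(cols)
```

A learned detector (e.g. an HED-style network) would replace `sobel_edges` but still output an edge map of the detection region.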
6. The machine-vision-based batch part feature detection method according to claim 1, wherein in step S105, the feature to be detected is detected in the edge image based on a Hough transform method, the detection result is mapped back to the image of the part to be detected, and the feature to be detected of the part to be detected is computed.
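For a circular feature such as a hole, the Hough step of claim 6 can be sketched as an accumulator over candidate centers and radii. This is a hypothetical NumPy illustration; production code would more typically call `cv2.HoughCircles`.

```python
import numpy as np

def hough_circle(edge_pts, shape, radii):
    """Circular Hough transform: each edge pixel votes for all candidate
    circle centers at each radius; the accumulator peak is the detection."""
    acc = np.zeros((len(radii), *shape), dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    for (y, x) in edge_pts:
        for k, r in enumerate(radii):
            cy = np.round(y - r * np.sin(thetas)).astype(int)
            cx = np.round(x - r * np.cos(thetas)).astype(int)
            ok = (0 <= cy) & (cy < shape[0]) & (0 <= cx) & (cx < shape[1])
            np.add.at(acc[k], (cy[ok], cx[ok]), 1)
    k, cy, cx = np.unravel_index(acc.argmax(), acc.shape)
    return radii[k], (cy, cx)

# Synthetic edge image: 60 points on a circle of radius 12 centered at (32, 32)
ts = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = np.round(np.c_[32 + 12 * np.sin(ts), 32 + 12 * np.cos(ts)]).astype(int)
radius, center = hough_circle(pts, shape=(64, 64), radii=[10, 11, 12, 13, 14])
print(f"detected radius ~ {radius}, center ~ {center}")
```

Mapping the detected `(center, radius)` back through the alignment transform of step S102 then yields the feature's position and size on the part image.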
7. The machine-vision-based batch part feature detection method according to any one of claims 1 to 6, wherein the image of the sample part and the image of the part to be detected are acquired by an area-array camera equipped with a bi-telecentric lens.
8. A machine-vision-based batch part feature detection system, comprising:
a region division unit configured to select a sample region containing a feature to be detected in an image of a sample part, wherein the sample part is any one of the batch of parts;
an image alignment unit configured to align an image of a part to be detected with the image of the sample part based on an ORB algorithm, wherein the part to be detected is any one of the batch of parts other than the sample part;
a region projection unit configured to obtain a detection region of the image of the part to be detected based on the sample region in the image of the sample part;
an edge detection unit configured to perform edge detection on the detection region based on an edge detection algorithm to obtain an edge image of the detection region;
a feature detection unit configured to detect the feature to be detected of the part to be detected from the edge image based on a Hough transform method.
9. A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the machine-vision-based batch part feature detection method according to any one of claims 1 to 7.
10. An electronic device, comprising: a memory, a processor, and a program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the machine-vision-based batch part feature detection method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110633892.9A CN113269767B (en) | 2021-06-07 | 2021-06-07 | Batch part feature detection method, system, medium and equipment based on machine vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113269767A true CN113269767A (en) | 2021-08-17 |
CN113269767B CN113269767B (en) | 2023-07-18 |
Family
ID=77234512
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110633892.9A Active CN113269767B (en) | 2021-06-07 | 2021-06-07 | Batch part feature detection method, system, medium and equipment based on machine vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113269767B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103292701A (en) * | 2013-06-24 | 2013-09-11 | 哈尔滨工业大学 | Machine-vision-based online dimensional measurement method of precise instrument |
CN109741314A (en) * | 2018-12-29 | 2019-05-10 | 广州博通信息技术有限公司 | A kind of visible detection method and system of part |
CN109815822A (en) * | 2018-12-27 | 2019-05-28 | 北京航天福道高技术股份有限公司 | Inspection figure components target identification method based on Generalized Hough Transform |
CN111009036A (en) * | 2019-12-10 | 2020-04-14 | 北京歌尔泰克科技有限公司 | Grid map correction method and device in synchronous positioning and map construction |
CN112258455A (en) * | 2020-09-28 | 2021-01-22 | 上海工程技术大学 | Detection method for detecting spatial position of part based on monocular vision |
CN112561850A (en) * | 2019-09-26 | 2021-03-26 | 上海汽车集团股份有限公司 | Automobile gluing detection method and device and storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104809732B (en) * | 2015-05-07 | 2017-06-20 | 山东鲁能智能技术有限公司 | A kind of power equipment appearance method for detecting abnormality compared based on image |
CN108876842A (en) * | 2018-04-20 | 2018-11-23 | 苏州大学 | A kind of measurement method, system, equipment and the storage medium of sub-pixel edge angle |
CN112634365B (en) * | 2020-12-23 | 2022-09-23 | 大连理工大学 | High-precision pose tracking and detecting method for microstructure characteristics |
2021-06-07: CN application CN202110633892.9A filed; granted as patent CN113269767B (status: Active)
Non-Patent Citations (2)
Title |
---|
ESTEVE CERVANTES ET AL.: "Hierarchical part detection with deep neural networks", IEEE *
LIAO Qiang: "Research on auto-focusing, tracking and stitching technology for machine-vision-based measurement of part geometric dimensions", Wanfang Data Knowledge Service Platform *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109977935B (en) | Text recognition method and device | |
JP6868119B2 (en) | Holographic anti-counterfeit code inspection method and equipment | |
WO2016101643A1 (en) | Meter data read method and system | |
CN110962844B (en) | Vehicle course angle correction method and system, storage medium and terminal | |
US20150112470A1 (en) | Computing device and method for image measurement | |
CN111325798B (en) | Camera model correction method, device, AR implementation equipment and readable storage medium | |
CN113030121B (en) | Automatic optical detection method, system and equipment for circuit board components | |
CN108090486B (en) | Image processing method and device in billiard game | |
CN107808117A (en) | A kind of shared Vehicle positioning system and its localization method based on cloud computing | |
CN111862057A (en) | Picture labeling method and device, sensor quality detection method and electronic equipment | |
CN110909804B (en) | Method, device, server and storage medium for detecting abnormal data of base station | |
WO2017112131A1 (en) | Determining values of angular gauges | |
CN112561850A (en) | Automobile gluing detection method and device and storage medium | |
CN111046831B (en) | Poultry identification method, device and server | |
CN113269767A (en) | Batch part feature detection method, system, medium and equipment based on machine vision | |
CN112862882A (en) | Target distance measuring method, device, electronic apparatus and storage medium | |
CN110211459B (en) | Examination item rechecking method and device, processing terminal and storage medium | |
US20150051724A1 (en) | Computing device and simulation method for generating a double contour of an object | |
CN106910196B (en) | Image detection method and device | |
CN115546219B (en) | Detection plate type generation method, plate card defect detection method, device and product | |
CN112102415A (en) | Depth camera external parameter calibration method, device and equipment based on calibration ball | |
CN114975212A (en) | Wafer center positioning method, device, equipment and medium | |
CN115082552A (en) | Marking hole positioning method and device, assembly equipment and storage medium | |
CN108985160B (en) | Method and device for determining reading of pointer instrument | |
CN111127419B (en) | Wheel set standard circle polygon detection method and device and terminal equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |