CN113996500A - Intelligent dispensing identification system based on visual dispensing robot - Google Patents

Intelligent dispensing identification system based on visual dispensing robot

Info

Publication number
CN113996500A
CN113996500A (application CN202111137295.3A)
Authority
CN
China
Prior art keywords
image
dispensing
visual
system based
identification system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111137295.3A
Other languages
Chinese (zh)
Inventor
郑朝翌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Prosys Suzhou Software Technology Co ltd
Original Assignee
Prosys Suzhou Software Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Prosys Suzhou Software Technology Co ltd filed Critical Prosys Suzhou Software Technology Co ltd
Priority to CN202111137295.3A priority Critical patent/CN113996500A/en
Publication of CN113996500A publication Critical patent/CN113996500A/en
Pending legal-status Critical Current

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B05 SPRAYING OR ATOMISING IN GENERAL; APPLYING FLUENT MATERIALS TO SURFACES, IN GENERAL
    • B05C APPARATUS FOR APPLYING FLUENT MATERIALS TO SURFACES, IN GENERAL
    • B05C11/00 Component parts, details or accessories not specifically provided for in groups B05C1/00 - B05C9/00
    • B05C11/10 Storage, supply or control of liquid or other fluent material; Recovery of excess liquid or other fluent material
    • B05C11/1002 Means for controlling supply, i.e. flow or pressure, of liquid or other fluent material to the applying apparatus, e.g. valves
    • B05C11/1015 Means for controlling supply, i.e. flow or pressure, of liquid or other fluent material to the applying apparatus, e.g. valves, responsive to a condition of ambient medium or target, e.g. humidity, temperature; responsive to position or movement of the coating head relative to the target
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B05 SPRAYING OR ATOMISING IN GENERAL; APPLYING FLUENT MATERIALS TO SURFACES, IN GENERAL
    • B05C APPARATUS FOR APPLYING FLUENT MATERIALS TO SURFACES, IN GENERAL
    • B05C5/00 Apparatus in which liquid or other fluent material is projected, poured or allowed to flow on to the surface of the work
    • B05C5/02 Apparatus in which liquid or other fluent material is projected, poured or allowed to flow on to the surface of the work, the liquid or other fluent material being discharged through an outlet orifice by pressure, e.g. from an outlet device in contact, or almost in contact, with the work
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 Manipulators not otherwise provided for
    • B25J11/0075 Manipulators for painting or coating

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an intelligent dispensing identification system based on a visual dispensing robot, comprising an image acquisition module, an image processing module, a measurement system calibration module and a dispensing module. The image acquisition module comprises a light source and a camera; the light source is ambient light fitted with a protective screen, and the camera serves as the visual sensor. The processing procedure of the image processing module comprises the following steps: step 1, image enhancement processing, which adjusts the contrast of the image, highlights its important details and improves visual quality. The invention improves the image processing effect to a certain extent: it makes the image clearer, reduces the noise in the image, and reduces the distortion introduced by the imaging equipment and the environment, which helps extract useful information and restore the original image, thereby improving the efficiency and quality of the subsequent dispensing process.

Description

Intelligent dispensing identification system based on visual dispensing robot
Technical Field
The invention belongs to the technical field of visual dispensing robots, and particularly relates to an intelligent dispensing identification system based on a visual dispensing robot.
Background
With the development of large-scale integration technology, computer memories have kept shrinking in size, falling in price and rising in speed, and vision systems have come into practical use. Robot vision means taking visual information as input, processing it, and extracting useful information for the robot: a three-dimensional object in the real world is projected by a sensor (such as a camera) into a two-dimensional plane image, which is then processed to produce an image of the object. Generally, a robot needs two kinds of information to judge the position and shape of an object: distance information and brightness information. Color is also available as visual information, but it matters less for recognizing position and shape than the first two. A robot vision system depends heavily on lighting and usually needs good illumination conditions so that the object forms the clearest possible image; this strengthens the detected information and overcomes problems such as shadows, low contrast and specular reflection. The vision system identifies the workpiece, determines its position and orientation, and provides visual feedback for adaptive control of the robot's motion trajectory.
In the prior art, however, when the intelligent dispensing recognition system of a visual dispensing robot processes the acquired image, the processing effect is poor: the image is unclear and noisy, distortion arises easily, and useful information is hard to extract, which in turn affects the subsequent dispensing process. An intelligent dispensing recognition system based on a visual dispensing robot is therefore needed.
Disclosure of Invention
The invention aims to solve the defects in the prior art, improve the image processing effect to a certain extent, make the image clearer, reduce the noise in the image, reduce the image distortion phenomenon caused by imaging equipment and environment, facilitate the extraction of useful information and the recovery of the original image, and further improve the working efficiency and effect of the subsequent dispensing process.
In order to achieve the purpose, the invention provides the following technical scheme: an intelligent dispensing identification system based on a visual dispensing robot comprises an image acquisition module, an image processing module, a measurement system calibration module and a dispensing module, wherein the image acquisition module comprises a light source and a camera, the light source is set as ambient light, a protective screen is arranged on the light source, and the camera is set as a visual sensor;
the processing procedure of the image processing module comprises the following steps:
step 1, image enhancement processing is used for adjusting the contrast of an image, highlighting important details in the image and improving visual quality;
step 2, encoding and transmitting image data, wherein in the data transmission process, data needs to be compressed, and the data compression is mainly completed through encoding and transformation compression of the image data;
step 3, smoothing the image, which is used for removing noise in the image;
step 4, sharpening the edge of the image, and strengthening the contour edge and the details in the image to form a complete object boundary;
step 5, image segmentation processing, namely dividing the image into a plurality of parts, each corresponding to an object surface, such that the gray level or texture within each part conforms to a certain uniformity measure;
step 6, extracting the characteristics of the image, and identifying and quantifying the key characteristics of the image;
and 7, image recognition, namely distinguishing each segmented object in the scenery by using a recognition algorithm and endowing the objects with specific marks.
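As a rough illustration of how several of these steps chain together, here is a minimal pure-NumPy sketch of steps 1, 3 and 5 on a synthetic gray image; the operators, array size and threshold are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def enhance(img):
    """Step 1 (illustrative): stretch contrast to the full 0..255 range."""
    lo, hi = img.min(), img.max()
    return ((img - lo) * 255.0 / max(hi - lo, 1)).astype(np.uint8)

def smooth(img):
    """Step 3 (illustrative): 3x3 mean filter to suppress noise."""
    p = np.pad(img.astype(np.float32), 1, mode="edge")
    out = sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
              for dy in range(3) for dx in range(3)) / 9.0
    return out.astype(np.uint8)

def segment(img, thresh=128):
    """Step 5 (illustrative): split into object/background by gray level."""
    return (img >= thresh).astype(np.uint8)

img = np.tile(np.arange(64, dtype=np.uint8), (64, 1)) * 2  # synthetic gradient
mask = segment(smooth(enhance(img)))
print(mask.sum())  # number of "object" pixels
```

Steps 2, 4, 6 and 7 would slot into the same chain between these operators.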
Preferably, in the process of calibrating the measurement system, the measurement system calibration module first determines the pixel resolution of the camera, i.e. the actual size represented by one pixel, then determines the positional relationship between the glue head and the camera origin by a dotting method, and finally obtains the physical coordinates of the capture point in the motion platform coordinate system from the transformation between the motion platform coordinate system and the pixel image coordinate system.
Preferably, the dispensing module performs image matching and recognition after the computer has processed the visual information, and controls the manipulator to carry out the dispensing operation along the most appropriate dispensing trajectory.
Preferably, in step 1, the image enhancement processing uses the gray histogram modification technique: the pixel gray levels of an image with a known gray-level probability distribution are put through a mapping transformation that turns it into a new image with a uniform gray-level probability distribution, thereby making the image clear.
Preferably, in step 2, the image data encoding and transmission processing uses predictive coding: the spatial and sequential variation rules of the image data are expressed by a prediction formula, and if the values of the pixels preceding a given pixel are known, that pixel's value can be predicted from the formula.
Preferably, in step 3, the image smoothing process may remove image distortion caused by an imaging device and an environment during an actual imaging process, extract useful information, and restore an original image.
Preferably, in step 4, the object is separated from the image or a region representing the surface of the same object is detected during the image edge sharpening process.
Preferably, in step 5, the classification in the image segmentation process is based on a gray scale value, a color, a spectral characteristic, a spatial characteristic, or a texture characteristic of the pixel.
Preferably, in the step 6, the feature extraction of the image is to perform edge thinning by contour tracing after the edge of the object image is extracted, remove pseudo edge points and noise points, encode the edge points forming a closed curve, and extract the feature of the centroid coordinate, the area, the curvature, the edge, the corner point and the short axis direction of the object.
Preferably, in the step 7, the image recognition is to correctly segment the object to be recognized from the background of the image, and try to match the attribute map of the object in the created image with the attribute map of the assumed model library.
The invention has the technical effects and advantages that:
the invention firstly carries out image enhancement processing for adjusting the contrast of the image, highlighting important details in the image, improving visual quality and realizing the aim of making the image clear, can remove image distortion caused by imaging equipment and environment in the actual imaging process by image smoothing processing, extracts useful information and restores the original image, improves the processing effect on the image to a certain extent, ensures that the image is clearer, reduces noise in the image, reduces the image distortion phenomenon caused by the imaging equipment and the environment, is beneficial to extracting the useful information and restoring the original image, and further improves the working efficiency and the effect of the subsequent dispensing process.
Drawings
FIG. 1 is a block diagram of the system of the present invention;
fig. 2 is a block diagram of the image processing of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-2, the present invention provides an intelligent dispensing identification system based on a visual dispensing robot, comprising an image acquisition module, an image processing module, a measurement system calibration module and a dispensing module. The image acquisition module comprises a light source and a camera. Illumination is an important factor affecting the input of a machine vision system and directly determines the quality of the input data; it accounts for at least 30% of the application effect. The light source is ambient light fitted with a protective screen: ambient light changes the total light energy falling on the object, which introduces noise into the output image data, and the screen reduces this influence on the object. The camera serves as the visual sensor; the CCD is currently the most commonly used machine vision sensor. The camera converts the image received through the lens into an electrical signal that the computer can process, and it may be either a tube-type or a solid-state sensing unit.
The processing procedure of the image processing module comprises the following steps:
step 1, image enhancement processing is used for adjusting the contrast of an image, highlighting important details in the image and improving visual quality;
step 2, encoding and transmitting image data, wherein in the data transmission process, data needs to be compressed, and the data compression is mainly completed through encoding and transformation compression of the image data;
step 3, smoothing the image, which is used for removing noise in the image;
step 4, sharpening the edge of the image, and strengthening the contour edge and the details in the image to form a complete object boundary;
step 5, image segmentation processing, namely dividing the image into a plurality of parts, each corresponding to an object surface, such that the gray level or texture within each part conforms to a certain uniformity measure;
step 6, extracting the characteristics of the image, and identifying and quantifying the key characteristics of the image;
and 7, image recognition, namely distinguishing each segmented object in the scene with a recognition algorithm and giving each a specific mark. After this processing, the quality of the output image is considerably improved, which improves the visual effect of the image and makes it easier for the computer to analyse, process and recognise.
Specifically, the initial dispensing coordinates extracted by image processing and analysis while the program runs are pixel coordinates in the image coordinate system and cannot directly serve as physical coordinates for driving the motion platform, so the visual coordinate system must be calibrated. During calibration, the measurement system calibration module first determines the pixel resolution of the camera, i.e. the actual size represented by one pixel, then determines the positional relationship between the glue head and the camera origin by a dotting method, and finally obtains the physical coordinates of the capture point in the motion platform coordinate system from the transformation between the motion platform coordinate system and the pixel image coordinate system.
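A minimal sketch of the resulting pixel-to-physical mapping, assuming a pure scale-and-offset relationship between the pixel image coordinate system and the motion platform coordinate system; the calibration constants below are hypothetical stand-ins for values the dotting procedure would actually produce:

```python
import numpy as np

# Hypothetical calibration constants (the dotting procedure would measure these):
MM_PER_PIXEL = 0.05                          # real size of one pixel, in mm
NEEDLE_OFFSET = np.array([12.0, -8.5])       # glue head relative to camera origin, mm
PLATFORM_ORIGIN = np.array([100.0, 200.0])   # camera origin in platform coordinates, mm

def pixel_to_platform(px, py):
    """Map a capture point's pixel coordinate to a physical platform
    coordinate, then compensate for the glue-head offset."""
    cam_mm = np.array([px, py], dtype=np.float64) * MM_PER_PIXEL  # pixels -> mm
    return PLATFORM_ORIGIN + cam_mm + NEEDLE_OFFSET

print(pixel_to_platform(400, 300))  # [132.  206.5]
```

A real calibration would also account for rotation between the two frames; a full 2-D affine transform fitted from several dotted points would replace the scale-and-offset model.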
Specifically, the dispensing module performs image matching and recognition after the computer has processed the visual information, and controls the manipulator to carry out the dispensing operation along the most appropriate dispensing trajectory.
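The patent does not specify the matching algorithm; as one hedged illustration, a brute-force sum-of-squared-differences template match can locate a target region in the image (pure NumPy, toy-sized arrays):

```python
import numpy as np

def match_template(img, tmpl):
    """Return the (row, col) where the template best matches the image,
    scored by sum-of-squared-differences (lower = better)."""
    H, W = img.shape
    h, w = tmpl.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            d = img[r:r + h, c:c + w].astype(np.float64) - tmpl
            score = (d * d).sum()
            if score < best:
                best, best_pos = score, (r, c)
    return best_pos

img = np.zeros((20, 20), dtype=np.uint8)
img[5:9, 12:16] = 200                 # a bright square stands in for the target pad
tmpl = np.full((4, 4), 200, dtype=np.uint8)
print(match_template(img, tmpl))      # (5, 12)
```

The matched position would then be fed through the calibration transform to drive the dispensing trajectory.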
Specifically, in step 1, the image enhancement processing uses the gray histogram modification technique. A histogram only counts the probability that pixels of a given gray level occur; it cannot reflect the two-dimensional coordinates of those pixels in the image, so different images may share the same histogram. The sharpness and black-white contrast of an image can nonetheless be judged from the shape of its gray histogram. The pixel gray levels of an image with a known gray-level probability distribution are put through a mapping transformation that turns it into a new image with a uniform gray-level probability distribution, thereby making the image clear.
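A minimal sketch of this gray histogram modification, implemented in pure NumPy as histogram equalization, the standard mapping toward a uniform gray-level distribution (the patent does not spell out the exact mapping, so equalization is used here as the conventional example):

```python
import numpy as np

def equalize(img):
    """Map each gray level through the normalized cumulative histogram
    so the output levels spread over the full 0..255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) * 255.0 / max(cdf[-1] - cdf_min, 1))
    return lut.clip(0, 255).astype(np.uint8)[img]

# A low-contrast image: 32 gray levels packed into the narrow band 100..131.
img = np.repeat(np.arange(100, 132, dtype=np.uint8), 32).reshape(32, 32)
out = equalize(img)
print(img.min(), img.max(), "->", out.min(), out.max())  # 100 131 -> 0 255
```

The narrow input band is stretched to the full dynamic range, which is exactly the "making the image clear" effect described above.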
Specifically, in step 2, the image data encoding and transmission processing uses predictive coding; generally only the initial values and the prediction errors of the image data need to be transmitted. The spatial and sequential variation rules of the image data are expressed by a prediction formula: if the values of the pixels preceding a given pixel are known, that pixel's value can be predicted from the formula.
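A minimal previous-neighbour DPCM sketch of this idea: only the first value and the prediction errors are transmitted, and the decoder rebuilds the row. The 1-D previous-pixel predictor used here is the simplest instance of such a prediction formula, chosen for illustration:

```python
import numpy as np

def dpcm_encode(row):
    """Previous-neighbour DPCM: transmit the first value plus prediction
    errors; each pixel is predicted as equal to the one before it."""
    row = row.astype(np.int16)        # widen so differences cannot overflow
    return row[0], np.diff(row)

def dpcm_decode(first, errors):
    """Rebuild the row from the initial value and the error stream."""
    return np.concatenate(([first], first + np.cumsum(errors))).astype(np.int16)

row = np.array([100, 102, 101, 105, 105, 110], dtype=np.uint8)
first, err = dpcm_encode(row)
print(err)                            # small residuals: [ 2 -1  4  0  5]
```

The residuals cluster near zero, so a subsequent entropy coder can compress them far better than the raw pixel values.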
Specifically, in step 3, external and internal interference is unavoidable while the actual image is formed, transmitted, received and processed: the image deteriorates because of non-uniform sensitivity of the sensing elements during photoelectric conversion, quantization noise during digitization, transmission errors and human factors. Image smoothing removes the distortion introduced by the imaging equipment and the environment during actual imaging, extracts the useful information and restores the original image.
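A minimal smoothing sketch: a 3x3 median filter, one common choice for removing the kind of impulse noise described above while preserving edges better than averaging would (the filter choice is an assumption; the patent does not name one):

```python
import numpy as np

def median3(img):
    """3x3 median filter: robust against impulse ('salt') noise."""
    p = np.pad(img, 1, mode="edge")
    # Stack the nine shifted neighbourhoods, then take the per-pixel median.
    stack = np.stack([p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                      for dy in range(3) for dx in range(3)])
    return np.median(stack, axis=0).astype(img.dtype)

img = np.full((16, 16), 50, dtype=np.uint8)
img[4, 7] = 255                 # a single noise spike
out = median3(img)
print(out[4, 7])                # spike removed -> 50
```

A single corrupted pixel is outvoted by its eight neighbours, so the spike disappears without blurring the rest of the image.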
Specifically, in step 4, the object is separated from the image or a region representing the surface of the same object is detected during the image edge sharpening process.
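A minimal edge-sharpening sketch using the 4-neighbour Laplacian, a standard way to strengthen contour edges (the specific operator is an assumption; the patent names no operator):

```python
import numpy as np

def sharpen(img):
    """Laplacian sharpening: subtract the 4-neighbour Laplacian so that
    contour edges and details are emphasised."""
    f = img.astype(np.float32)
    p = np.pad(f, 1, mode="edge")
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * f
    return np.clip(f - lap, 0, 255).astype(np.uint8)

# A step edge: left half dark (0), right half bright (100).
img = np.zeros((8, 8), dtype=np.uint8)
img[:, 4:] = 100
out = sharpen(img)
print(out[0, 3], out[0, 4])  # 0 200: the bright side of the edge overshoots
```

The overshoot on the bright side of the step (the dark side clips at 0) is what makes the object boundary stand out for the later segmentation step.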
Specifically, in step 5, the classification in the image segmentation process is based on the gray level value, color, spectral characteristic, spatial characteristic or texture characteristic of the pixel.
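A minimal gray-value segmentation sketch: Otsu's method picks the threshold that best separates two gray classes. The patent lists the gray value as one possible basis; Otsu is used here only as a standard automatic choice, not as the patent's own method:

```python
import numpy as np

def otsu_threshold(img):
    """Pick the gray threshold maximising the between-class variance
    (Otsu's method) over the image histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    total_mean = (np.arange(256) * hist).sum()
    best_t, best_var = 0, -1.0
    w0 = cum0 = 0.0
    for t in range(256):
        w0 += hist[t]
        cum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = cum0 / w0, (total_mean - cum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Two clearly separated gray classes: background 50, object 200.
img = np.full((10, 10), 50, dtype=np.uint8)
img[:, 5:] = 200
t = otsu_threshold(img)
mask = (img > t).astype(np.uint8)
print(t, mask.sum())  # 50 50
```

Color, spectral, spatial or texture features would each need their own classifier, but the structure (feature per pixel, then partition) is the same.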
Specifically, in step 6, the feature extraction of the image is to perform edge refinement by contour tracing after extracting the edge of the object image, remove pseudo edge points and noise points, encode the edge points forming a closed curve, and extract the centroid coordinates, area, curvature, edge, corner points and short axis direction features of the object.
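A minimal feature-extraction sketch computing two of the listed features, area and centroid coordinates, from the first moments of a binary region (contour tracing, curvature and the axis features are omitted for brevity):

```python
import numpy as np

def region_features(mask):
    """Area and centroid of a binary region from its zeroth and first
    moments: area = pixel count, centroid = mean pixel coordinate."""
    ys, xs = np.nonzero(mask)
    area = len(xs)
    return area, (ys.mean(), xs.mean())

mask = np.zeros((12, 12), dtype=np.uint8)
mask[2:6, 3:9] = 1                    # a 4x6 rectangular blob
area, (cy, cx) = region_features(mask)
print(area, cy, cx)                   # 24 3.5 5.5
```

These moment-based features are what the recognition step would then compare against the model library.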
Specifically, in step 7, the image recognition is to correctly segment the object to be recognized from the background of the image, and try to match the attribute map of the object in the created image with the attribute map of the assumed model library.
When the invention is used, the light source is turned on to illuminate the object and the camera acquires an image of it; the image is then processed. First, image enhancement adjusts the contrast, highlights the important details and improves visual quality. Next, the image data is encoded and transmitted; during transmission the data must be compressed, mainly through encoding and transform compression of the image data. Image smoothing then removes the noise in the image, extracts useful information and restores the original image. Edge sharpening strengthens the contour edges and details to form complete object boundaries. Image segmentation divides the image into several parts, each corresponding to an object surface, with the gray level or texture within each part conforming to a certain uniformity measure. Feature extraction then identifies and quantifies the key features of the image, and image recognition distinguishes each segmented object in the scene with a recognition algorithm and gives each a specific mark. The extracted pixel coordinates are converted into physical coordinates that the motion platform can recognise. Finally, after the computer has processed the visual information, the dispensing module performs image matching and recognition and controls the manipulator to carry out the dispensing operation along the most appropriate dispensing trajectory.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may be made to the embodiments or portions thereof without departing from the spirit and scope of the invention.

Claims (10)

1. An intelligent dispensing identification system based on a visual dispensing robot is characterized by comprising an image acquisition module, an image processing module, a measurement system calibration module and a dispensing module, wherein the image acquisition module comprises a light source and a camera, the light source is set as ambient light, a protective screen is arranged on the light source, and the camera is set as a visual sensor;
the processing procedure of the image processing module comprises the following steps:
step 1, image enhancement processing is used for adjusting the contrast of an image, highlighting important details in the image and improving visual quality;
step 2, encoding and transmitting image data, wherein in the data transmission process, data needs to be compressed, and the data compression is mainly completed through encoding and transformation compression of the image data;
step 3, smoothing the image, which is used for removing noise in the image;
step 4, sharpening the edge of the image, and strengthening the contour edge and the details in the image to form a complete object boundary;
step 5, image segmentation processing, namely dividing the image into a plurality of parts, each corresponding to an object surface, such that the gray level or texture within each part conforms to a certain uniformity measure;
step 6, extracting the characteristics of the image, and identifying and quantifying the key characteristics of the image;
and 7, image recognition, namely distinguishing each segmented object in the scenery by using a recognition algorithm and endowing the objects with specific marks.
2. The intelligent dispensing identification system based on the visual dispensing robot as claimed in claim 1, wherein: in the process of calibrating the measurement system, the measurement system calibration module first determines the pixel resolution of the camera, i.e. the actual size represented by one pixel, then determines the positional relationship between the glue head and the camera origin by a dotting method, and finally obtains the physical coordinates of the capture point in the motion platform coordinate system from the transformation between the motion platform coordinate system and the pixel image coordinate system.
3. The intelligent dispensing identification system based on the visual dispensing robot as claimed in claim 1, wherein: the dispensing module performs image matching and recognition after the computer has processed the visual information, and controls the manipulator to carry out the dispensing operation along the most appropriate dispensing trajectory.
4. The intelligent dispensing identification system based on the visual dispensing robot as claimed in claim 1, wherein: in the step 1, the image enhancement processing adopts a gray histogram modification technology to carry out image enhancement, and the gray level of a pixel in an image with known gray probability distribution is subjected to certain mapping transformation to be changed into a new image with uniform gray probability distribution, so that the aim of making the image clear is fulfilled.
5. The intelligent dispensing identification system based on the visual dispensing robot as claimed in claim 1, wherein: in step 2, the image data encoding and transmission processing adopts predictive encoding, that is, the spatial change rule and the sequence change rule of the image data are expressed by a prediction formula, and if the values of the adjacent pixels in front of a certain pixel are known, the pixel value can be predicted by the formula.
6. The intelligent dispensing identification system based on the visual dispensing robot as claimed in claim 1, wherein: in the step 3, the image smoothing process can remove image distortion caused by imaging equipment and environment in the actual imaging process, extract useful information and restore the original image.
7. The intelligent dispensing identification system based on the visual dispensing robot as claimed in claim 1, wherein: in step 4, during the image edge sharpening process, the object is separated from the image or the region representing the surface of the same object is detected.
8. The intelligent dispensing identification system based on the visual dispensing robot as claimed in claim 1, wherein: in step 5, the classification in the image segmentation process is based on the gray value, color, spectral characteristic, spatial characteristic or texture characteristic of the pixel.
9. The intelligent dispensing identification system based on the visual dispensing robot as claimed in claim 1, wherein: in the step 6, the feature extraction of the image is to adopt contour tracing to carry out edge thinning after the edge of the object image is extracted, remove false edge points and noise points, encode the edge points forming a closed curve, and extract the characteristics of the centroid coordinate, the area, the curvature, the edge, the angular point and the short axis direction of the object.
10. The intelligent dispensing identification system based on the visual dispensing robot as claimed in claim 1, wherein: in the step 7, the image recognition is to correctly segment the object to be recognized from the background of the image, and try to match the attribute map of the object in the established image with the attribute map of the assumed model library.
CN202111137295.3A 2021-09-27 2021-09-27 Intelligent dispensing identification system based on visual dispensing robot Pending CN113996500A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111137295.3A CN113996500A (en) 2021-09-27 2021-09-27 Intelligent dispensing identification system based on visual dispensing robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111137295.3A CN113996500A (en) 2021-09-27 2021-09-27 Intelligent dispensing identification system based on visual dispensing robot

Publications (1)

Publication Number Publication Date
CN113996500A (en) 2022-02-01

Family

ID=79921774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111137295.3A Pending CN113996500A (en) 2021-09-27 2021-09-27 Intelligent dispensing identification system based on visual dispensing robot

Country Status (1)

Country Link
CN (1) CN113996500A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040011284A1 (en) * 2000-09-29 2004-01-22 Josef Schucker Device for applying adhesive to a workpiece
CN106969706A (en) * 2017-04-02 2017-07-21 聊城大学 Workpiece sensing and three-dimension measuring system and detection method based on binocular stereo vision
CN206661587U (en) * 2017-02-23 2017-11-24 广州双维工业自动化控制设备有限公司 A kind of three axle point gum machine positioning control systems based on machine vision
CN111921788A (en) * 2020-08-07 2020-11-13 欣辰卓锐(苏州)智能装备有限公司 High-precision dynamic tracking dispensing method and device
CN112893007A (en) * 2021-01-15 2021-06-04 深圳市悦创进科技有限公司 Dispensing system based on machine vision and dispensing method thereof

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CITY_PROLOVE: "A Brief Discussion of Machine Vision Image Processing Technology", Elecfans (电子发烧友网) *
王敏, 黄心汉: "A Robot Automatic Recognition and Grasping System Based on Vision and Ultrasonic Technology", Journal of Huazhong University of Science and Technology (华中科技大学学报) *
谢俊, 朱广韬 et al.: "Design and Research of a Dispensing System Based on Machine Vision", Electronic Measurement Technology (电子测量技术) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114833038A (en) * 2022-04-15 2022-08-02 苏州鸿优嘉智能科技有限公司 Gluing path planning method and system
CN114950886A (en) * 2022-06-06 2022-08-30 东莞理工学院 Positioning system based on machine vision
CN115722491A (en) * 2022-11-01 2023-03-03 智能网联汽车(山东)协同创新研究院有限公司 Control system for surface machining dust removal
CN115722491B (en) * 2022-11-01 2023-09-01 智能网联汽车(山东)协同创新研究院有限公司 Control system for surface processing dust removal
CN117943310A (en) * 2024-03-22 2024-04-30 深圳万利科技有限公司 Intelligent production method of automatic bottom buckling type environment-friendly glue box based on visual identification
CN117943310B (en) * 2024-03-22 2024-05-28 深圳万利科技有限公司 Intelligent production method of automatic bottom buckling type environment-friendly glue box based on visual identification

Similar Documents

Publication Publication Date Title
CN113996500A (en) Intelligent dispensing identification system based on visual dispensing robot
Liu et al. A detection and recognition system of pointer meters in substations based on computer vision
CN111951237B (en) Visual appearance detection method
CN111833306B (en) Defect detection method and model training method for defect detection
JP6305171B2 (en) How to detect objects in a scene
CN101770582B (en) Image matching system and method
CN103424409B (en) Vision detecting system based on DSP
CN110033431B (en) Non-contact detection device and detection method for detecting corrosion area on surface of steel bridge
CN103093191A (en) Object recognition method with three-dimensional point cloud data and digital image data combined
CN112907519A (en) Metal curved surface defect analysis system and method based on deep learning
CN108288289B (en) LED visual detection method and system for visible light positioning
CN107527368B (en) Three-dimensional space attitude positioning method and device based on two-dimensional code
CN112200056B (en) Face living body detection method and device, electronic equipment and storage medium
CN110334727B (en) Intelligent matching detection method for tunnel cracks
CN110084830B (en) Video moving object detection and tracking method
CN112784712B (en) Missing child early warning implementation method and device based on real-time monitoring
CN116486287A (en) Target detection method and system based on environment self-adaptive robot vision system
CN114494169A (en) Industrial flexible object detection method based on machine vision
CN115375991A (en) Strong/weak illumination and fog environment self-adaptive target detection method
CN117557565B (en) Detection method and device for lithium battery pole piece
CN111199198A (en) Image target positioning method, image target positioning device and mobile robot
CN116703895B (en) Small sample 3D visual detection method and system based on generation countermeasure network
CN117214178A (en) Intelligent identification method for appearance defects of package on packaging production line
CN114266748B (en) Method and device for judging surface integrity of process board in field of rail traffic overhaul
CN116051808A (en) YOLOv 5-based lightweight part identification and positioning method

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20220201)