WO2022048120A1 - Machine-vision-based intelligent vacuuming robot for production lines - Google Patents

Machine-vision-based intelligent vacuuming robot for production lines

Info

Publication number
WO2022048120A1
WO2022048120A1 (PCT/CN2021/078710, CN2021078710W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
machine vision
production lines
conveyor belt
robot
Prior art date
Application number
PCT/CN2021/078710
Other languages
English (en)
French (fr)
Inventor
呙倩
于宝成
徐文霞
Original Assignee
武汉工程大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 武汉工程大学 (Wuhan Institute of Technology)
Publication of WO2022048120A1 publication Critical patent/WO2022048120A1/zh

Links

Images

Classifications

    • A — HUMAN NECESSITIES
    • A47 — FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L — DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L 5/00 — Structural features of suction cleaners
    • A47L 5/12 — Structural features of suction cleaners with power-driven air-pumps or air-compressors, e.g. driven by motor vehicle engine vacuum
    • A47L 5/22 — Structural features of suction cleaners with power-driven air-pumps or air-compressors with rotary fans
    • A47L 5/28 — Suction cleaners with handles and nozzles fixed on the casings, e.g. wheeled suction cleaners with steering handle
    • A47L 5/30 — Suction cleaners with handles and nozzles fixed on the casings, with driven dust-loosening tools, e.g. rotating brushes
    • A47L 9/00 — Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
    • A47L 9/28 — Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
    • A47L 9/2805 — Parameters or conditions being sensed
    • A47L 9/2894 — Details related to signal transmission in suction cleaners
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/24 — Classification techniques
    • G06F 18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 — Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 — Administration; Management
    • G06Q 10/08 — Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q 10/083 — Shipping
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/40 — Extraction of image or video features
    • G06V 20/00 — Scenes; Scene-specific elements
    • G06V 20/50 — Context or environment of the image
    • G06V 20/52 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Definitions

  • the invention belongs to the technical field of machine vision, and in particular relates to an intelligent vacuuming robot for production lines based on machine vision.
  • At present, the light scattering method is usually used to detect the dust concentration and thereby judge whether the conveyor belt needs to be vacuumed.
  • However, the uncertainty of this technique is relatively high, which may lead to misjudgment and low detection accuracy; for example, when the environment is humid or water has accumulated, dust adheres to the conveyor belt and the light scattering method cannot accurately detect the dust concentration.
  • The technical problem solved by the present invention is to provide a machine-vision-based intelligent vacuuming robot for production lines, overcoming the high uncertainty and low accuracy of detecting the dust concentration by the light scattering method.
  • The invention provides a machine-vision-based intelligent vacuuming robot for production lines, including: a tray, a suction port, a cleaning brush, a centrifugal fan, a dust filter bag and a dust collector; the vacuuming robot further includes an image collector and an image processor; wherein,
  • the image collector is used to collect images of the conveyor belt;
  • the image processor is used to receive the collected image of the conveyor belt, convert it to grayscale, and then use the gray-level co-occurrence matrix to extract the texture features of the grayscale image (second-order moment, contrast, inverse difference moment and correlation) in each of four directions, combining the texture feature values into a one-dimensional vector; the one-dimensional vector of the template image, which is the image of the conveyor belt when clean, is obtained in the same way; based on the improved Lance (Canberra) distance algorithm, the similarity S of the two one-dimensional vectors is calculated and compared with a preset threshold to judge whether to vacuum; the improved Lance distance formula is:

    d(X, Y) = Σ_{i=1}^{n} |x_i - y_i| / |y_i|

  • where x_i represents a texture feature value of the collected conveyor-belt image, y_i represents the corresponding texture feature value of the template image, and n represents the number of texture feature values.
  • Grayscale conversion is performed on the image using the maximum-value method.
  • The smoothing method is the neighborhood-averaging method.
  • The image sharpening methods are: the spatial-domain method, the frequency-domain method and the template-convolution method.
  • the image acquisition device is a camera
  • the image processor is a PC
  • the image acquisition device is connected to the PC through an image acquisition card.
  • the cleaning robot further includes a light source for providing illumination to the acquisition area of the image acquisition device.
  • The present invention uses an image collector to collect an image of the conveyor belt, performs grayscale conversion, and then uses the gray-level co-occurrence matrix to extract the texture features of the grayscale image; the texture features of the template image, the image of the conveyor belt when clean, are obtained in the same way. Based on the improved Lance distance algorithm, the similarity is calculated to determine whether to vacuum; the improved algorithm uses the texture feature values of the template image as the denominator, which improves the accuracy of dust detection, including under different illumination conditions.
  • To keep the similarity value within (0, 1], convenient for expression as a percentage, the Lance distance is normalized; to reduce the influence of individual vector elements on the distance, the improved Lance distance is averaged and then normalized.
  • After grayscale conversion, noise removal and image sharpening further improve the accuracy of dust detection; a light source illuminates the acquisition area of the image collector, improving the accuracy of dust detection and eliminating errors caused by illumination changes.
  • Fig. 1 is the structural schematic diagram of the production line intelligent vacuuming robot based on machine vision of the present invention
  • Fig. 2 is the vacuuming flow chart of the production line intelligent vacuuming robot based on machine vision of the present invention
  • Fig. 3 is the visual system structure block diagram of the production line intelligent vacuuming robot based on machine vision of the present invention
  • FIG. 4 is a schematic diagram of the visual working process of the production line intelligent vacuuming robot based on machine vision of the present invention.
  • The invention uses an image collector to collect an image of the conveyor belt, performs grayscale conversion, and then uses the gray-level co-occurrence matrix to extract the texture features of the grayscale image; the texture features of the template image, the image of the conveyor belt when clean, are obtained in the same way. Based on the improved Lance distance algorithm, the similarity is calculated to determine whether to vacuum; the improved algorithm uses the texture feature values of the template image as the denominator, which improves the accuracy of dust detection, including under different illumination conditions.
  • In an embodiment, the machine-vision-based intelligent vacuuming robot for production lines includes: a tray, a suction port, a cleaning brush, a centrifugal fan, a dust filter bag and a dust collector; the vacuuming robot further includes an image collector and an image processor; wherein,
  • the image collector, for example a camera or video camera, is used to capture images of the conveyor belt;
  • the image processor receives the collected image of the conveyor belt, converts it to grayscale, extracts the texture features of the grayscale image (second-order moment, contrast, inverse difference moment and correlation) in each of four directions with the gray-level co-occurrence matrix, and combines the texture feature values extracted in the four directions into a one-dimensional vector; the one-dimensional vector of the template image, the image of the conveyor belt when clean, is obtained in the same way; based on the improved Lance distance algorithm, the similarity S of the two one-dimensional vectors is calculated and compared with a preset threshold to judge whether to vacuum; the improved Lance distance formula is:

    d(X, Y) = Σ_{i=1}^{n} |x_i - y_i| / |y_i|

  • where x_i represents a texture feature value of the collected image, y_i represents the corresponding texture feature value of the template image, and n represents the number of texture feature values.
  • the present invention also provides a method for designing an intelligent cleaning robot for a production line based on machine vision.
  • the specific steps are as follows:
  • the intelligent vacuum robot for production line based on machine vision mainly includes a tray 2, a dust suction port, a dust brush 3, a centrifugal fan, a dust filter bag and a dust collector 4;
  • A suction port is provided in the middle of the tray 2, and a dust brush 3 at the bottom of the tray 2; the brush contacts the ground so that fine dust and small paper scraps on the ground are stirred up into the air.
  • the dust collector 4 is arranged above the tray 2 for collecting sundries and dust for centralized treatment.
  • the centrifugal fan is arranged above the dust suction port to generate suction, and use the generated wind to suck up dust and residues, and the dust filter bag is used to filter the wind and collect the dust in the dust collector 4 .
  • A large-capacity lithium battery is also installed on the vacuuming robot, giving it good continuous cleaning endurance; the charging connector 5 charges the robot, though a wired power supply can of course also be used.
  • the robot vision system is divided into two parts: hardware and software.
  • The hardware includes: the vacuuming robot, which performs the cleaning function; robot peripherals, such as a liquid-crystal display and keyboard input devices, used to display the robot's status and to set its parameters; a fast video-signal processor, which converts the image signal into a signal the display screen can accept; video-signal digitizing equipment, which converts images into digital signals; and scene and distance sensors, such as a camera and range sensors, which photograph the conveyor belt and sense the position of the vacuuming robot.
  • The software includes: robot control software, which controls movement, charging and vacuuming; vision-processing algorithms, which process the images; and computer software that applies those algorithms, which can run on an external computer or be integrated into the vacuuming robot.
  • the overall workflow of the system is shown in Figure 2.
  • the camera is used to collect real-time images on the conveyor belt, and then the collected images are processed.
  • The gray-level co-occurrence matrix is used to extract the texture features of the grayscale image (second-order moment, contrast, inverse difference moment and correlation) in each of four directions, and the texture feature values extracted in the four directions are combined into a one-dimensional vector.
  • In the same way, the one-dimensional vector of the template image, the image of the conveyor belt when clean, is obtained; the similarity of the two vectors is calculated with the improved Lance distance and compared with the preset threshold to determine whether to vacuum.
  • the visual working process of the intelligent cleaning robot is shown in Figure 4.
  • image analysis is performed to determine whether vacuuming processing is required; finally, the processing results are output.
  • peripherals can be used to complete the human-computer interaction function.
  • the hardware of the robot vision system mainly includes light source, lens, camera, and image acquisition card.
  • Light source is one of the important components in machine vision inspection system.
  • The texture, brightness and reflectivity of each object's surface differ, and natural light is not adjustable, so it cannot satisfy the lighting conditions required by every object; an artificial light source, such as an LED source, is therefore used for inspection.
  • the main function of the lens is to focus the imaging target on the photosensitive surface of the image sensor.
  • The quality of the lens directly affects the overall performance of the machine vision system, so selecting and mounting the lens properly is also important.
  • the present invention uses a CCD camera.
  • Image-signal transmission involves moving a large amount of data quickly.
  • When the signal is transferred to a computer for processing, the average transfer rate of the PCI interface is 50-90 MB/s, which may not meet the requirement for a high instantaneous transfer rate.
  • In practice this task is usually handled by a frame grabber, and the PCI interface a frame grabber typically adopts has a theoretical bandwidth of 132 MB/s.
  • The image acquisition card is a board that can be inserted into the computer or used independently of it; it processes the digital signal and sends it to the computer, serving as the interface between the image-acquisition part and the image-processing part. To avoid losing data when conflicting with other PCI devices, the frame grabber should carry a data cache.
  • a grayscale image refers to an image that contains only luminance information but no color information. Each pixel of a grayscale image only needs one byte to store the grayscale value, so less resources are occupied in image processing, and the operation speed is relatively fast.
  • The present invention adopts the maximum-value method to convert the color image captured by the camera to grayscale.
  • The maximum-value method takes the maximum of the three color-component intensities in the color image as the gray value of the grayscale image.
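The maximum-value conversion described above can be sketched in a few lines of Python/NumPy (an illustration only; the patent specifies no implementation, and the function name is invented here):

```python
import numpy as np

def grayscale_max(rgb):
    """Maximum-value method: the gray level of each pixel is the largest
    of its three color components, as described in the text."""
    rgb = np.asarray(rgb)
    # Take the per-pixel maximum over the last (channel) axis.
    return rgb.max(axis=-1).astype(np.uint8)

# A pixel (10, 200, 30) maps to gray level max(10, 200, 30) = 200.
pixel = np.array([[[10, 200, 30]]], dtype=np.uint8)
gray = grayscale_max(pixel)
```

Alternatives such as the average or weighted (luminance) methods exist; the patent explicitly chooses the maximum-value variant.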
  • image noise can be considered as an unpredictable random signal.
  • Common noises include random noise, Gaussian noise, and salt and pepper noise.
  • the main method to eliminate image noise is to smooth the grayscale image.
  • Among the many smoothing methods, the neighborhood-averaging method is used here to smooth the image.
  • Let the gray value of the pixel at point (i, j) in the image be f(i, j), and let the gray value after neighborhood-average smoothing be g(i, j); then g(i, j) is determined by the average gray value of the several pixels in a neighborhood containing (i, j).
  • The specific calculation can be expressed by the following formula:

    g(i, j) = (1/M) Σ_{(m, n) ∈ A} f(m, n)

  • where A represents the set of all neighborhood points centered on (i, j), and M is the total number of pixels in A.
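A minimal NumPy sketch of the neighborhood-averaging formula above (edge handling by border replication is an assumption of this sketch; the patent does not specify it):

```python
import numpy as np

def neighborhood_average(gray, k=3):
    """Smooth a grayscale image: g(i, j) is the mean of the k x k
    neighborhood A centred on (i, j), i.e. (1/M) * sum over A of f(m, n)."""
    gray = np.asarray(gray, dtype=float)
    pad = k // 2
    # Replicate border pixels so every point has a full neighborhood.
    padded = np.pad(gray, pad, mode="edge")
    out = np.zeros_like(gray)
    # Sum the k*k shifted copies of the image, then divide by M = k*k.
    for di in range(k):
        for dj in range(k):
            out += padded[di:di + gray.shape[0], dj:dj + gray.shape[1]]
    return out / (k * k)
```

A single noisy spike of value 9 in a 3 x 3 zero image, for example, is spread to an average of 1 at its own position.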
  • Sharpening requires the image to have a high signal-to-noise ratio; otherwise it amplifies the noise more than the image itself.
  • In general, sharpening is done only after noise has been removed or reduced.
  • Image sharpening can be performed both in the spatial domain and in the frequency domain.
  • Commonly used image sharpening methods include the spatial-domain method, the frequency-domain method and the template-convolution method; the method suited to the vacuuming robot's machine vision system is chosen according to the effect of each.
  • The present invention adopts Laplacian-operator sharpening.
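The patent names the Laplacian operator but not the exact kernel; a common sketch uses the 4-neighbour Laplacian and sharpens by subtracting the Laplacian response from the image (g = f - lap(f)). This is one standard form, assumed here for illustration:

```python
import numpy as np

# 4-neighbour Laplacian kernel (one common choice; assumed, not from the patent).
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

def laplacian_sharpen(gray):
    """Sharpen a grayscale image: convolve with the Laplacian kernel,
    then subtract the response from the original and clip to [0, 255]."""
    gray = np.asarray(gray, dtype=float)
    padded = np.pad(gray, 1, mode="edge")
    lap = np.zeros_like(gray)
    for di in range(3):
        for dj in range(3):
            lap += LAPLACIAN[di, dj] * padded[di:di + gray.shape[0],
                                              dj:dj + gray.shape[1]]
    return np.clip(gray - lap, 0, 255)
```

On a flat region the Laplacian is zero and the image is unchanged; at edges and spikes the gray-level differences are amplified, which is the sharpening effect the text describes.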
  • the gray level co-occurrence matrix is a feature extraction method based on mathematical statistics theory.
  • the extracted texture features have good discrimination ability and can effectively reflect the distribution probability of local texture features of images.
  • Fourteen texture feature parameters can be derived from the GLCM for texture analysis; four of the fourteen are mutually uncorrelated. These four features are not only easy to calculate but also provide high discrimination accuracy. Their calculation formulas are as follows:
  • (1) Second-order moment, which reflects the uniformity of the distribution of image elements:

    ASM = Σ_i Σ_j p(i, j)^2

  • (2) Contrast, which reflects the clarity of the image and the depth of texture grooves:

    CON = Σ_i Σ_j (i - j)^2 p(i, j)

  • (3) Inverse difference moment, which reflects the homogeneity of the image:

    IDM = Σ_i Σ_j p(i, j) / (1 + (i - j)^2)

  • (4) Correlation, which reflects the local gray-level similarity of the texture:

    COR = [Σ_i Σ_j i · j · p(i, j) - μ_i μ_j] / (S_i S_j)

  • where μ_i, μ_j, S_i, S_j are defined as:

    μ_i = Σ_i Σ_j i · p(i, j),  μ_j = Σ_i Σ_j j · p(i, j)
    S_i^2 = Σ_i Σ_j (i - μ_i)^2 p(i, j),  S_j^2 = Σ_i Σ_j (j - μ_j)^2 p(i, j)

  • and p(i, j) is the normalized frequency with which two pixels in the given spatial relationship have gray levels i and j respectively; μ_i and μ_j represent means.
  • The texture feature of an image is the regular distribution of gray values across the image.
  • The gray-level co-occurrence matrix is a matrix function of pixel distance and angle; by calculating the correlation between the gray levels of pairs of points at a given distance and in a given direction, it reflects comprehensive information about the image's directionality, spacing, and range and rate of gray-level variation. The four texture feature values (second-order moment, correlation, inverse difference moment and contrast) are computed for the co-occurrence matrices in the four directions, and the 16 values are combined into a one-dimensional vector whose similarity to the template vector is measured with the improved Lance distance.
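The feature-extraction step above can be sketched as follows: a distance-1 GLCM in each of the four directions (0°, 45°, 90°, 135°), the four features per direction, and the resulting 16-element vector. This is a simplified illustration with a reduced number of gray levels; all names are invented for the sketch:

```python
import numpy as np

# Pixel offsets for the four directions 0°, 45°, 90°, 135° at distance 1.
OFFSETS = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}

def glcm(gray, angle, levels=8):
    """Normalised grey-level co-occurrence matrix for one direction."""
    di, dj = OFFSETS[angle]
    h, w = gray.shape
    P = np.zeros((levels, levels))
    for i in range(h):
        for j in range(w):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                P[gray[i, j], gray[ni, nj]] += 1
    return P / P.sum()

def haralick4(P):
    """Second-order moment (ASM), contrast, inverse difference moment
    and correlation of one co-occurrence matrix."""
    levels = P.shape[0]
    i, j = np.indices((levels, levels))
    asm = (P ** 2).sum()
    contrast = ((i - j) ** 2 * P).sum()
    idm = (P / (1 + (i - j) ** 2)).sum()
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    s_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
    s_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    # Guard the degenerate case of zero variance (e.g. a uniform image).
    corr = ((i * j * P).sum() - mu_i * mu_j) / (s_i * s_j) if s_i * s_j else 0.0
    return np.array([asm, contrast, idm, corr])

def texture_vector(gray, levels=8):
    """Combine the 4 features over the 4 directions into a 16-element vector."""
    return np.concatenate([haralick4(glcm(gray, a, levels)) for a in OFFSETS])
```

For a perfectly uniform image the ASM is 1 and the contrast is 0 in every direction, matching the intuition that the features measure distribution uniformity and gray-level variation.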
  • The Lance (Canberra) distance is dimensionless; for two vectors X = (x_1, x_2, ..., x_n) and Y = (y_1, y_2, ..., y_n) whose elements are all positive, it is calculated as:

    d(X, Y) = Σ_{i=1}^{n} |x_i - y_i| / (x_i + y_i)

  • The invention calculates image similarity from the feature-parameter value vectors of the template image and the image to be detected, and judges from that similarity whether there is dust on the surface.
  • The template image serves as the reference image. If the standard Lance distance were used to calculate similarity, the denominator in the formula would change with every image to be detected, the calculated distances would not be comparable, and the template image would lose its original meaning.
  • The present invention therefore improves the Lance distance: in the improved algorithm, only the elements of the template image's feature-parameter value vector are used as the denominator.
  • The distance then takes values in [0, +∞); to keep the similarity within (0, 1], the distance needs to be normalized.
  • By the properties of the exponential function, the distance is first negated and then exponentiated with base e; the result is the image similarity. The formula for the similarity S of two one-dimensional vectors is:

    S = exp( - Σ_{i=1}^{n} |x_i - y_i| / |y_i| )

  • To reduce the influence of individual vector elements on the distance, the improved Lance distance is averaged; the similarity of two one-dimensional vectors is then:

    S = exp( - (1/n) Σ_{i=1}^{n} |x_i - y_i| / |y_i| )
  • In the two feature-parameter value vectors, n is 16; with the template vector M_i = (m_i1, ..., m_i16) and the to-be-detected vector M_j = (m_j1, ..., m_j16), the similarity becomes:

    S = exp( - (1/16) Σ_{k=1}^{16} |m_ik - m_jk| / |m_ik| )

  • where m_ik represents the k-th element of the template image's texture-feature parameter value vector and m_jk represents the k-th element of the to-be-detected image's texture-feature parameter value vector.
  • The similarity calculation based on the improved Lance distance effectively improves the accuracy of conveyor-belt dust detection, including under different illumination conditions, and eliminates the error that illumination changes introduce into vision-based dust detection.
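Putting the matching step together, here is a sketch of the improved Lance distance, the averaged exponential similarity, and the threshold decision. The 0.9 threshold and all function names are assumed placeholders; the patent leaves the preset threshold unspecified:

```python
import numpy as np

def improved_lance(x, y):
    """Improved Lance (Canberra) distance: only the template-image feature
    values y appear in the denominator, so distances computed for different
    test images remain comparable."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.sum(np.abs(x - y) / np.abs(y)))

def similarity(x, y):
    """Normalised similarity in (0, 1]: average the improved distance over
    the n features, then map it through exp(-d)."""
    d = improved_lance(x, y) / len(np.asarray(y).ravel())
    return float(np.exp(-d))

def needs_vacuum(test_vec, template_vec, threshold=0.9):
    """Vacuum when the test image is too dissimilar from the clean-belt
    template (i.e. its similarity falls below the preset threshold)."""
    return similarity(test_vec, template_vec) < threshold
```

Identical vectors give d = 0 and S = 1; as the belt image drifts away from the clean template, S decays toward 0 and the robot is triggered to vacuum.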

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Marketing (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Human Resources & Organizations (AREA)
  • General Engineering & Computer Science (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Manipulator (AREA)

Abstract

A machine-vision-based intelligent vacuuming robot for production lines, comprising: an image collector, for collecting images of the conveyor belt; and an image processor, for receiving the collected conveyor-belt image, converting it to grayscale, using the gray-level co-occurrence matrix to extract the texture features of the grayscale image (second-order moment, contrast, inverse difference moment and correlation) in each of four directions, and combining the texture feature values into a one-dimensional vector; the one-dimensional vector of the template image, the image of the conveyor belt when clean, is obtained in the same way; based on the improved Lance (Canberra) distance algorithm, the similarity S of the two one-dimensional vectors is calculated and compared with a preset threshold to judge whether to vacuum. Using the texture feature values of the template image as the denominator in the improved Lance distance improves the accuracy of dust detection, including under different illumination conditions.

Description

Machine-vision-based intelligent vacuuming robot for production lines
Technical Field
The invention belongs to the technical field of machine vision, and in particular relates to a machine-vision-based intelligent vacuuming robot for production lines.
Background Art
At present, the light scattering method is usually used to detect the dust concentration and thereby judge whether the conveyor belt needs to be vacuumed. However, this technique has relatively high uncertainty and may lead to misjudgment, giving low detection accuracy; for example, when the environment is humid or water has accumulated, dust adheres to the conveyor belt and the light scattering method cannot accurately detect the dust concentration.
Summary of the Invention
The technical problem solved by the present invention is to provide a machine-vision-based intelligent vacuuming robot for production lines, overcoming the high uncertainty and low accuracy of detecting the dust concentration by the light scattering method.
The invention provides a machine-vision-based intelligent vacuuming robot for production lines, including: a tray, a suction port, a cleaning brush, a centrifugal fan, a dust filter bag and a dust collector; the vacuuming robot further includes an image collector and an image processor; wherein,
the image collector is used to collect images of the conveyor belt;
the image processor is used to receive the collected conveyor-belt image, convert it to grayscale, use the gray-level co-occurrence matrix to extract the texture features of the grayscale image (second-order moment, contrast, inverse difference moment and correlation) in each of four directions, and combine the texture feature values into a one-dimensional vector; the one-dimensional vector of the template image, the image of the conveyor belt when clean, is obtained in the same way; based on the improved Lance (Canberra) distance algorithm, the similarity S of the two one-dimensional vectors is calculated and compared with a preset threshold to judge whether to vacuum; the improved Lance distance formula is:

d(X, Y) = Σ_{i=1}^{n} |x_i - y_i| / |y_i|

where x_i represents a texture feature value of the collected conveyor-belt image, y_i represents the corresponding texture feature value of the template image, and n represents the number of texture feature values.
Further, based on the improved Lance distance algorithm, the similarity of the two one-dimensional vectors is calculated as:

S = exp( - Σ_{i=1}^{n} |x_i - y_i| / |y_i| )

Further, with the improved Lance distance averaged over the n features, the similarity of the two one-dimensional vectors is calculated as:

S = exp( - (1/n) Σ_{i=1}^{n} |x_i - y_i| / |y_i| )
Further, the image is converted to grayscale using the maximum-value method.
Further, after grayscale conversion, noise removal and image sharpening are applied to the image.
Further, smoothing is used to remove image noise; the smoothing method is the neighborhood-averaging method.
Further, the image sharpening methods are: the spatial-domain method, the frequency-domain method and the template-convolution method.
Further, the image collector is a camera, the image processor is a PC, and the image collector is connected to the PC through an image acquisition card.
Further, the vacuuming robot also includes a light source for illuminating the acquisition area of the image collector.
Beneficial effects of the invention: an image collector collects an image of the conveyor belt, which is converted to grayscale, and the gray-level co-occurrence matrix extracts the texture features of the grayscale image; the texture features of the template image, the image of the conveyor belt when clean, are obtained in the same way. Based on the improved Lance distance algorithm, the similarity is calculated to determine whether to vacuum; the improved algorithm uses the texture feature values of the template image as the denominator, improving the accuracy of dust detection, including under different illumination conditions.
Further, to keep the similarity value within (0, 1], convenient for expression as a percentage, the Lance distance is normalized; to reduce the influence of individual vector elements on the distance, the improved Lance distance is averaged and then normalized.
Further, after grayscale conversion, noise removal and image sharpening are applied to further improve the accuracy of dust detection; a light source illuminates the acquisition area of the image collector, improving the accuracy of dust detection and eliminating errors caused by illumination changes.
Brief Description of the Drawings
Fig. 1 is a structural schematic diagram of the machine-vision-based intelligent vacuuming robot for production lines of the present invention;
Fig. 2 is the vacuuming flow chart of the machine-vision-based intelligent vacuuming robot for production lines of the present invention;
Fig. 3 is a structural block diagram of the vision system of the machine-vision-based intelligent vacuuming robot for production lines of the present invention;
Fig. 4 is a schematic diagram of the visual working process of the machine-vision-based intelligent vacuuming robot for production lines of the present invention.
In the figures: 1 - camera, 2 - tray, 3 - dust brush, 4 - dust collector, 5 - charging connector.
Detailed Description
The present invention is further described below with reference to the accompanying drawings.
The invention uses an image collector to collect an image of the conveyor belt and converts it to grayscale, then uses the gray-level co-occurrence matrix to extract the texture features of the grayscale image; the texture features of the template image, the image of the conveyor belt when clean, are obtained in the same way. Based on the improved Lance distance algorithm, the similarity is calculated to determine whether to vacuum; the improved algorithm uses the texture feature values of the template image as the denominator, improving the accuracy of dust detection, including under different illumination conditions.
The machine-vision-based intelligent vacuuming robot for production lines of this embodiment includes: a tray, a suction port, a cleaning brush, a centrifugal fan, a dust filter bag and a dust collector; the robot further includes an image collector and an image processor; wherein,
the image collector, for example a camera or video camera, is used to collect images of the conveyor belt;
the image processor receives the collected image, converts it to grayscale, extracts the texture features of the grayscale image (second-order moment, contrast, inverse difference moment and correlation) in each of four directions with the gray-level co-occurrence matrix, combines the texture feature values from the four directions into a one-dimensional vector, obtains the one-dimensional vector of the template image (the conveyor belt when clean) in the same way, calculates the similarity S of the two vectors with the improved Lance distance algorithm, and compares it with a preset threshold to judge whether to vacuum; the improved Lance distance formula is:

d(X, Y) = Σ_{i=1}^{n} |x_i - y_i| / |y_i|

where x_i represents a texture feature value of the collected image, y_i the corresponding texture feature value of the template image, and n the number of texture feature values.
The invention also provides a design method for the machine-vision-based intelligent vacuuming robot for production lines, with the following specific steps.
S1. Design the structure of the machine-vision-based intelligent vacuuming robot.
As shown in Fig. 1, the robot mainly includes a tray 2, a suction port, a dust brush 3, a centrifugal fan, a dust filter bag and a dust collector 4. A suction port is provided in the middle of the tray 2, and a dust brush 3 at its bottom; the brush contacts the ground so that fine dust and small paper scraps on the ground are stirred up into the air. The dust collector 4 is arranged above the tray 2 to collect debris and dust for centralized treatment. The centrifugal fan, arranged above the suction port, generates suction that draws up dust and residue, and the dust filter bag filters the airflow so that the dust is collected in the dust collector 4. In addition, a large-capacity lithium battery installed on the robot gives it good continuous cleaning endurance; the charging connector 5 charges the robot, though a wired power supply can of course also be used.
S2. Machine vision system.
S201. Structure of the vacuuming robot's vision system.
The robot vision system is divided into a hardware part and a software part. As shown in Fig. 3, the hardware includes: the vacuuming robot, which performs the cleaning function; robot peripherals, such as a liquid-crystal display and keyboard input devices, used to display the robot's status and to set its parameters; a fast video-signal processor, which converts the image signal into a signal the display screen can accept; video-signal digitizing equipment, which converts images into digital signals; and scene and distance sensors, such as a camera and range sensors, which photograph the conveyor belt and sense the position of the vacuuming robot.
The software includes: robot control software, which controls movement, charging and vacuuming; vision-processing algorithms, which process the images; and computer software that applies those algorithms, which can run on an external computer or be integrated into the vacuuming robot.
The overall workflow of the system is shown in Fig. 2: the camera collects real-time images of the conveyor belt, the collected images are processed, and the gray-level co-occurrence matrix extracts the texture features of the grayscale image (second-order moment, contrast, inverse difference moment and correlation) in each of four directions, whose values are combined into a one-dimensional vector.
In the same way, the one-dimensional vector of the template image, the image of the conveyor belt when clean, is obtained; the similarity of the two vectors is calculated with the improved Lance distance and compared with the preset threshold to determine whether to vacuum.
S202. Visual working process of the robot.
The visual working process of the intelligent vacuuming robot is shown in Fig. 4. First, real-time images of the conveyor belt are collected; the raw image is a continuous analog electrical signal, which is then converted into a discrete digital signal. Because converting a real scene into an image signal always introduces noise or distortion, image processing is performed first; this stage can draw on a large body of image-processing techniques and algorithms, such as filtering, enhancement and edge detection. Image analysis then determines whether vacuuming is required, and finally the result is output. During this process, peripherals can provide the human-machine interaction functions.
S203. Hardware of the machine vision system.
The hardware of the robot vision system mainly comprises the light source, lens, camera and image acquisition card.
The light source is one of the important components of a machine-vision inspection system. The texture, brightness and reflectivity of each object's surface differ, and natural light is not adjustable, so it cannot satisfy the lighting conditions required by every object; in practice, appropriate illumination must be provided according to the on-site conditions of the inspection system, so an artificial light source, such as an LED source, is used.
In a machine vision system, the main function of the lens is to focus the imaging target on the photosensitive surface of the image sensor. The quality of the lens directly affects the overall performance of the system, so selecting and mounting the lens properly is also important.
The present invention uses a CCD camera.
Image-signal transmission involves moving a large amount of data quickly. When the signal is transferred to a computer for processing, the average transfer rate of the PCI interface is 50-90 MB/s, which may not meet the requirement for a high instantaneous transfer rate. In practice this task is usually handled by an image acquisition card (frame grabber), whose PCI interface has a theoretical peak bandwidth of 132 MB/s. The acquisition card is a board that can be inserted into the computer or used independently of it; it processes the digital signal and sends it to the computer, serving as the interface between the image-acquisition part and the image-processing part. To avoid losing data when conflicting with other PCI devices, the acquisition card should generally carry a data cache.
S3. Image processing in machine vision.
S301. Grayscale conversion.
A grayscale image contains only luminance information, no color information. Each pixel of a grayscale image needs only one byte to store its gray value, so grayscale images consume fewer resources in image processing and are relatively fast to operate on. There are many grayscale-conversion methods; the present invention uses the maximum-value method on the color images collected by the camera, taking the maximum of the three color-component intensities as the gray value of the grayscale image.
S302. Noise removal.
Image collection inevitably introduces noise, and factors such as the instability of the light-source intensity add further interference, complicating subsequent analysis, so the collected raw image must be de-noised. In general, image noise can be regarded as an unpredictable random signal; common kinds include random noise, Gaussian noise and salt-and-pepper noise. The main way to remove image noise is to smooth the grayscale image. Among the many smoothing methods, the neighborhood-averaging method is used here. Let the gray value of the pixel at point (i, j) be f(i, j), and let the gray value after neighborhood-average smoothing be g(i, j); then g(i, j) is determined by the average gray value of the several pixels in a neighborhood containing (i, j):

g(i, j) = (1/M) Σ_{(m, n) ∈ A} f(m, n)

where A represents the set of all neighborhood points centered on (i, j), and M is the total number of pixels in A.
S303. Image sharpening.
Sharpening requires the image to have a high signal-to-noise ratio; otherwise it amplifies the noise more than the image itself, so in general noise is removed or reduced before sharpening. Sharpening can be performed in either the spatial domain or the frequency domain; commonly used methods include the spatial-domain method, the frequency-domain method and the template-convolution method, chosen according to their effect for the vacuuming robot's machine vision system. The present invention adopts Laplacian-operator sharpening.
S304. Gray-level co-occurrence matrix (GLCM).
The GLCM is a feature-extraction method based on mathematical statistics; the extracted texture features have good discrimination ability and effectively reflect the distribution probability of the local texture features of an image. Fourteen texture feature parameters can be derived from the GLCM for texture analysis; four of the fourteen are mutually uncorrelated, easy to calculate, and provide high discrimination accuracy. Their formulas are as follows.
(1) Second-order moment, reflecting the uniformity of the distribution of image elements:

ASM = Σ_i Σ_j p(i, j)^2

(2) Contrast, reflecting the clarity of the image and the depth of texture grooves:

CON = Σ_i Σ_j (i - j)^2 p(i, j)

(3) Inverse difference moment, reflecting the homogeneity of the image:

IDM = Σ_i Σ_j p(i, j) / (1 + (i - j)^2)

(4) Correlation, reflecting the local gray-level similarity of the texture:

COR = [Σ_i Σ_j i · j · p(i, j) - μ_i μ_j] / (S_i S_j)

where μ_i, μ_j, S_i, S_j are defined as:

μ_i = Σ_i Σ_j i · p(i, j),  μ_j = Σ_i Σ_j j · p(i, j)
S_i^2 = Σ_i Σ_j (i - μ_i)^2 p(i, j),  S_j^2 = Σ_i Σ_j (j - μ_j)^2 p(i, j)

and p(i, j) is the normalized frequency with which two pixels in the given spatial relationship have gray levels i and j respectively; μ_i and μ_j represent means.
The texture feature of an image is the regular distribution of gray values across the image. The GLCM is a matrix function of pixel distance and angle; by calculating the correlation between the gray levels of pairs of points at a given distance and in a given direction, it reflects comprehensive information about the image's directionality, spacing, and range and rate of gray-level variation. The four texture feature values (second-order moment, correlation, inverse difference moment and contrast) are calculated for the co-occurrence matrices in the four directions, the 16 values are combined into a one-dimensional vector, and the improved Lance distance measures the similarity of two such vectors.
S305. Similarity matching based on the improved Lance distance.
The Lance (Canberra) distance is dimensionless. For two vectors X = (x_1, x_2, ..., x_n) and Y = (y_1, y_2, ..., y_n) whose elements are all positive, the Lance distance is calculated as:

d(X, Y) = Σ_{i=1}^{n} |x_i - y_i| / (x_i + y_i)

The invention calculates image similarity from the feature-parameter value vectors of the template image and the image to be detected, and judges from that similarity whether there is dust on the surface. The template image serves as the reference image; if the standard Lance distance were used to calculate similarity, the denominator in the formula would change with every image to be detected, the calculated distances would not be comparable, and the template image would lose its original meaning. To solve this, the present invention improves the Lance distance: in the improved algorithm, only the elements of the template image's feature-parameter value vector are used as the denominator.
The distance then takes values in [0, +∞); to keep the similarity within (0, 1], the distance needs to be normalized. By the properties of the exponential function, the distance is first negated and then exponentiated with base e; the result is the image similarity. Based on the improved Lance distance algorithm, the similarity S of two one-dimensional vectors is:

S = exp( - Σ_{i=1}^{n} |x_i - y_i| / |y_i| )

To reduce the influence of individual vector elements on the distance, the improved Lance distance is averaged; the similarity of two one-dimensional vectors is then:

S = exp( - (1/n) Σ_{i=1}^{n} |x_i - y_i| / |y_i| )

Let the feature-parameter value vector of the template image be M_i = (m_i1, m_i2, ..., m_in) and that of the image to be detected be M_j = (m_j1, m_j2, ..., m_jn); in the two vectors, n is 16. The improved Lance-distance similarity is then:

S = exp( - (1/16) Σ_{k=1}^{16} |m_ik - m_jk| / |m_ik| )

where n is 16, m_ik represents the k-th element of the template image's texture-feature parameter value vector, and m_jk represents the k-th element of the to-be-detected image's texture-feature parameter value vector.
The similarity calculation based on the improved Lance distance effectively improves the accuracy of conveyor-belt dust detection, including under different illumination conditions, and eliminates the error that illumination changes introduce into vision-based dust detection.
Those skilled in the art will readily understand that the above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent substitution or improvement made within the spirit and principles of the invention shall fall within its scope of protection.

Claims (10)

  1. A machine-vision-based intelligent vacuuming robot for production lines, characterized by comprising: a tray, a suction port, a cleaning brush, a centrifugal fan, a dust filter bag and a dust collector; the vacuuming robot further comprises an image collector and an image processor; wherein,
    the image collector is used to collect images of the conveyor belt;
    the image processor is used to receive the collected conveyor-belt image, convert it to grayscale, use the gray-level co-occurrence matrix to extract the texture features of the grayscale image (second-order moment, contrast, inverse difference moment and correlation) in each of four directions, and combine the texture feature values into a one-dimensional vector; the one-dimensional vector of the template image, the image of the conveyor belt when clean, is obtained in the same way; based on the improved Lance (Canberra) distance algorithm, the similarity S of the two one-dimensional vectors is calculated and compared with a preset threshold to judge whether to vacuum; the improved Lance distance formula is:

    d(X, Y) = Σ_{i=1}^{n} |x_i - y_i| / |y_i|

    where x_i represents a texture feature value of the collected conveyor-belt image, y_i represents the corresponding texture feature value of the template image, and n represents the number of texture feature values.
  2. The machine-vision-based intelligent vacuuming robot for production lines of claim 1, characterized in that, based on the improved Lance distance algorithm, the similarity of the two one-dimensional vectors is calculated as:

    S = exp( - Σ_{i=1}^{n} |x_i - y_i| / |y_i| )

  3. The machine-vision-based intelligent vacuuming robot for production lines of claim 1, characterized in that, based on the improved Lance distance algorithm with averaging, the similarity of the two one-dimensional vectors is calculated as:

    S = exp( - (1/n) Σ_{i=1}^{n} |x_i - y_i| / |y_i| )

  4. The machine-vision-based intelligent vacuuming robot for production lines of claim 1, characterized in that the image is converted to grayscale using the maximum-value method.
  5. The machine-vision-based intelligent vacuuming robot for production lines of claim 1, characterized in that, after grayscale conversion, noise removal and image sharpening are applied to the image.
  6. The machine-vision-based intelligent vacuuming robot for production lines of claim 5, characterized in that smoothing is used to remove image noise.
  7. The machine-vision-based intelligent vacuuming robot for production lines of claim 6, characterized in that the smoothing method is the neighborhood-averaging method.
  8. The machine-vision-based intelligent vacuuming robot for production lines of claim 5, characterized in that the image sharpening methods are: the spatial-domain method, the frequency-domain method and the template-convolution method.
  9. The machine-vision-based intelligent vacuuming robot for production lines of claim 1, characterized in that the image collector is a camera, the image processor is a PC, and the image collector is connected to the PC through an image acquisition card.
  10. The machine-vision-based intelligent vacuuming robot for production lines of claim 1, characterized in that the vacuuming robot further comprises a light source for illuminating the acquisition area of the image collector.
PCT/CN2021/078710 2020-09-01 2021-03-02 基于机器视觉的产线智能吸尘机器人 WO2022048120A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010902667.6A CN112075876A (zh) 2020-09-01 2020-09-01 基于机器视觉的产线智能吸尘机器人
CN202010902667.6 2020-09-01

Publications (1)

Publication Number Publication Date
WO2022048120A1 true WO2022048120A1 (zh) 2022-03-10

Family

ID=73731287

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/078710 WO2022048120A1 (zh) 2020-09-01 2021-03-02 基于机器视觉的产线智能吸尘机器人

Country Status (2)

Country Link
CN (1) CN112075876A (zh)
WO (1) WO2022048120A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112075876A (zh) * 2020-09-01 2020-12-15 武汉工程大学 基于机器视觉的产线智能吸尘机器人
CN117292101B (zh) * 2023-11-21 2024-02-09 南通黛圣婕家居科技有限公司 基于计算机视觉的智能除尘系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050192707A1 (en) * 2004-02-27 2005-09-01 Samsung Electronics Co., Ltd. Dust detection method and apparatus for cleaning robot
CN102613944A (zh) * 2012-03-27 2012-08-01 复旦大学 清洁机器人脏物识别系统及清洁方法
CN206214042U (zh) * 2016-08-03 2017-06-06 九阳股份有限公司 清洁机器人的灰尘检测装置
CN206482534U (zh) * 2016-11-25 2017-09-12 成都理工大学 一种自动定位工业吸尘器
CN112075876A (zh) * 2020-09-01 2020-12-15 武汉工程大学 基于机器视觉的产线智能吸尘机器人

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103815827A (zh) * 2014-03-04 2014-05-28 中国矿业大学 基于dsp的功率自适应吸尘器
CN105411491A (zh) * 2015-11-02 2016-03-23 中山大学 一种基于环境监测的家庭智能清洁系统及方法
CN110955235A (zh) * 2018-09-27 2020-04-03 广东美的生活电器制造有限公司 扫地机器人的控制方法及控制装置


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG YUBO, ZHANG YADONG, ZHANG BIN: "Desktop dust detection algorithm based on gray gradient co-occurrence matrix", JOURNAL OF COMPUTER APPLICATIONS, JISUANJI YINGYONG, CN, vol. 39, no. 8, 10 August 2019 (2019-08-10), CN , pages 2414 - 2419, XP055907541, ISSN: 1001-9081, DOI: 10.11772/j.issn.1001-9081.2019010081 *

Also Published As

Publication number Publication date
CN112075876A (zh) 2020-12-15

Similar Documents

Publication Publication Date Title
WO2022048120A1 (zh) 基于机器视觉的产线智能吸尘机器人
CN109816678B (zh) 一种基于视觉的喷嘴雾化角度自动检测系统及方法
CN107678192B (zh) 一种基于机器视觉的Mura缺陷检测方法
CN109870461A (zh) 一种电子元器件质量检测系统
CN101561249B (zh) 手术刀片配合尺寸自动检测方法
CN107064160A (zh) 基于显著性检测的纺织品表面瑕疵检测方法及系统
CN113252568A (zh) 基于机器视觉的镜片表面缺陷检测方法、系统、产品、终端
CN106969708A (zh) 一种骨料形态质量的检测装置和方法
CN109584215A (zh) 一种电路板在线视觉检测系统
CN108470338A (zh) 一种水位监测方法
CN111665199A (zh) 一种基于机器视觉的电线电缆颜色检测识别方法
CN111830039B (zh) 一种智能化的产品质量检测方法及装置
CN117635565B (zh) 一种基于图像识别的半导体表面缺陷检测系统
CN109255785A (zh) 一种轴承外观缺陷检测系统
WO2023231262A1 (zh) 基于视觉振频识别的提升钢丝绳张力检测方法
CN115631191A (zh) 一种基于灰度特征和边缘检测的堵煤检测算法
CN111521128A (zh) 一种基于光学投影的贝类外部形态自动测量方法
CN106989672A (zh) 一种基于机器视觉的工件测量方法
CN109815784A (zh) 一种基于红外热像仪的智能分类方法、系统及存储介质
CN112102319A (zh) 脏污图像检测方法、脏污图像检测装置及脏污图像检测机构
CN115661187B (zh) 用于中药制剂分析的图像增强方法
WO2023134251A1 (zh) 一种基于聚类的光条提取方法及装置
CN115184362B (zh) 一种基于结构光投影的快速缺陷检测方法
CN115471537A (zh) 一种基于单目摄像头的移动目标距离与高度的测量方法
CN112337810B (zh) 一种视觉引导分拣珍珠机器人及其分拣方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21863191

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21863191

Country of ref document: EP

Kind code of ref document: A1