CN116433990A - Ultrasonic cleaner feedback governing system based on visual detection - Google Patents


Info

Publication number
CN116433990A
CN116433990A (application CN202310685123.2A)
Authority
CN
China
Prior art keywords: image, points, extremum, cleaning image, group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310685123.2A
Other languages
Chinese (zh)
Other versions
CN116433990B (en)
Inventor
曾献金
王峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hengchaoyuan Washing Technology Shenzhen Co ltd
Original Assignee
Hengchaoyuan Washing Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hengchaoyuan Washing Technology Shenzhen Co ltd filed Critical Hengchaoyuan Washing Technology Shenzhen Co ltd
Priority to CN202310685123.2A priority Critical patent/CN116433990B/en
Publication of CN116433990A publication Critical patent/CN116433990A/en
Application granted granted Critical
Publication of CN116433990B publication Critical patent/CN116433990B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/765 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B08 CLEANING
    • B08B CLEANING IN GENERAL; PREVENTION OF FOULING IN GENERAL
    • B08B3/00 Cleaning by methods involving the use or presence of liquid or steam
    • B08B3/04 Cleaning involving contact with liquid
    • B08B3/10 Cleaning involving contact with liquid with additional treatment of the liquid or of the object being cleaned, e.g. by heat, by electricity or by vibration
    • B08B3/12 Cleaning involving contact with liquid with additional treatment of the liquid or of the object being cleaned, e.g. by heat, by electricity or by vibration by sonic or ultrasonic vibrations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/54 Extraction of image or video features relating to texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/7635 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks based on graphs, e.g. graph cuts or spectral clustering
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B40/00 Technologies aiming at improving the efficiency of home appliances, e.g. induction cooking or efficient technologies for refrigerators, freezers or dish washers

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image data processing, and in particular to an ultrasonic cleaner feedback adjustment system based on visual detection. From the graph structure formed by the connected domains in the cleaning image, and from the extrema of the extremum points under different degrees of filtering, the system obtains rotation-invariant chain codes for the groups formed by the connected domains; from the matching relation and similarity between these rotation-invariant chain codes it derives a stain probability for each group-category region, thereby realizing intelligent feedback adjustment of the ultrasonic cleaner. Because the chain-code sequence derived from the graph-structure relations between connected domains is rotation invariant, cleaning images of adjacent frames can be matched accurately, yielding a stain probability that faithfully reflects the degree of residual staining during cleaning and enabling intelligent feedback adjustment of the ultrasonic cleaner.

Description

Ultrasonic cleaner feedback governing system based on visual detection
Technical Field
The invention relates to the technical field of image data processing, in particular to a feedback regulating system of an ultrasonic cleaner based on visual detection.
Background
When an ultrasonic cleaner is used to clean tableware, whether cleaning should continue is usually decided from the stain condition on the surfaces of the dishes and similar articles. On this basis, this patent provides an ultrasonic cleaner feedback adjustment system based on visual detection: by constructing rotation-invariant chain codes, it avoids the matching difficulties caused by changes in the position and form of the tableware during cleaning, and it combines changes between adjacent frames to obtain a stain probability, which then decides whether cleaning should continue.
Disclosure of Invention
The invention provides a feedback regulating system of an ultrasonic cleaner based on visual detection, which aims to solve the existing problems.
The feedback regulation system of the ultrasonic cleaner based on visual detection adopts the following technical scheme:
the invention provides a feedback regulation system of an ultrasonic cleaner based on visual detection, which comprises the following modules:
an image data acquisition module: acquiring an image of the interior of a cleaning bin of the ultrasonic cleaner, and marking the image as a cleaning image;
and a graph structure construction and analysis module: taking each connected domain in the cleaning image as a node, taking the Euclidean distance between the centroids of any two connected domains as the edge value of the corresponding edge, and constructing a graph structure from the nodes, edges and edge values; obtaining several clustering results over all nodes of the graph structure, marking them as node categories, obtaining the similarity between the adjacency matrices of the graph structure corresponding to any two node categories, and dividing the node categories according to this similarity to obtain several group categories;
obtaining the comparison extremum of each extremum point in the cleaning image according to the extrema of the extremum points under windows of different sizes;
a rotation-invariant chain code module: obtaining retention points according to the magnitude of the comparison extremum; obtaining the retention degree of each retention point according to its distance from the centre point of the cleaning image; obtaining a final starting point by comparing the retention degrees between the cleaning images of adjacent frames; obtaining the chain codes of all connected domains in the group category containing the final starting point; and marking the concatenated chain codes of all connected domains in that group category as the rotation-invariant chain code;
stain probability module: obtaining the stain probability according to the similarity between the rotation-invariant chain codes;
the cleaning machine adjusting module: and carrying out feedback adjustment on the ultrasonic cleaner according to the size of the stain probability.
Further, the group category is obtained by the following steps:
firstly, clustering the graph structure with the Girvan-Newman algorithm to obtain several node clusters, and marking each cluster as a node category, where any node category contains several nodes;
then, iteratively clustering the graph structure with the Girvan-Newman algorithm; whenever the number of node categories changes, acquiring the adjacency matrix of the graph structure corresponding to each node category, acquiring the average ratio of the elements at the same positions of two adjacency matrices and marking it as the similarity of the adjacency matrices, then grouping the node categories by this similarity: all categories whose similarity exceeds a preset similarity threshold are placed in one group category, while ensuring that the similarity of any two categories in the same group category exceeds the threshold; after all node categories are divided, several group categories are obtained.
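The adjacency-matrix similarity and the threshold grouping described above can be sketched as follows. This is a minimal illustration, assuming the "average ratio of elements at the same position" is taken as min/max at each position so the similarity lies in [0, 1] (the patent does not specify the direction of the ratio), and using the 0.7 threshold mentioned later in the description.

```python
def adjacency_similarity(A, B):
    """Average element-wise ratio min/max over same positions of two
    equally-sized adjacency matrices (assumed interpretation of the
    'average ratio of all elements at the same position')."""
    ratios = []
    for row_a, row_b in zip(A, B):
        for a, b in zip(row_a, row_b):
            if a == 0 and b == 0:
                ratios.append(1.0)  # identical (empty) entries count as fully similar
            else:
                ratios.append(min(a, b) / max(a, b))
    return sum(ratios) / len(ratios)

def group_categories(adj_mats, threshold=0.7):
    """Greedily group node categories so that every pair inside a group
    has adjacency-matrix similarity above the threshold."""
    groups = []
    for idx, A in enumerate(adj_mats):
        for g in groups:
            if all(adjacency_similarity(A, adj_mats[j]) > threshold for j in g):
                g.append(idx)
                break
        else:
            groups.append([idx])
    return groups
```

For example, two identical matrices have similarity 1.0 and fall in one group, while a matrix whose off-diagonal weights differ by a factor of five falls below 0.7 and forms its own group.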
Further, the comparing extremum is obtained by the following steps:
firstly, performing Gaussian filtering on a cleaning image by utilizing a plurality of windows with preset sizes to obtain a plurality of corresponding filtering images, and obtaining extreme points in the filtering images and extreme value properties corresponding to the extreme points;
then, the extrema of all extremum points within a window of the same size are sorted from largest to smallest, and the second-largest is marked as the secondary extremum;
finally, the comparison extremum s1 of any extremum point is acquired as:
[Formula not reproduced in the source (image placeholder); s1 is computed from m, n and max() as defined below.]
where m represents the extremum of the extremum point in the window of the corresponding size, n represents the secondary extremum in that window, and max() returns the maximum of the values in brackets.
Further, the retention degree is acquired as follows:
firstly, the 10 extremum points with the largest comparison extrema among all extremum points are reserved and marked as retention points;
then, the centre point of the cleaning image is marked as the image centre point, and the Euclidean distance between each retention point and the image centre point is acquired;
finally, the centrality s2 of any retention point is acquired as:
[Formula not reproduced in the source (image placeholder); given the variables below, a plausible form is s2 = e^(-d/d_max).]
where d represents the Euclidean distance between the retention point and the image centre point, d_max represents the maximum Euclidean distance between any retention point and the image centre point, and e represents the natural constant.
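The centrality formula itself is only an image in the source. The sketch below assumes the form s2 = e^(-d/d_max), which is consistent with the variables listed (d, d_max, e) and with the later remark that d is normalized by d_max to keep s2 from becoming too small; the assumption is marked in the comments.

```python
import math

def centrality(d, d_max):
    """Assumed form s2 = exp(-d / d_max): a retention point at the image
    centre (d = 0) gets centrality 1, the farthest point gets exp(-1).
    The exact formula is an image in the source, so this is a guess
    consistent with the listed variables."""
    return math.exp(-d / d_max)

def retention_degree(s1, d, d_max):
    """Retention degree = comparison extremum s1 times centrality s2,
    as stated in the following section."""
    return s1 * centrality(d, d_max)
```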
Further, the final starting point is obtained by the following steps:
firstly, the most recently acquired cleaning image is marked as the current-frame cleaning image, the frame preceding it as the previous-frame cleaning image, and all retention points in both images are acquired;
then, the product of the comparison extremum s1 and the centrality s2 of a retention point is marked as its retention degree; pixel points whose retention degree exceeds a preset retention-degree threshold are marked as candidate starting points, and all candidate starting points in the current-frame and previous-frame cleaning images are obtained;
finally, the candidate starting points of the current frame that have the same retention degree as a candidate starting point in the previous frame are marked as equivalent starting points, and the equivalent starting point with the largest retention degree in the current-frame cleaning image is marked as the final starting point.
Further, the rotation invariant chain code is obtained by the following steps:
step (1): the connected domain containing the final starting point is marked as the starting connected domain; using Dijkstra's algorithm, the shortest path that starts from the centre point of the starting connected domain and traverses the centre points of all connected domains in its group category is acquired;
step (2): the chain code of the starting connected domain is obtained by tracing its edge counter-clockwise, starting from the final starting point;
step (3): for every other connected domain in the group category, a chain code is obtained starting from each pixel point on its edge, so each connected domain corresponds to several chain codes; each chain-code sequence is converted into a number by reading its code values in order, recorded as the chain-code value, and the chain code with the smallest chain-code value among all chain codes of a connected domain is taken as the chain code of that connected domain;
step (4): the chain codes of all connected domains in any group category are connected end to end in the order of the shortest path, and the concatenated chain code is marked as the rotation-invariant chain code of that group category.
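Steps (3) and (4) above — picking, for each connected domain, the cyclic rotation of its boundary chain code with the smallest numeric value, then concatenating along the shortest-path order — can be sketched as follows. A minimal sketch: in practice the starting connected domain keeps the final starting point as its fixed start instead of being normalized.

```python
def normalize_chain_code(code):
    """Return the cyclic rotation of a Freeman chain code (digits 0-7)
    whose digit sequence forms the smallest number: a start-point-
    invariant canonical form, as in step (3)."""
    rotations = [code[i:] + code[:i] for i in range(len(code))]
    # for equal-length digit sequences, lexicographic min == numeric min
    return min(rotations)

def rotation_invariant_code(codes_in_path_order):
    """Concatenate per-connected-domain canonical codes end to end
    along the shortest-path ordering, as in step (4)."""
    out = []
    for c in codes_in_path_order:
        out.extend(normalize_chain_code(c))
    return out
```

Because every boundary start point yields a cyclic rotation of the same code, the canonical form is the same no matter where the tracing begins, which is what makes matching between frames stable.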
Further, the stain probability is obtained by the following steps:
firstly, the group categories of the current-frame and previous-frame cleaning images are matched with the Hungarian algorithm: the rotation-invariant chain codes of all group categories in the current-frame cleaning image are taken as left nodes, those of the previous-frame cleaning image as right nodes, the optimal matching relation between left and right nodes is acquired, the optimal matches are taken as edges between nodes, and the edge value of each edge is the cosine similarity between the corresponding rotation-invariant chain codes;
then, the stain probability P of the region corresponding to any group category of the current-frame cleaning image is acquired as:
[Formula not reproduced in the source (image placeholder); P is computed from s, N1, N2, N, min() and exp() as defined below.]
where s represents the cosine similarity of the optimal match between the group category in the current-frame cleaning image and its counterpart in the previous-frame cleaning image; N1 represents the number of connected domains contained in the group category of the current-frame cleaning image; N2 represents the number of connected domains contained in the matched group category of the previous-frame cleaning image; N represents the total number of connected domains contained in all group categories of the two frames; min() returns the minimum value; and exp() represents the exponential function with base e.
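The matching step can be illustrated with a brute-force optimal assignment standing in for the Hungarian algorithm (adequate for the handful of group categories per frame); the chain codes are assumed here to have already been brought to equal length, since cosine similarity needs vectors of the same dimension.

```python
from itertools import permutations
import math

def cosine(u, v):
    """Cosine similarity between two equal-length numeric sequences."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def best_matching(left, right):
    """Exhaustive assignment maximizing total cosine similarity: a
    small-scale stand-in for the Hungarian algorithm. Returns
    (left_index, right_index, similarity) triples for the best match."""
    k = min(len(left), len(right))
    best_total, best_pairs = -math.inf, []
    for perm in permutations(range(len(right)), k):
        pairs = list(zip(range(k), perm))
        total = sum(cosine(left[i], right[j]) for i, j in pairs)
        if total > best_total:
            best_total, best_pairs = total, pairs
    return [(i, j, cosine(left[i], right[j])) for i, j in best_pairs]
```

For production-sized inputs one would use a polynomial-time assignment solver; the exhaustive search is only meant to make the matching criterion explicit.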
Further, the feedback adjustment of the ultrasonic cleaner according to the size of the stain probability comprises the following specific steps:
when the stain probability is larger than a preset stain-probability threshold, the ultrasonic cleaner continues cleaning; once the stain probability is detected to have fallen below the threshold, the cleaner is controlled to stop cleaning and the user is prompted to take out the cleaned tableware.
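The control rule in this step is a simple threshold comparison; a sketch follows, with an illustrative threshold value (the patent does not give a numeric stain-probability threshold).

```python
def feedback_step(stain_probability, threshold=0.5):
    """One feedback decision: continue cleaning while the stain
    probability exceeds the threshold, otherwise stop and prompt
    removal of the tableware. The 0.5 default is illustrative,
    not taken from the patent."""
    if stain_probability > threshold:
        return "continue_cleaning"
    return "stop_and_prompt_removal"
```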
The technical scheme of the invention has the following beneficial effects. The final group categories are obtained by computing the similarity of the categories in each iteration of the graph clustering, so that the group categories represent the surface texture pattern of the cleaned object as fully as possible; this highlights the texture features and later helps to distinguish texture from stains accurately. Constructing a rotation-invariant chain code avoids the low matching accuracy caused by changes in the position and form of the tableware during ultrasonic cleaning. Combining the shape changes of stains across adjacent frames with the rotation-invariant chain codes yields stain probabilities for the different groups, which determine whether cleaning continues, greatly improving detection precision and the degree of intelligence of the feedback adjustment. Compared with the traditional approach of matching consecutive frames with SIFT, the huge computation of the SIFT operator is avoided; moreover, the multi-scale nature of SIFT contributes little in this scene, whereas the rotation-invariant chain code matches regions between consecutive frames with high precision, making the method better suited to the ultrasonic cleaning scenario and yielding a more accurate detection result.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a block flow diagram of a feedback adjustment system for an ultrasonic cleaning machine based on visual inspection in accordance with the present invention;
FIG. 2 is a schematic representation of the texture pattern of the surface of a dinner plate.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following detailed description is given below of the feedback adjustment system of the ultrasonic cleaner based on visual detection according to the invention, which is provided by combining the accompanying drawings and the preferred embodiment. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the feedback adjustment system of the ultrasonic cleaner based on visual detection provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a block flow diagram of a feedback adjustment system for an ultrasonic cleaning machine based on visual inspection according to an embodiment of the invention is shown, the system includes the following blocks:
an image data acquisition module: when the ultrasonic cleaner is used for cleaning tableware, an image acquisition card is used for acquiring an image of the interior of a cleaning chamber for cleaning the tableware in the ultrasonic cleaner, and the image is recorded as a cleaning image.
And a graph structure construction analysis module: and acquiring connected domains in the cleaning image, acquiring a plurality of categories by using a graph clustering method, and grouping by using the similarity of each category to acquire group categories.
And (1) obtaining edges and connected domains in the cleaning image, obtaining a plurality of categories by using a graph clustering method, obtaining the similarity between the categories, and grouping according to the similarity to obtain a plurality of group categories.
Since the ultrasonic cleaner must focus its cleaning on the stained areas of an article and stop only when no stain remains, and since the article's own surface texture pattern may in practice be mistaken for a stain, the actual stained parts must be distinguished from the texture pattern on the article surface.
The texture pattern belongs to the article and cannot be washed away, whereas the surface shape of stains changes during cleaning.
To distinguish stains from texture patterns, the texture information on the article surface must be detected. Fig. 2 shows a schematic diagram of the texture pattern on the surface of a dinner plate: because the texture patterns on a plate are usually repeated, the similarity between individual connected domains may be small, while the similarity between combinations formed by several connected domains may be large;
therefore, in this embodiment, the group categories are obtained by applying graph clustering to the connected domains in the cleaning image. In addition, because stain shapes are highly random, a group category is more likely to be the plate's own texture pattern when the similarity between group categories is larger and the number of connected domains in the group category is larger.
The acquisition method of the group category comprises the following steps:
firstly, edge information of the cleaning image is obtained with a watershed algorithm, and connected-domain detection then yields several connected domains; each connected domain is taken as a node, the Euclidean distance between the centroids of any two connected domains is taken as the corresponding edge value, and a graph structure is constructed from the nodes, edges and edge values;
then, the nodes of the graph structure are clustered with the Girvan-Newman algorithm, and the clustering results are marked as group categories;
it should be noted that, any group of categories includes a plurality of connected domains;
as shown in fig. 2, in the printed area of the dinner plate, the similarity between the connected areas corresponding to adjacent single prints is small, but after a plurality of prints form a combination, the similarity between different areas is very large.
The specific acquisition method of the group category comprises the following steps:
firstly, when the graph structure is clustered with the Girvan-Newman algorithm, edges between nodes are repeatedly deleted according to the edge betweenness of all edges, giving a clustering of the nodes; each cluster is marked as a node category, and any node category contains several nodes;
then, each iteration of the Girvan-Newman algorithm changes the number of node categories. Whenever the number changes, the adjacency matrix of the graph structure corresponding to each node category is obtained, and the node categories are grouped according to the similarity of their adjacency matrices: all categories whose pairwise similarity exceeds a preset similarity threshold are placed in one group category, ensuring that the similarity of any two categories within the same group category exceeds the threshold; after all node categories are assigned, several group categories are obtained.
It should be noted that the similarity of two adjacency matrices is obtained as the average ratio of the elements at the same positions of the two matrices;
it should be noted that the similarity threshold is empirically preset to 0.7 and can be adjusted as the situation requires;
whenever the number of categories changes, the corresponding grouping is obtained, and each grouping contains several group categories. The adjacency-matrix similarity of every pair of categories in each group category is computed, and their mean is taken as the similarity of the group category; the mean over all group categories is then taken as the similarity of the grouping, and the grouping with the largest similarity is taken as the final group-category configuration.
The fewer the number of node categories in the group category, the more likely it is a stained area.
And (2) obtaining a rotation invariant chain code corresponding to each group category by using chain code coding, and obtaining each group category by combining adjacent frame changes to obtain the stain probability of each group category.
During ultrasonic cleaning, an automatic rotating or swinging device is usually used so that the surface is cleaned uniformly; the position of the cleaned object therefore changes continuously in the solution, producing a uniform, high-intensity cleaning effect over the whole surface and improving cleaning efficiency and quality.
Therefore, when texture and stains are distinguished through changes between adjacent frames, the mutually matched regions between the previous and current cleaning images must be computed. Since changes in the article's position change the texture regions, the matched regions can, to reduce error, be obtained by constructing a rotation-invariant chain code for each group category; a region whose matching degree decreases is then, with high probability, a stain position, because stains change during cleaning.
It should be noted that a common way to construct a rotation-invariant chain code is to first determine a starting point and then build the code from the relative positions of the other points; the choice of starting point therefore determines whether the chain code can be constructed consistently across different frames.
It should be noted that the chain-code starting point must satisfy two requirements: it should remain salient under different viewing angles, so that it can be located easily in different frames, and it should lie close to the image centre, where the camera is least likely to be out of focus and the point is least affected.
Windows of increasing size, namely 3×3, 5×5, …, 11×11, are adopted; Gaussian filtering of the cleaning image with each window size yields the corresponding filtered images, and the comparison extremum of the extremum points of each filtered image is acquired from the extrema of the pixel points, where the comparison extremum of a point is its extremum compared against the secondary extremum;
the extrema of all extremum points within a window of the same size are sorted from largest to smallest, and the second-largest is marked as the secondary extremum;
the comparison extremum s1 of any extremum point is acquired as:
[Formula not reproduced in the source (image placeholder); s1 is computed from m, n and max() as defined below.]
where m represents the extremum of the extremum point in the window of the corresponding size, n represents the secondary extremum in that window, and max() returns the maximum of the values in brackets;
the extremum is obtained from the pixel values: within a window, the larger of the maximum-based and minimum-based extrema is taken as the extremum of the corresponding point. For example, if the extremum calculated from the maximum value is larger than the extremum calculated from the minimum value, the extremum corresponding to the maximum is taken as the extremum of the window, and the pixel point corresponding to the maximum is taken as the extremum point of the region.
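The multi-scale extremum search can be sketched as follows. Gaussian filtering is omitted in this pure-stdlib sketch; a plain sliding window stands in for one filtered scale, which is enough to show how extremum points and the secondary extremum at a scale are collected.

```python
def window_extrema(img, size):
    """Points that are the maximum or minimum of their size x size
    neighbourhood (clipped at the borders) in a 2-D list of values."""
    r = size // 2
    h, w = len(img), len(img[0])
    found = []
    for y in range(h):
        for x in range(w):
            win = [img[j][i]
                   for j in range(max(0, y - r), min(h, y + r + 1))
                   for i in range(max(0, x - r), min(w, x + r + 1))]
            v = img[y][x]
            if v == max(win):
                found.append((y, x, v, "max"))
            elif v == min(win):
                found.append((y, x, v, "min"))
    return found

def secondary_extremum(extrema):
    """Second-largest extremum value at one scale (the 'secondary
    extremum' used when forming the comparison extremum)."""
    vals = sorted((v for _, _, v, _ in extrema), reverse=True)
    return vals[1] if len(vals) > 1 else vals[0]
```

In the full pipeline, each window size would first Gaussian-filter the image and the comparison extremum would then be formed from each point's extremum and the scale's secondary extremum.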
A rotation invariant chain code module: retention points and their corresponding retention degrees are obtained from the comparison extrema of all extremum points and the positional relation between the extremum points and the image center point; the final starting point is obtained from the correspondence between the retention degrees in adjacent-frame cleaning images; and the rotation-invariant chain code corresponding to each group category is obtained from the final starting point.
Firstly, the 10 extremum points with the largest comparison extremum among all extremum points are retained and marked as retention points, and the centrality of each retention point is acquired, which reflects the distance between the retention point and the center point of the cleaning image; the center point of the cleaning image is marked as the image center point;
the centrality s2 of an arbitrary retention point is acquired as follows:
Figure SMS_7
wherein d represents the Euclidean distance between the retention point and the image center point, dmax represents the maximum Euclidean distance from any retention point to the image center point, and e represents the natural constant;
in this acquisition method, d is normalized by dmax to prevent the obtained value of the centrality s2 from being too small.
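The centrality formula itself is only an image in the source; a plausible form consistent with the stated variables (d, dmax, the natural constant e) and the normalization note above is s2 = exp(-d / dmax). The following sketch is written under that assumption, not as the patent's definitive formula:

```python
import numpy as np

def centrality(points, center):
    """Assumed form s2 = exp(-d / dmax): the closer a retention point lies to
    the image center, the larger its centrality, with d normalized by the
    maximum distance dmax over all retention points."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts - np.asarray(center, dtype=float), axis=1)
    d_max = d.max()
    return np.exp(-d / d_max)
```

With this form, a retention point at the image center gets centrality 1 and the farthest retention point gets exp(-1).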
Then, the product of the comparison extremum s1 and the centrality s2 of each retention point is marked as the retention degree of that retention point, and the pixel points whose retention degree is larger than a preset retention-degree threshold are marked as candidate starting points;
the retention-degree threshold is empirically preset to 0.7 and may be adjusted according to circumstances.
In addition, when the cleaning process of the ultrasonic cleaner is monitored in real time, cleaning images of consecutive frames are obtained. In this embodiment, the most recently acquired cleaning image is recorded as the current frame cleaning image, and the cleaning image immediately preceding it is recorded as the previous frame cleaning image. The retention degrees of all retention points in the current frame and previous frame cleaning images are acquired by the above retention-degree acquisition method, and all candidate starting points in both images are obtained from these retention degrees;
finally, the candidate starting points in the current frame cleaning image whose retention degree equals that of a candidate starting point in the previous frame cleaning image are obtained and recorded as equivalent starting points, and the equivalent starting point with the largest retention degree among all equivalent starting points in the current frame cleaning image is marked as the final starting point;
The connected domain where the final starting point is located is marked as the starting connected domain. Its center point is obtained, and Dijkstra's algorithm is used to obtain the shortest path traversing the center points of all connected domains in the group category containing the starting connected domain. The chain code of the starting connected domain is then obtained: starting from the final starting point and proceeding counterclockwise, the chain code corresponding to the edge of the starting connected domain is acquired. For every other connected domain in the group category, any pixel point on its edge is taken as a starting point to obtain a corresponding chain code, so each connected domain corresponds to a plurality of chain codes. Each chain code sequence is converted into a numerical value according to the order of its code values and recorded as the chain code value, and the chain code with the minimum chain code value among all chain code values of a connected domain is taken as the chain code of that connected domain. The chain codes of all connected domains are connected end to end in the order of the shortest path, and the chain code obtained after connection is marked as the rotation-invariant chain code corresponding to the group category.
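The per-domain normalization in the step above (taking, over all candidate start pixels on a closed boundary, the chain code with the minimum chain code value) amounts to picking the smallest cyclic rotation of the boundary code. A minimal sketch of that normalization:

```python
def min_rotation_chain_code(code):
    """Among all cyclic rotations of a closed-boundary chain code (each
    rotation corresponding to a different start pixel), return the one whose
    digit sequence forms the smallest value."""
    n = len(code)
    rotations = [code[i:] + code[:i] for i in range(n)]
    # chain-code digits are 0..7 and all rotations have equal length, so
    # lexicographic comparison equals comparing the concatenated numbers
    return min(rotations)
```

Because every start pixel yields a rotation of the same cyclic sequence, this makes the per-domain code independent of where tracing begins.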
Stain probability module: the stain probability of the region corresponding to each group category is obtained according to the cosine similarity between the rotation-invariant chain codes of matched group categories in adjacent-frame cleaning images.
Through this calculation, each group category obtains a rotation-invariant chain code, and corresponding regions can be obtained by matching the chain codes in adjacent frames: a region with a high matching rate is most likely the object's own texture, while a region with a low matching rate is most likely a stain region.
The specific matching method is as follows: the group categories in the current frame and previous frame cleaning images are matched by the Hungarian algorithm. The rotation-invariant chain codes of all group categories in the current frame cleaning image are taken as left nodes, and those in the previous frame cleaning image as right nodes; the optimal matching relation between the two sides is acquired and taken as the edges between the nodes, with the edge value of each edge being the cosine similarity between the corresponding rotation-invariant chain codes.
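Under the assumption that each group category's chain code is encoded as a fixed-length numeric vector (the source does not say how codes of different lengths are compared), the bipartite matching on cosine similarity can be sketched with `scipy.optimize.linear_sum_assignment`, which implements the Hungarian-style optimal assignment:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_groups(codes_cur, codes_prev):
    """Match group categories across frames.  Cosine similarity between the
    chain-code vectors forms the edge weights; minimizing (1 - similarity)
    maximizes total similarity over the assignment."""
    A = np.asarray(codes_cur, dtype=float)
    B = np.asarray(codes_prev, dtype=float)
    # cosine similarity matrix between every current/previous pair
    sim = (A @ B.T) / (np.linalg.norm(A, axis=1)[:, None]
                       * np.linalg.norm(B, axis=1)[None, :])
    row, col = linear_sum_assignment(1.0 - sim)
    return list(zip(row, col)), sim
```

The returned pairs give, for each current-frame group category, its matched previous-frame category, and `sim[r, c]` is the cosine similarity s used in the stain-probability formula.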
After the optimal matching relation between the group categories of the current frame cleaning image and those of the previous frame cleaning image is obtained, the stain probability of the region corresponding to any group category of the current frame cleaning image is acquired as follows:
Figure SMS_8
wherein s represents the cosine similarity corresponding to the optimal matching relation between the group category of the current frame cleaning image and the matched group category of the previous frame cleaning image; N1 represents the number of connected domains contained in the group category of the current frame cleaning image; N2 represents the number of connected domains contained in the corresponding group category of the previous frame cleaning image under the optimal matching relation; N represents the total number of connected domains contained in all group categories of the current frame and previous frame cleaning images; min() denotes taking the minimum value; and exp() denotes the exponential function with the natural constant as base.
It should be noted that the cosine similarity is inversely proportional to the stain probability: the smaller the cosine similarity, the larger the stain probability, so the inverse relation is represented by subtracting the cosine similarity from 1. N1 and N2 are the numbers of connected domains contained in the two matched group categories; the fewer they are, the larger the stain probability. N, the total number of connected domains of all group categories in the current frame and previous frame cleaning images, normalizes the numerator. In addition, since the denominator covers two frames, a constant 2 is introduced to prevent the actual value from being too small and causing data distortion. The larger the ratio
Figure SMS_9
the smaller the probability that the region where the corresponding group category is located is a stain.
The cleaning machine adjusting module: and carrying out feedback adjustment on the ultrasonic cleaner according to the stain probability.
The ultrasonic cleaner is regulated according to the magnitude of the stain probability of the region corresponding to each group category; the specific regulation method is as follows:
when the stain probability is larger than a preset stain-probability threshold, the ultrasonic cleaner continues cleaning; once the stain probability is detected to be smaller than the preset stain-probability threshold, the ultrasonic cleaner is controlled to stop cleaning and a prompt is issued to take out the cleaned tableware.
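The feedback rule above reduces to a per-cycle threshold decision over the regions' stain probabilities. A minimal sketch, with an illustrative threshold value (the source does not preset a numeric stain-probability threshold):

```python
def feedback_control(stain_probs, threshold=0.5):
    """Per-cycle decision: keep cleaning while any region's stain probability
    exceeds the threshold; otherwise stop and prompt removal of the tableware.
    The default threshold of 0.5 is illustrative only."""
    if max(stain_probs) > threshold:
        return "continue_cleaning"
    return "stop_and_prompt_removal"
```

In the real system this decision would run once per acquired frame, driving the cleaner's start/stop control.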
It should be noted that the exp (-x) model used in this embodiment is only used to represent that the result of the output of the negative correlation and constraint model is in
Figure SMS_10
In the section, other models with the same purpose can be replaced in the implementation, and the embodiment only uses exp (-x) model as an example and does not limit the description specifically, wherein x refers to the input of the model.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (8)

1. An ultrasonic cleaner feedback governing system based on visual detection, characterized in that the system comprises the following modules:
an image data acquisition module: acquiring an image of the interior of a cleaning bin of the ultrasonic cleaner, and marking the image as a cleaning image;
and a graph structure construction analysis module: taking the connected domain in the cleaning image as a node, taking the Euclidean distance between the centroids of any two connected domains as the edge value of the corresponding edge, and constructing a graph structure according to the node, the edge and the edge value; obtaining a plurality of clustering results of all nodes in the graph structure, marking the clustering results as node categories, obtaining similarity between adjacent matrixes of the graph structure corresponding to any node category, and dividing the node categories according to the similarity to obtain a plurality of group categories;
obtaining the comparison extremum of the extremum points in the cleaning image according to the extrema of the extremum points in the cleaning image under windows of different sizes;
a rotation invariant chain code module: obtaining retention points according to the magnitude of the comparison extremum, obtaining the retention degree of each retention point according to its distance from the center point of the cleaning image, obtaining a final starting point according to the magnitudes of the retention degrees in adjacent-frame cleaning images, obtaining the chain codes of all connected domains in the group category where the final starting point is located, and marking the chain code obtained after connecting the chain codes of all connected domains in the group category as the rotation invariant chain code;
stain probability module: obtaining the stain probability according to the similarity between the rotating invariant chain codes;
the cleaning machine adjusting module: and carrying out feedback adjustment on the ultrasonic cleaner according to the size of the stain probability.
2. The ultrasonic cleaner feedback adjustment system based on visual inspection of claim 1, wherein the group classification is obtained by:
firstly, clustering the graph structure by using the Girvan-Newman algorithm to obtain clustering results of a plurality of nodes, and marking the clustering results of the nodes as node categories, wherein any node category contains a plurality of nodes;
and then, carrying out iterative clustering on the graph structure by using the Girvan-Newman algorithm; when the number of node categories changes, acquiring the adjacency matrix of the graph structure corresponding to each node category, acquiring the average ratio of all elements at the same positions in two adjacency matrices, and marking the average ratio as the similarity of the adjacency matrices; grouping the node categories according to the similarity of the adjacency matrices, dividing all categories with similarity larger than a preset similarity threshold into one group category while ensuring that the similarity of any two categories in the same group category is larger than the preset similarity threshold, and obtaining a plurality of group categories after all the node categories are divided.
3. The ultrasonic cleaner feedback adjustment system based on visual inspection of claim 1, wherein the comparison extremum is obtained by:
firstly, performing Gaussian filtering on a cleaning image by utilizing a plurality of windows with preset sizes to obtain a plurality of corresponding filtering images, and obtaining extreme points in the filtering images and extreme value properties corresponding to the extreme points;
then, the extrema of all extremum points in a window of the same size are arranged in descending order, and the second-largest extremum in the window of the corresponding size is marked as the secondary extremum;
finally, comparing extremum of any extremum point
Figure QLYQS_1
The acquisition method of (1) comprises the following steps:
Figure QLYQS_2
where m represents the extremum of the extremum point in the window of the corresponding size, n represents the secondary extremum in the window of the corresponding size, and max() denotes taking the maximum of the values in the brackets.
4. The ultrasonic cleaner feedback adjustment system based on visual inspection of claim 1, wherein the retention is obtained by:
firstly, reserving 10 extreme points with the greatest comparison extremum in all the extreme points, and marking the 10 extreme points as reserved points;
then, marking the center point of the cleaning image as the image center point, and acquiring the Euclidean distance between each retention point and the image center point;
finally, the method for acquiring the centrality s2 of any retention point is as follows:
Figure QLYQS_3
where d represents the Euclidean distance between the retention point and the image center point, dmax represents the maximum Euclidean distance between all retention points and the image center point, and e represents a natural constant.
5. The ultrasonic cleaner feedback adjustment system based on visual inspection of claim 1, wherein the final starting point is obtained by:
firstly, marking the latest acquired cleaning image as a current frame cleaning image, marking the previous frame cleaning image of the current frame as a previous frame cleaning image, and acquiring all retention points in the current frame cleaning image and the previous frame cleaning image;
then, the product of the comparison extremum s1 and the centrality s2 of the reserved points is marked as reserved degrees of the reserved points, pixel points with reserved degrees being larger than a preset reserved degree threshold value are marked as candidate starting points, and all the candidate starting points in the current frame cleaning image and the previous frame cleaning image are obtained;
and finally, acquiring the candidate starting points in the current frame cleaning image whose retention degree equals that of a candidate starting point in the previous frame cleaning image, recording them as equivalent starting points, and marking the equivalent starting point with the largest retention degree among all equivalent starting points in the current frame cleaning image as the final starting point.
6. The ultrasonic cleaner feedback adjustment system based on visual inspection according to claim 1, wherein the rotation invariant chain code is obtained by the following method:
step (1), marking a connected domain where a final starting point is located as a starting connected domain, acquiring a center point of the starting connected domain through Dijkstra algorithm, traversing the center points of all connected domains in a group category where the starting connected domain is located, and acquiring a shortest path;
step (2), the chain code of the initial connected domain is: starting from the final starting point, obtaining a chain code corresponding to the edge of the initial connected domain by taking the anticlockwise direction as the initial chain code;
step (3), taking any pixel point on the edge of each connected domain other than the initial connected domain in the group category as a starting point to obtain a corresponding chain code, so that each connected domain corresponds to a plurality of chain codes; converting each chain code sequence into a numerical value according to the order of the chain code coding values in the sequence and recording it as the chain code value; and taking the chain code corresponding to the minimum chain code value among all chain code values of any connected domain as the chain code of that connected domain;
and (4) connecting the chain codes of all the connected domains in any group class end to end according to the sequence of the shortest path, and marking the chain codes obtained after connection as rotation-invariant chain codes corresponding to the group class.
7. The ultrasonic cleaner feedback adjustment system based on visual inspection of claim 1, wherein the stain probability is obtained by:
firstly, matching group categories in a current frame cleaning image and a previous frame cleaning image through a Hungary algorithm, taking rotation invariant chain codes of all the group categories in the current frame cleaning image as left side nodes, taking rotation invariant chain codes of all the group categories in the previous frame cleaning image as right side nodes, acquiring an optimal matching relation between the left side nodes and the right side nodes, taking the optimal matching relation as edges between the nodes, and taking edge values of the edges as cosine similarity between corresponding rotation invariant chain codes;
then, the method for acquiring the stain probability P of the corresponding region of any group of categories of the cleaning image of the current frame comprises the following steps:
Figure QLYQS_4
wherein s represents the cosine similarity corresponding to the optimal matching relation between the group category of the current frame cleaning image and the matched group category of the previous frame cleaning image; N1 represents the number of connected domains contained in the group category of the current frame cleaning image; N2 represents the number of connected domains contained in the corresponding group category of the previous frame cleaning image under the optimal matching relation; N represents the total number of connected domains contained in all group categories of the current frame and previous frame cleaning images; min() denotes taking the minimum value; and exp() denotes the exponential function with the natural constant as base.
8. The feedback adjustment system of the ultrasonic cleaner based on visual inspection according to claim 1, wherein the feedback adjustment of the ultrasonic cleaner according to the size of the stain probability comprises the following specific steps:
when the stain probability is larger than a preset stain-probability threshold, the ultrasonic cleaner continues cleaning; once the stain probability is detected to be smaller than the preset stain-probability threshold, the ultrasonic cleaner is controlled to stop cleaning and a prompt is issued to take out the cleaned tableware.
CN202310685123.2A 2023-06-12 2023-06-12 Ultrasonic cleaner feedback governing system based on visual detection Active CN116433990B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310685123.2A CN116433990B (en) 2023-06-12 2023-06-12 Ultrasonic cleaner feedback governing system based on visual detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310685123.2A CN116433990B (en) 2023-06-12 2023-06-12 Ultrasonic cleaner feedback governing system based on visual detection

Publications (2)

Publication Number Publication Date
CN116433990A true CN116433990A (en) 2023-07-14
CN116433990B CN116433990B (en) 2023-08-15

Family

ID=87080024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310685123.2A Active CN116433990B (en) 2023-06-12 2023-06-12 Ultrasonic cleaner feedback governing system based on visual detection

Country Status (1)

Country Link
CN (1) CN116433990B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090257655A1 (en) * 2008-04-11 2009-10-15 Recognition Robotics System and method for visual recognition
US20110110598A1 (en) * 2009-11-11 2011-05-12 Industrial Technology Research Institute Image processing method and system using regionalized architecture
US20130195361A1 (en) * 2012-01-17 2013-08-01 Alibaba Group Holding Limited Image index generation based on similarities of image features
CN109214428A (en) * 2018-08-13 2019-01-15 平安科技(深圳)有限公司 Image partition method, device, computer equipment and computer storage medium
WO2020119053A1 (en) * 2018-12-11 2020-06-18 平安科技(深圳)有限公司 Picture clustering method and apparatus, storage medium and terminal device
CN113988148A (en) * 2020-07-10 2022-01-28 华为技术有限公司 Data clustering method, system, computer equipment and storage medium
CN114299406A (en) * 2022-03-07 2022-04-08 山东鹰联光电科技股份有限公司 Optical fiber cable line inspection method based on unmanned aerial vehicle aerial photography
CN114965483A (en) * 2022-05-23 2022-08-30 中国空气动力研究与发展中心超高速空气动力研究所 Quantitative evaluation method for various complex defects of spacecraft
CN115055964A (en) * 2022-08-18 2022-09-16 山东鑫亚工业股份有限公司 Intelligent assembling method and system based on fuel injection pump
CN115272339A (en) * 2022-09-29 2022-11-01 江苏浚荣升新材料科技有限公司 Metal mold dirt cleaning method
CN116229276A (en) * 2023-05-05 2023-06-06 生态环境部华南环境科学研究所(生态环境部生态环境应急研究所) River entering pollution discharge detection method based on computer vision


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Fang Xinglin et al., "An image matching algorithm based on chain code vectors", Journal of Qiqihar University (Natural Science Edition), No. 06 *

Also Published As

Publication number Publication date
CN116433990B (en) 2023-08-15

Similar Documents

Publication Publication Date Title
CN108388896B (en) License plate identification method based on dynamic time sequence convolution neural network
CN115082419B (en) Blow-molded luggage production defect detection method
CN108470354B (en) Video target tracking method and device and implementation device
CN116205919B (en) Hardware part production quality detection method and system based on artificial intelligence
CN112819772B (en) High-precision rapid pattern detection and recognition method
CN116664559B (en) Machine vision-based memory bank damage rapid detection method
CN109035219B (en) FICS golden finger defect detection system and detection method based on BP neural network
JP2021528784A (en) Sky filter method for panoramic images and mobile terminals
CN113449606B (en) Target object identification method and device, computer equipment and storage medium
CN113012157B (en) Visual detection method and system for equipment defects
CN116777907A (en) Sheet metal part quality detection method
CN107578011A (en) The decision method and device of key frame of video
CN113706566B (en) Edge detection-based perfuming and spraying performance detection method
CN107704867A (en) Based on the image characteristic point error hiding elimination method for weighing the factor in a kind of vision positioning
CN108802051B (en) System and method for detecting bubble and crease defects of linear circuit of flexible IC substrate
CN114155230A (en) Quality classification method and system for injection molding PC board with smooth surface
CN113393426A (en) Method for detecting surface defects of rolled steel plate
CN113610799B (en) Artificial intelligence-based photovoltaic cell panel rainbow line detection method, device and equipment
CN113128518B (en) Sift mismatch detection method based on twin convolution network and feature mixing
CN116433990B (en) Ultrasonic cleaner feedback governing system based on visual detection
CN114067147A (en) Ship target confirmation method based on local shape matching
JP7403562B2 (en) Method for generating slap/finger foreground masks
CN109190505A (en) The image-recognizing method that view-based access control model understands
CN113409353A (en) Motion foreground detection method and device, terminal equipment and storage medium
CN113406111A (en) Defect detection method and device based on structural light field video stream

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant