CN116433653A - Product production management method and system based on machine vision - Google Patents

Product production management method and system based on machine vision

Info

Publication number
CN116433653A
CN116433653A (application number CN202310555748.7A)
Authority
CN
China
Prior art keywords
product production
production
information
product
pixel
Prior art date
Legal status
Withdrawn
Application number
CN202310555748.7A
Other languages
Chinese (zh)
Inventor
吴海东
Current Assignee
Kunming Haiqiu East Information Technology Co ltd
Original Assignee
Kunming Haiqiu East Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Kunming Haiqiu East Information Technology Co ltd filed Critical Kunming Haiqiu East Information Technology Co ltd
Priority to CN202310555748.7A priority Critical patent/CN116433653A/en
Publication of CN116433653A publication Critical patent/CN116433653A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/04Manufacturing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Business, Economics & Management (AREA)
  • Manufacturing & Machinery (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the technical field of artificial intelligence and provides a product production management method and system based on machine vision. The method comprises the following steps: obtaining product production monitoring video information through a visual detection device, and performing video compression and key frame extraction on it to obtain product production key image information; performing super-pixel preprocessing based on the key image information to obtain a product production super-pixel block region set; performing semantic segmentation on the super-pixel block region set to obtain a product production feature semantic segmentation result; performing feature analysis based on the segmentation result to obtain product production application feature information; and performing quality analysis on the application feature information to obtain production quality analysis information, according to which product production is regulated. The method reduces the amount of visual detection and recognition data, improves product quality detection efficiency, and increases the accuracy of product recognition and detection.

Description

Product production management method and system based on machine vision
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a product production management method and system based on machine vision.
Background
With the rapid development of industrial intelligence, machine vision technology, which is convenient, accurate, fast and intelligent, is widely applied in many fields of industrial production and is increasingly valued as a modern detection means. Rapid quality detection of production products through machine vision improves the production efficiency of industrial products and pushes industrial production toward automation. However, the prior art suffers from low recognition and detection accuracy and low detection efficiency caused by the large volume of product visual detection data.
Disclosure of Invention
Accordingly, in view of the above problems, it is necessary to provide a machine-vision-based product production management method and system that can reduce the amount of visual detection and recognition data, improve product quality detection efficiency, and increase the accuracy of product recognition and detection.
A machine vision based product production management method, the method comprising: the whole process of the production process of the product is monitored by a visual detection device, and product production monitoring video information is obtained; performing video compression and key frame extraction on the product production monitoring video information to obtain product production key image information; performing super-pixel preprocessing on the basis of the product production key image information to obtain a product production super-pixel block region set; carrying out semantic segmentation on the product production super-pixel block region set to obtain a product production characteristic semantic segmentation result; performing feature analysis based on the product production feature semantic segmentation result to obtain product production application feature information; and carrying out quality analysis based on the product production application characteristic information to obtain production quality analysis information, and carrying out product production regulation and control according to the production quality analysis information.
A machine vision based product production management system, the system comprising: the whole-process monitoring module is used for monitoring the whole process of the production process of the product through the visual detection device to obtain product production monitoring video information; the key frame extraction module is used for carrying out video compression and key frame extraction on the product production monitoring video information to obtain product production key image information; the super-pixel preprocessing module is used for performing super-pixel preprocessing on the basis of the product production key image information to obtain a product production super-pixel block region set; the semantic segmentation module is used for carrying out semantic segmentation on the product production super-pixel block region set to obtain a product production characteristic semantic segmentation result; the feature analysis module is used for carrying out feature analysis based on the product production feature semantic segmentation result to obtain product production application feature information; and the product production regulation and control module is used for carrying out quality analysis based on the product production application characteristic information, obtaining production quality analysis information and carrying out product production regulation and control according to the production quality analysis information.
A computer device comprising a memory storing a computer program and a processor which when executing the computer program performs the steps of:
the whole process of the production process of the product is monitored by a visual detection device, and product production monitoring video information is obtained;
performing video compression and key frame extraction on the product production monitoring video information to obtain product production key image information;
performing super-pixel preprocessing on the basis of the product production key image information to obtain a product production super-pixel block region set;
carrying out semantic segmentation on the product production super-pixel block region set to obtain a product production characteristic semantic segmentation result;
performing feature analysis based on the product production feature semantic segmentation result to obtain product production application feature information;
and carrying out quality analysis based on the product production application characteristic information to obtain production quality analysis information, and carrying out product production regulation and control according to the production quality analysis information.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
the whole process of the production process of the product is monitored by a visual detection device, and product production monitoring video information is obtained;
performing video compression and key frame extraction on the product production monitoring video information to obtain product production key image information;
performing super-pixel preprocessing on the basis of the product production key image information to obtain a product production super-pixel block region set;
carrying out semantic segmentation on the product production super-pixel block region set to obtain a product production characteristic semantic segmentation result;
performing feature analysis based on the product production feature semantic segmentation result to obtain product production application feature information;
and carrying out quality analysis based on the product production application characteristic information to obtain production quality analysis information, and carrying out product production regulation and control according to the production quality analysis information.
The machine-vision-based product production management method and system solve the prior-art problems of low recognition and detection accuracy and low detection efficiency caused by the large volume of product visual detection data; by intelligently compressing and segmenting the visual detection video, they reduce the amount of visual detection and recognition data, improve product quality detection efficiency, and increase product recognition and detection accuracy.
The foregoing is merely an overview of the technical solutions of the present application. To make the technical means of the present application clearer, so that it can be implemented according to the content of the specification, and to make the above and other objects, features and advantages of the present application more apparent, the detailed description of the present application follows.
Drawings
FIG. 1 is a flow chart of a machine vision based product production management method in one embodiment;
FIG. 2 is a schematic flow chart of obtaining a set of super pixel block areas for product production in a machine vision based product production management method according to an embodiment;
FIG. 3 is a block diagram of a machine vision based product production management system in one embodiment;
fig. 4 is an internal structural diagram of a computer device in one embodiment.
Reference numerals illustrate: the system comprises a whole production process monitoring module 11, a key frame extraction module 12, a super-pixel preprocessing module 13, a semantic segmentation module 14, a feature analysis module 15 and a product production regulation and control module 16.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
As shown in fig. 1, the present application provides a machine vision-based product production management method, which includes:
step S100: the whole process of the production process of the product is monitored by a visual detection device, and product production monitoring video information is obtained;
step S200: performing video compression and key frame extraction on the product production monitoring video information to obtain product production key image information;
specifically, with rapid development of industrial intelligence, machine vision technology with advantages of convenience, accuracy, rapidness, intelligence and the like is widely applied to various fields of industrial production, and is increasingly receiving attention as a modern detection means. The production product is rapidly detected in quality through machine vision, so that the production efficiency of the industrial product is improved, and the industrial production is gradually developed to an automatic direction.
First, the entire product production process is monitored by a visual detection device, namely high-precision industrial imaging equipment that can operate continuously for long periods, adapts to different application environments, and offers high precision, high speed and high efficiency; this device collects the product production monitoring video information. To reduce the image recognition and detection data volume, a video compression algorithm is applied to the collected product production monitoring video information, removing spatially redundant image information and thereby reducing the duplicate data in consecutive frames. To reduce the number of frames that must be recognized and detected, key frames are then extracted from the compressed video. Key frames are full-frame intra-coded frames; they carry a relatively large amount of data, and a complete image can be reconstructed from a key frame alone during decoding. This yields the product production key image information and improves the efficiency of subsequent product quality recognition and detection.
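As an illustration of the key frame extraction step, the sketch below selects key frames by thresholding the mean absolute difference between a frame and the last kept frame. The patent does not specify a particular compression or extraction algorithm, so the criterion and the threshold value here are assumptions:

```python
import numpy as np

def extract_key_frames(frames, threshold=10.0):
    """Select key frame indices: keep the first frame, then any frame whose
    mean absolute difference from the last kept frame exceeds `threshold`.
    `frames` is an iterable of equal-shaped grayscale arrays."""
    key_frames, last = [], None
    for idx, frame in enumerate(frames):
        f = np.asarray(frame, dtype=np.float32)
        if last is None or np.mean(np.abs(f - last)) > threshold:
            key_frames.append(idx)
            last = f
    return key_frames

# Synthetic "video": five identical frames, then an abrupt scene change.
frames = [np.zeros((32, 32))] * 5 + [np.full((32, 32), 100.0)] * 5
print(extract_key_frames(frames))  # → [0, 5]
```

Redundant consecutive frames are skipped, so only the frames carrying new image content survive for downstream recognition.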
Step S300: performing super-pixel preprocessing on the basis of the product production key image information to obtain a product production super-pixel block region set;
in one embodiment, as shown in fig. 2, the obtaining the set of product-producing superpixel block areas, step S300 of the present application further includes:
step S310: initializing a product image seed point, and reselecting a target seed point in a preset neighborhood of the product image seed point;
step S320: class labels are distributed to the neighborhood pixel point set of the target seed point, and a pixel point seed label set is obtained;
step S330: calculating the distance between each pixel point in the neighborhood pixel point set and the target seed point to obtain a pixel distance information set;
step S340: and screening the minimum distance seed points in the pixel distance information set as pixel point clustering centers, performing iterative optimization to preset times, and performing matching segmentation on the pixel point clustering centers and the pixel point seed label set to obtain the super-pixel block region set for product production.
In one embodiment, the obtaining of the pixel distance information set in step S330 further includes:
step S331: acquiring seed color coordinate information [ L ] of the target seed point i ,a i ,b i ]And seed space coordinate information [ x ] i ,y i ]And pixel color coordinate information [ L ] of each pixel point j ,a j ,b j ]And pixel space coordinate information [ x ] j ,y j ];
Step S332: calculating color distance information d of the seed color coordinate information and the pixel color coordinate information c And spatial distance information d of the seed spatial coordinate information and the pixel spatial coordinate information s
Step S333: distance measurement function based on the color distance information d c And the spatial distance information d s And performing fusion calculation to obtain the pixel distance information set.
In one embodiment, the distance metric function is specifically:
Figure BDA0004232935530000061
Figure BDA0004232935530000062
Figure BDA0004232935530000063
wherein d in the formula c Representing color distance information, d s Representing spatial distance information, S representing the adjacent super-pixel distance, and m representing the weight of the color space.
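The fused distance metric can be sketched as follows. The published text does not preserve the exact normalization of the fusion formula, so the standard SLIC form D = sqrt((d_c/m)^2 + (d_s/S)^2) is assumed here:

```python
import math

def slic_distance(seed, pixel, S, m):
    """Fused SLIC-style distance between a cluster seed and a pixel, each
    given as (L, a, b, x, y). S is the grid interval between adjacent seeds;
    m weights color similarity against spatial proximity."""
    Li, ai, bi, xi, yi = seed
    Lj, aj, bj, xj, yj = pixel
    d_c = math.sqrt((Lj - Li)**2 + (aj - ai)**2 + (bj - bi)**2)  # color distance
    d_s = math.sqrt((xj - xi)**2 + (yj - yi)**2)                 # spatial distance
    return math.sqrt((d_c / m)**2 + (d_s / S)**2)

# Identical color, 3-4-5 spatial offset, S=10: D = d_s / S = 0.5
print(slic_distance((50, 0, 0, 0, 0), (50, 0, 0, 3, 4), S=10, m=10))  # → 0.5
```

Larger m makes spatial proximity dominate (more compact, regular super-pixels); smaller m lets color similarity dominate (super-pixels hug image boundaries more tightly).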
Specifically, in order to better support the subsequent semantic segmentation task, super-pixel preprocessing is performed on the product production key image information: relying on the color information and spatial relationships of the image, the image is divided into super-pixel blocks whose number far exceeds the number of targets but is far smaller than the number of pixels. The preprocessing first initializes the product image seed points, i.e., distributes them uniformly across the image according to the set number of super-pixels; these seed points are also the clustering centers of the image. A target seed point is then reselected within a preset neighborhood of each product image seed point. The neighborhood size can be set freely, with 3×3 generally preferred. Reselection first computes the gradient values of all pixel points in the neighborhood and then moves the seed point to the position of minimum gradient within the neighborhood, which prevents seed points from falling on high-gradient contour boundaries and degrading the subsequent clustering.
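The gradient-minimum seed relocation described above can be sketched as follows; the 3×3 neighborhood matches the text, while the |∂y| + |∂x| gradient magnitude is an illustrative choice:

```python
import numpy as np

def relocate_seed(gray, y, x):
    """Move a seed point at (y, x) to the lowest-gradient pixel within its
    3x3 neighborhood, so seeds avoid falling on contour boundaries."""
    gy, gx = np.gradient(gray.astype(float))
    grad = np.abs(gy) + np.abs(gx)          # simple gradient magnitude
    y0, y1 = max(y - 1, 0), min(y + 2, gray.shape[0])
    x0, x1 = max(x - 1, 0), min(x + 2, gray.shape[1])
    window = grad[y0:y1, x0:x1]
    dy, dx = np.unravel_index(np.argmin(window), window.shape)
    return y0 + dy, x0 + dx

# A seed sitting on a vertical intensity edge moves off it.
gray = np.zeros((5, 5))
gray[:, 3:] = 100.0
print(relocate_seed(gray, 2, 2))  # moves into the flat region left of the edge
```

Real SLIC implementations typically use the same idea with a Sobel or central-difference gradient on the L channel.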
Then, class labels are assigned to the neighborhood pixel point set of each target seed point, i.e., every pixel point in the neighborhood around a target seed point is assigned that seed point as its clustering center, giving the corresponding pixel point seed label set. The distance between each pixel point in the neighborhood pixel point set and the target seed point is then calculated, specifically comprising the color distance and the spatial distance between the pixel point and the seed point. The seed color coordinate information [L_i, a_i, b_i] and seed spatial coordinate information [x_i, y_i] of the target seed point are acquired, together with the pixel color coordinate information [L_j, a_j, b_j] and pixel spatial coordinate information [x_j, y_j] of each pixel point. The color distance information between the seed color coordinates and the pixel color coordinates is

d_c = \sqrt{(L_j - L_i)^2 + (a_j - a_i)^2 + (b_j - b_i)^2}

and the spatial distance information between the seed spatial coordinates and the pixel spatial coordinates is

d_s = \sqrt{(x_j - x_i)^2 + (y_j - y_i)^2}

The distance metric function performs fusion calculation on the color distance information d_c and the spatial distance information d_s, i.e., it computes the integrated distance from the clustering center to the pixel points in the local region:

D = \sqrt{\left(\frac{d_c}{m}\right)^2 + \left(\frac{d_s}{S}\right)^2}

wherein d_c represents the color distance information and d_s represents the spatial distance information. S represents the interval between adjacent super-pixel seeds; the super-pixel size is determined by the manually set pre-division super-pixel count K and the total number N of pixels contained in the image, i.e., each super-pixel contains P = N/K pixels, and S = \sqrt{N/K}. m represents the weight of the color space, i.e., the relative weight of color similarity against spatial proximity in the image, and can be set according to the actual imaging application. This yields the pixel distance information set corresponding to each image pixel.
Since each pixel point is reached by several seed points, it has a distance to each of the surrounding seed points, so the seed point with the smallest distance in the pixel distance information set is selected as the pixel point clustering center. This is iteratively optimized for a preset number of times, i.e., iteration continues until the error converges, which can be understood as no clustering center changing any more; ten iterations are typical. Traversal matching and super-pixel segmentation of the pixel point clustering centers against the pixel point seed label set then yield the segmented product production super-pixel block region set. Preprocessing the image through super-pixel segmentation produces irregular pixel blocks composed of pixel points with similar characteristics, which removes insignificant information from subsequent processing of the image, greatly reduces the complexity of image processing, reduces the amount of calculation, and improves the operating efficiency of the algorithm.
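One assign-and-update clustering iteration of the kind described above can be sketched on a toy example (two seeds, four pixels). A real implementation would restrict each seed's search to a 2S × 2S window around it and iterate until the centers stop moving; the seed values and weights below are illustrative:

```python
import numpy as np

def assign_and_update(features, centers, S=4.0, m=10.0):
    """One clustering iteration: assign every pixel (rows of `features`,
    each [L, a, b, x, y]) to the nearest center under the fused distance,
    then recompute each center as the mean of its assigned pixels."""
    labels = np.empty(len(features), dtype=int)
    for i, f in enumerate(features):
        d_c = np.linalg.norm(centers[:, :3] - f[:3], axis=1)  # color distance
        d_s = np.linalg.norm(centers[:, 3:] - f[3:], axis=1)  # spatial distance
        labels[i] = np.argmin(np.sqrt((d_c / m) ** 2 + (d_s / S) ** 2))
    new_centers = np.array([features[labels == k].mean(axis=0)
                            for k in range(len(centers))])
    return labels, new_centers

# Two flat color regions side by side; seeds start slightly off-center.
feats = np.array([[10., 0, 0, 0, 0], [10., 0, 0, 1, 0],
                  [90., 0, 0, 6, 0], [90., 0, 0, 7, 0]])
seeds = np.array([[20., 0, 0, 1, 0], [80., 0, 0, 6, 0]])
labels, seeds = assign_and_update(feats, seeds)
print(labels)  # → [0 0 1 1]
```

After the update, each center sits at the mean color and position of its block, which is exactly the convergence criterion the text describes (centers no longer change).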
Step S400: carrying out semantic segmentation on the product production super-pixel block region set to obtain a product production characteristic semantic segmentation result;
in one embodiment, the step S400 of applying further includes:
step S410: acquiring a product production image data set, and performing image labeling segmentation on the product production image data set to obtain a product production image segmentation result set;
step S420: constructing a network encoder and a network decoder based on the full convolutional neural network structure;
step S430: carrying out semantic segmentation training on the network encoder and the network decoder by adopting the product production image segmentation result set to obtain a product semantic segmentation model;
step S440: and carrying out semantic segmentation on the product production super-pixel block region set based on the product semantic segmentation model, and outputting the product production characteristic semantic segmentation result.
Specifically, the product production super-pixel block region set is subjected to semantic segmentation, i.e., the visual image input is divided into different semantically interpretable categories. First, a product production image data set can be collected through big data technology. The data set is then annotated and segmented: different structural regions of each image are labeled and segmented according to product production structure labels, giving the product production image segmentation result set. A network encoder and a network decoder are constructed based on a fully convolutional neural network structure; both are contained in the deep learning network structure.
The product production image segmentation result set is used to train the network encoder and network decoder for semantic segmentation until the model accuracy reaches the standard, giving the corresponding product semantic segmentation model. Semantic segmentation is performed on the product production super-pixel block region set based on this model, and the product production feature semantic segmentation result, i.e., the semantic segmentation result of the product production image, is output. Processing the product production images with a high-precision semantic segmentation model improves image semantic segmentation accuracy and efficiency, reduces the complexity and information content of the monitored images, and further improves product quality detection processing efficiency.
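A toy fully convolutional encoder-decoder pipeline, reduced to average-pool downsampling, 1×1 convolutions and nearest-neighbor upsampling, illustrates the dense per-pixel shape flow of such a model. The layer choices, channel counts and random weights here are illustrative sketches, not the patent's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    """1x1 convolution: a per-pixel linear map over channels.
    x: (H, W, Cin), w: (Cin, Cout) -> (H, W, Cout)."""
    return x @ w

def downsample(x):
    """Encoder stage: 2x2 average pooling halves the spatial resolution."""
    h, w, c = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample(x):
    """Decoder stage: nearest-neighbor upsampling doubles the resolution."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def fcn_segment(image, n_classes=3):
    """Toy fully convolutional pipeline: encode, transform, decode, then
    arg-max per pixel to produce a dense class-label map."""
    w_enc = rng.standard_normal((image.shape[2], 8))
    w_dec = rng.standard_normal((8, n_classes))
    feat = np.maximum(conv1x1(downsample(image), w_enc), 0)   # encoder + ReLU
    scores = conv1x1(upsample(feat), w_dec)                   # decoder head
    return scores.argmax(axis=-1)                             # (H, W) label map

labels = fcn_segment(rng.standard_normal((16, 16, 3)))
print(labels.shape)  # → (16, 16)
```

The key property, preserved even in this sketch, is that the output label map has the same spatial size as the input, so every pixel of the product image receives a semantic class.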
Step S500: performing feature analysis based on the product production feature semantic segmentation result to obtain product production application feature information;
in one embodiment, the obtaining product production application feature information, step S500 of the present application further includes:
step S510: obtaining a production convolution feature set according to the product production application quality standard;
step S520: performing traversal convolution calculation on the product production feature semantic segmentation result according to the production convolution feature set to obtain a production convolution calculation result;
step S530: based on the production convolution calculation result, obtaining a production application feature set conforming to a preset convolution numerical range;
step S540: and fusing all the characteristics in the production application characteristic set to generate the product production application characteristic information.
Specifically, feature analysis is performed on the product production feature semantic segmentation result. A production convolution feature set is determined according to the product production application quality standard: convolution features focus on local features, i.e., the set standard features, and the convolution feature values of the local feature parts of the quality standard, including the structural quality standard, the color quality standard and the like, are determined in order to evaluate the matching degree of the image features. Traversal convolution calculation is then performed on the product production feature semantic segmentation result with the production convolution feature set, giving the production convolution calculation result, i.e., the result of matching against the application quality standard features. From this result, the production application feature set that falls within the preset convolution value range, i.e., the matching degree for each application quality standard feature, is obtained. All features in the production application feature set are fused to generate the product production application feature information, i.e., the overall matching degree of the image against the quality detection standard. Obtaining the production application features rapidly through traversal convolution calculation improves product quality detection efficiency.
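The traversal convolution matching can be sketched as a sliding-window correlation whose scores are filtered against a preset value range. The template, range and scoring rule below are illustrative assumptions:

```python
import numpy as np

def traverse_match(seg_map, template, lo, hi):
    """Slide a standard-feature template over the segmentation map and keep
    the top-left positions whose correlation score falls in [lo, hi]."""
    th, tw = template.shape
    H, W = seg_map.shape
    hits = []
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            score = float(np.sum(seg_map[y:y + th, x:x + tw] * template))
            if lo <= score <= hi:
                hits.append(((y, x), score))
    return hits

seg = np.zeros((4, 4))
seg[1:3, 1:3] = 1.0                                  # one 2x2 feature region
hits = traverse_match(seg, np.ones((2, 2)), lo=4, hi=4)
print(hits)  # → [((1, 1), 4.0)]
```

Fusing the per-template hit scores (e.g., averaging them) would then play the role of the overall matching degree described in the text.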
Step S600: and carrying out quality analysis based on the product production application characteristic information to obtain production quality analysis information, and carrying out product production regulation and control according to the production quality analysis information.
In one embodiment, the step S600 of the present application further includes:
step S610: when the production quality analysis information is unqualified, dynamically marking the unqualified product;
step S620: carrying out production data tracing on the unqualified product to obtain a negative product production quality tracing data stream;
step S630: carrying out production abnormality cause analysis based on the negative product production quality traceability data stream to generate product production abnormality parameters;
step S640: and adjusting production parameters of the production flow of the product based on the abnormal production parameters of the product.
Specifically, quality analysis is performed based on the product production application feature information: a baseline standard for the product production application features can be set, the feature matching degree is evaluated and scored, and whether the product reaches the application standard is judged, yielding the production quality analysis information. Product production is then regulated according to this information. When the production quality analysis information is unqualified, the unqualified product is dynamically marked, and its production data is retrieved and traced to obtain a negative product production quality traceability data stream covering the production process. Production anomaly cause analysis is carried out on this data stream: the abnormal portion of the stream is extracted and analyzed to generate the product production anomaly parameters, for example an abnormal production temperature or processing time. Based on these anomaly parameters, the production parameters of the product production flow are adjusted, for example reducing the production temperature parameter back to the normal standard. Production quality is thereby standardized, abnormal production of subsequent products is avoided, and product production quality is further ensured.
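As a minimal sketch of the regulation loop in steps S610–S640: score against a baseline, trace the rejected product, locate out-of-range parameters, and pull them back into range. The parameter names, limits, and log format here are invented for illustration:

```python
def analyze_quality(feature_score, baseline=0.8):
    """Score the fused application feature against a baseline standard."""
    return {"score": feature_score, "qualified": feature_score >= baseline}

def trace_production(product_id, production_log):
    """Retrieve the traceability data stream for a rejected product."""
    return [rec for rec in production_log if rec["product_id"] == product_id]

def find_anomalies(trace, limits):
    """Compare each traced parameter with its normal range and return
    those that fall outside it (the product production anomaly parameters)."""
    anomalies = {}
    for rec in trace:
        for param, value in rec["params"].items():
            lo, hi = limits[param]
            if not lo <= value <= hi:
                anomalies[param] = value
    return anomalies

def adjust_parameters(current, anomalies, limits):
    """Clamp every anomalous parameter to the nearest bound of its normal
    range, e.g. reducing an over-temperature to the upper limit."""
    adjusted = dict(current)
    for param, value in anomalies.items():
        lo, hi = limits[param]
        adjusted[param] = min(max(value, lo), hi)
    return adjusted

# Illustrative use with a one-record production log.
log = [{"product_id": 7, "params": {"temperature": 240, "process_time": 30}}]
limits = {"temperature": (180, 220), "process_time": (20, 40)}
if not analyze_quality(0.6)["qualified"]:        # unqualified -> mark and trace
    trace = trace_production(7, log)
    anomalies = find_anomalies(trace, limits)    # {"temperature": 240}
    new_params = adjust_parameters({"temperature": 240, "process_time": 30},
                                   anomalies, limits)
```

The clamp-to-range rule is one plausible adjustment policy; the patent only says parameters are returned to the normal standard.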
In one embodiment, as shown in FIG. 3, there is provided a machine vision based product production management system comprising: the system comprises a production process whole-course monitoring module 11, a key frame extraction module 12, a super-pixel preprocessing module 13, a semantic segmentation module 14, a feature analysis module 15 and a product production regulation and control module 16, wherein:
the whole production process monitoring module 11 is used for monitoring the whole production process of the product through the visual detection device to obtain product production monitoring video information;
the key frame extraction module 12 is configured to perform video compression and key frame extraction on the product production monitoring video information to obtain product production key image information;
the super-pixel preprocessing module 13 is used for performing super-pixel preprocessing based on the product production key image information to obtain a product production super-pixel block region set;
the semantic segmentation module 14 is used for carrying out semantic segmentation on the product production super-pixel block region set to obtain a product production characteristic semantic segmentation result;
the feature analysis module 15 is configured to perform feature analysis based on the product production feature semantic segmentation result, so as to obtain product production application feature information;
the product production control module 16 is configured to perform quality analysis based on the product production application feature information, obtain production quality analysis information, and perform product production control according to the production quality analysis information.
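The six modules above form a linear pipeline from monitoring to regulation. A minimal sketch of that wiring, with each module reduced to a pluggable callable (the actual module implementations are assumed to exist elsewhere):

```python
class ProductionVisionPipeline:
    """Chains the six modules of the management system in order:
    monitoring -> key frames -> superpixels -> segmentation ->
    feature analysis -> quality regulation."""

    def __init__(self, monitor, extract_keyframes, superpixel, segment,
                 analyze_features, regulate):
        self.stages = [monitor, extract_keyframes, superpixel, segment,
                       analyze_features, regulate]

    def run(self, source):
        """Feed the output of each stage into the next."""
        data = source
        for stage in self.stages:
            data = stage(data)
        return data

# Illustrative use: stub stages that just tag the data they produce.
stages = [lambda d, t=t: d + [t]
          for t in ["video", "keyframes", "superpixels",
                    "segments", "features", "decision"]]
pipeline = ProductionVisionPipeline(*stages)
result = pipeline.run([])
```

The linear chaining mirrors the module descriptions; a production system would likely run the monitoring stage continuously and trigger the rest per key frame.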
In one embodiment, the system further comprises:
the seed point selection unit is used for initializing seed points of the product image and reselecting target seed points in a preset adjacent area of the seed points of the product image;
the class label distribution unit is used for distributing class labels to the neighborhood pixel point set of the target seed point to obtain a pixel point seed label set;
the pixel distance information obtaining unit is used for calculating the distance between each pixel point in the neighborhood pixel point set and the target seed point to obtain a pixel distance information set;
the super pixel block area obtaining unit is used for screening the minimum distance seed points in the pixel distance information set as pixel point clustering centers, performing iterative optimization to preset times, and performing matching segmentation on the pixel point clustering centers and the pixel point seed label sets to obtain the product production super pixel block area set.
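The clustering step above (assign each pixel to its minimum-distance seed, then iterate a preset number of times) can be sketched as a toy k-means-style loop. The distance function is passed in, and the data in the usage example are illustrative:

```python
def assign_clusters(pixels, seeds, distance):
    """Label each pixel with the index of its minimum-distance seed,
    as in the screening/clustering-center step above."""
    labels = []
    for p in pixels:
        dists = [distance(p, s) for s in seeds]
        labels.append(dists.index(min(dists)))
    return labels

def recenter(pixels, labels, k):
    """Move each seed (clustering center) to the mean of its members."""
    seeds = []
    dim = len(pixels[0])
    for c in range(k):
        members = [p for p, lab in zip(pixels, labels) if lab == c]
        seeds.append(tuple(sum(m[i] for m in members) / len(members)
                           for i in range(dim)))
    return seeds

def superpixel_cluster(pixels, seeds, distance, iterations=10):
    """Iterate assignment and recentering a preset number of times."""
    labels = []
    for _ in range(iterations):
        labels = assign_clusters(pixels, seeds, distance)
        seeds = recenter(pixels, labels, len(seeds))
    return labels, seeds

# Illustrative use: two well-separated groups, squared Euclidean distance.
pixels = [(0, 0), (0, 1), (10, 10), (10, 11)]
seeds = [(0, 0), (10, 10)]
sq_dist = lambda p, s: (p[0] - s[0]) ** 2 + (p[1] - s[1]) ** 2
labels, centers = superpixel_cluster(pixels, seeds, sq_dist, iterations=3)
```

A real SLIC-style implementation would restrict each seed's search to a local 2S×2S window; this sketch searches all seeds for clarity.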
In one embodiment, the system further comprises:
a coordinate information obtaining unit, configured to obtain seed color coordinate information [L_i, a_i, b_i] and seed space coordinate information [x_i, y_i] of the target seed point, and pixel color coordinate information [L_j, a_j, b_j] and pixel space coordinate information [x_j, y_j] of each pixel point;
a distance information obtaining unit, configured to calculate color distance information d_c between the seed color coordinate information and the pixel color coordinate information, and spatial distance information d_s between the seed space coordinate information and the pixel space coordinate information;
a distance fusion calculation unit, configured to perform fusion calculation on the color distance information d_c and the spatial distance information d_s based on a distance metric function to obtain the pixel distance information set.
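The two distances and their fusion can be written directly from the form given in claim 4 (Euclidean color distance in CIELAB, Euclidean spatial distance, then a weighted fusion). The values of S (seed spacing) and m (color weight) in the usage example are illustrative:

```python
import math

def color_distance(seed_lab, pixel_lab):
    """d_c: Euclidean distance in CIELAB color space."""
    return math.sqrt(sum((p - s) ** 2 for s, p in zip(seed_lab, pixel_lab)))

def spatial_distance(seed_xy, pixel_xy):
    """d_s: Euclidean distance in image coordinates."""
    return math.sqrt(sum((p - s) ** 2 for s, p in zip(seed_xy, pixel_xy)))

def fused_distance(d_c, d_s, S, m):
    """D = sqrt((d_c/m)^2 + (d_s/S)^2): color normalized by the weight m,
    space normalized by the adjacent super-pixel distance S."""
    return math.sqrt((d_c / m) ** 2 + (d_s / S) ** 2)

# Illustrative use with simple coordinates.
d_c = color_distance((0, 0, 0), (3, 4, 0))      # 5.0
d_s = spatial_distance((0, 0), (6, 8))          # 10.0
D = fused_distance(d_c, d_s, S=10, m=5)
```

Larger m makes spatial proximity dominate (more compact superpixels); smaller m lets color similarity dominate.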
In one embodiment, the system further comprises:
the image labeling and dividing unit is used for acquiring a product production image data set, and carrying out image labeling and dividing on the product production image data set to obtain a product production image dividing result set;
the encoder and decoder constructing unit is used for constructing a network encoder and a network decoder based on the full convolution neural network structure;
the semantic segmentation training unit is used for carrying out semantic segmentation training on the network encoder and the network decoder by adopting the product production image segmentation result set to obtain a product semantic segmentation model;
the semantic segmentation unit is used for carrying out semantic segmentation on the product production super-pixel block region set based on the product semantic segmentation model and outputting the product production characteristic semantic segmentation result.
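The encoder–decoder structure of the full convolutional network can be illustrated with a toy pure-Python stand-in: max-pool downsampling for the encoder and nearest-neighbour upsampling for the decoder. A real implementation would train learned convolution and transposed-convolution layers in a deep-learning framework; this sketch only mirrors the shape of the computation and uses a fixed threshold in place of a learned classifier:

```python
def encode(image, factor=2):
    """Encoder stage: max-pool style downsampling with stride = factor."""
    h, w = len(image), len(image[0])
    return [[max(image[i + u][j + v]
                 for u in range(factor) for v in range(factor))
             for j in range(0, w - factor + 1, factor)]
            for i in range(0, h - factor + 1, factor)]

def decode(feature_map, factor=2):
    """Decoder stage: nearest-neighbour upsampling back to input size,
    standing in for a learned transposed convolution."""
    out = []
    for row in feature_map:
        up_row = [v for v in row for _ in range(factor)]
        out.extend([up_row[:] for _ in range(factor)])
    return out

def segment(image, threshold=0.5):
    """End-to-end pass: encode, decode, then per-pixel class labels."""
    restored = decode(encode(image))
    return [[1 if v >= threshold else 0 for v in row] for row in restored]

# Illustrative use: a 4x4 image with a bright top-right quadrant.
img = [[0.0, 0.0, 1.0, 1.0],
       [0.0, 0.0, 1.0, 1.0],
       [0.0, 0.0, 0.0, 0.0],
       [0.0, 0.0, 0.0, 0.0]]
mask = segment(img)
```

The coarse-then-restore flow is what lets an encoder-decoder network output a dense per-pixel segmentation at the input resolution.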
In one embodiment, the system further comprises:
the production convolution feature set obtaining unit is used for obtaining a production convolution feature set according to the product production application quality standard;
the traversal convolution calculation unit is used for performing traversal convolution calculation on the product production feature semantic segmentation result according to the production convolution feature set to obtain a production convolution calculation result;
a production application feature set obtaining unit, configured to obtain a production application feature set that conforms to a predetermined convolution numerical range based on the production convolution calculation result;
and the feature fusion unit is used for fusing all the features in the production application feature set to generate the product production application feature information.
In one embodiment, the system further comprises:
the product dynamic marking unit is used for dynamically marking the unqualified product when the production quality analysis information is unqualified;
the production data tracing unit is used for tracing the production data of the unqualified product to obtain a negative product production quality tracing data stream;
the production anomaly cause analysis unit is used for carrying out production anomaly cause analysis based on the negative product production quality traceability data stream and generating product production anomaly parameters;
and the production parameter adjusting unit is used for adjusting the production parameters of the product production flow based on the abnormal production parameters of the product.
For a specific embodiment of the machine vision based product production management system, reference may be made to the above embodiment of the machine vision based product production management method, which is not repeated here. Each of the above modules in the machine vision based product production management apparatus may be implemented in whole or in part by software, by hardware, or by a combination thereof. The modules may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 4. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing news data, time attenuation factors and other data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by a processor, implements a machine vision based product production management method.
Those skilled in the art will appreciate that the structure shown in fig. 4 is only a block diagram of the part of the structure relevant to the present application and does not constitute a limitation on the computer device to which the present solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of: the whole process of the production process of the product is monitored by a visual detection device, and product production monitoring video information is obtained; performing video compression and key frame extraction on the product production monitoring video information to obtain product production key image information; performing super-pixel preprocessing on the basis of the product production key image information to obtain a product production super-pixel block region set; carrying out semantic segmentation on the product production super-pixel block region set to obtain a product production characteristic semantic segmentation result; performing feature analysis based on the product production feature semantic segmentation result to obtain product production application feature information; and carrying out quality analysis based on the product production application characteristic information to obtain production quality analysis information, and carrying out product production regulation and control according to the production quality analysis information.
In one embodiment, a computer readable storage medium is provided, having a computer program stored thereon which, when executed by a processor, performs the following steps: monitoring the whole product production process through a visual detection device to obtain product production monitoring video information; performing video compression and key frame extraction on the product production monitoring video information to obtain product production key image information; performing super-pixel preprocessing based on the product production key image information to obtain a product production super-pixel block region set; performing semantic segmentation on the product production super-pixel block region set to obtain a product production feature semantic segmentation result; performing feature analysis based on the product production feature semantic segmentation result to obtain product production application feature information; and performing quality analysis based on the product production application feature information to obtain production quality analysis information, and regulating product production according to the production quality analysis information. The technical features of the above embodiments may be combined arbitrarily; for brevity, not every possible combination is described, but any combination of these technical features that involves no contradiction should be considered within the scope of this description.
The above embodiments merely represent a few implementations of the present application; although they are described in relative detail, they are not to be construed as limiting the scope of the invention patent. It should be noted that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Accordingly, the protection scope of this patent application shall be subject to the appended claims.

Claims (10)

1. A machine vision based product production management method, the method comprising:
the whole process of the production process of the product is monitored by a visual detection device, and product production monitoring video information is obtained;
performing video compression and key frame extraction on the product production monitoring video information to obtain product production key image information;
performing super-pixel preprocessing on the basis of the product production key image information to obtain a product production super-pixel block region set;
carrying out semantic segmentation on the product production super-pixel block region set to obtain a product production characteristic semantic segmentation result;
performing feature analysis based on the product production feature semantic segmentation result to obtain product production application feature information;
and carrying out quality analysis based on the product production application characteristic information to obtain production quality analysis information, and carrying out product production regulation and control according to the production quality analysis information.
2. The method of claim 1, wherein the obtaining a set of product-producing superpixel block regions comprises:
initializing a product image seed point, and reselecting a target seed point in a preset neighborhood of the product image seed point;
class labels are distributed to the neighborhood pixel point set of the target seed point, and a pixel point seed label set is obtained;
calculating the distance between each pixel point in the neighborhood pixel point set and the target seed point to obtain a pixel distance information set;
and screening the minimum distance seed points in the pixel distance information set as pixel point clustering centers, performing iterative optimization up to a preset number of times, and performing matching segmentation on the pixel point clustering centers and the pixel point seed label set to obtain the product production super-pixel block region set.
3. The method of claim 2, wherein the obtaining a set of pixel distance information comprises:
acquiring seed color coordinate information [L_i, a_i, b_i] and seed space coordinate information [x_i, y_i] of the target seed point, and pixel color coordinate information [L_j, a_j, b_j] and pixel space coordinate information [x_j, y_j] of each pixel point;
calculating color distance information d_c between the seed color coordinate information and the pixel color coordinate information, and spatial distance information d_s between the seed space coordinate information and the pixel space coordinate information;
performing fusion calculation on the color distance information d_c and the spatial distance information d_s based on a distance metric function to obtain the pixel distance information set.
4. A method according to claim 3, wherein the distance metric function is in particular:
d_c = sqrt((L_j - L_i)^2 + (a_j - a_i)^2 + (b_j - b_i)^2)

d_s = sqrt((x_j - x_i)^2 + (y_j - y_i)^2)

D = sqrt((d_c / m)^2 + (d_s / S)^2)

wherein d_c represents the color distance information, d_s represents the spatial distance information, D represents the fused pixel distance, S represents the adjacent super-pixel distance, and m represents the weight of the color space.
5. The method of claim 1, wherein the obtaining the product production feature semantic segmentation result comprises:
acquiring a product production image data set, and performing image labeling segmentation on the product production image data set to obtain a product production image segmentation result set;
constructing a network encoder and a network decoder based on the full convolutional neural network structure;
carrying out semantic segmentation training on the network encoder and the network decoder by adopting the product production image segmentation result set to obtain a product semantic segmentation model;
and carrying out semantic segmentation on the product production super-pixel block region set based on the product semantic segmentation model, and outputting the product production characteristic semantic segmentation result.
6. The method of claim 1, wherein the obtaining product production application characteristic information comprises:
obtaining a production convolution feature set according to the product production application quality standard;
performing traversal convolution calculation on the product production feature semantic segmentation result according to the production convolution feature set to obtain a production convolution calculation result;
based on the production convolution calculation result, obtaining a production application feature set conforming to a preset convolution numerical range;
and fusing all the characteristics in the production application characteristic set to generate the product production application characteristic information.
7. The method of claim 1, wherein said performing product production control based on said production quality analysis information comprises:
when the production quality analysis information is unqualified, dynamically marking the unqualified product;
carrying out production data tracing on the unqualified product to obtain a negative product production quality tracing data stream;
carrying out production abnormality cause analysis based on the negative product production quality traceability data stream to generate product production abnormality parameters;
and adjusting production parameters of the production flow of the product based on the abnormal production parameters of the product.
8. A machine vision based product production management system, the system comprising:
the whole-process monitoring module is used for monitoring the whole process of the production process of the product through the visual detection device to obtain product production monitoring video information;
the key frame extraction module is used for carrying out video compression and key frame extraction on the product production monitoring video information to obtain product production key image information;
the super-pixel preprocessing module is used for performing super-pixel preprocessing on the basis of the product production key image information to obtain a product production super-pixel block region set;
the semantic segmentation module is used for carrying out semantic segmentation on the product production super-pixel block region set to obtain a product production characteristic semantic segmentation result;
the feature analysis module is used for carrying out feature analysis based on the product production feature semantic segmentation result to obtain product production application feature information;
and the product production regulation and control module is used for carrying out quality analysis based on the product production application characteristic information, obtaining production quality analysis information and carrying out product production regulation and control according to the production quality analysis information.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202310555748.7A 2023-05-17 2023-05-17 Product production management method and system based on machine vision Withdrawn CN116433653A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310555748.7A CN116433653A (en) 2023-05-17 2023-05-17 Product production management method and system based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310555748.7A CN116433653A (en) 2023-05-17 2023-05-17 Product production management method and system based on machine vision

Publications (1)

Publication Number Publication Date
CN116433653A true CN116433653A (en) 2023-07-14

Family

ID=87085693

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310555748.7A Withdrawn CN116433653A (en) 2023-05-17 2023-05-17 Product production management method and system based on machine vision

Country Status (1)

Country Link
CN (1) CN116433653A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117235606A (en) * 2023-11-10 2023-12-15 张家港广大特材股份有限公司 Production quality management method and system for special stainless steel
CN117893945A (en) * 2024-01-18 2024-04-16 北京合工仿真技术有限公司 Method, device and equipment for detecting production beat data

Similar Documents

Publication Publication Date Title
CN113269237B (en) Assembly change detection method, device and medium based on attention mechanism
CN116433653A (en) Product production management method and system based on machine vision
CN111179249A (en) Power equipment detection method and device based on deep convolutional neural network
CN112861575A (en) Pedestrian structuring method, device, equipment and storage medium
CN115294117B (en) Defect detection method and related device for LED lamp beads
Choi et al. Attention-based multimodal image feature fusion module for transmission line detection
CN112419202B (en) Automatic wild animal image recognition system based on big data and deep learning
CN112801047B (en) Defect detection method and device, electronic equipment and readable storage medium
CN111444923A (en) Image semantic segmentation method and device under natural scene
Geng et al. An improved helmet detection method for YOLOv3 on an unbalanced dataset
CN113515655A (en) Fault identification method and device based on image classification
CN113989604A (en) Tire DOT information identification method based on end-to-end deep learning
CN114359787A (en) Target attribute identification method and device, computer equipment and storage medium
CN114067339A (en) Image recognition method and device, electronic equipment and computer readable storage medium
US11393091B2 (en) Video image processing and motion detection
CN117217020A (en) Industrial model construction method and system based on digital twin
CN115457652A (en) Pedestrian re-identification method and device based on semi-supervised learning and storage medium
CN114663751A (en) Power transmission line defect identification method and system based on incremental learning technology
CN117557775B (en) Substation power equipment detection method and system based on infrared and visible light fusion
CN115700821B (en) Cell identification method and system based on image processing
CN117670820B (en) Plastic film production defect detection method and system
CN115802013B (en) Video monitoring method, device and equipment based on intelligent illumination and storage medium
CN115631340A (en) Transformer substation operation inspection image identification method, computer equipment and storage medium
CN116958636A (en) Training method, device, equipment and storage medium of picture discrimination model
CN115909143A (en) Method and device for intelligently identifying manual work of robot hand sample

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20230714