CN116309406A - Appearance defect detection system, method and storage medium - Google Patents

Appearance defect detection system, method and storage medium

Info

Publication number
CN116309406A
CN116309406A (Application CN202310208118.2A)
Authority
CN
China
Prior art keywords
image
model
training
defect detection
task
Prior art date
Legal status
Withdrawn
Application number
CN202310208118.2A
Other languages
Chinese (zh)
Inventor
陈永彬
卿超
黄嘉思
黄国健
徐健晖
陈裕斌
赖文鑫
谢思
梁景熙
黄国芹
陈楚真
叶振华
蔡阳光
Current Assignee
Guangdong Mechanical and Electrical College
Original Assignee
Guangdong Mechanical and Electrical College
Priority date
Filing date
Publication date
Application filed by Guangdong Mechanical and Electrical College
Priority to CN202310208118.2A
Publication of CN116309406A
Status: Withdrawn

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an appearance defect detection system, an appearance defect detection method and a storage medium, which are widely applicable to the technical field of distributed processing. In the system, the client acquires, registers and augments training images, and the server then trains the model on the augmented training images to obtain a model weight file; the client updates its model parameters according to the weight file and performs appearance defect detection on acquired real-time product images through the weight-updated appearance defect detection model. Because the design, training and related processes of separate models no longer need to be repeated for different products, the data utilization rate is effectively improved, the training workload is reduced, and the development efficiency is improved.

Description

Appearance defect detection system, method and storage medium
Technical Field
The invention relates to the technical field of distributed processing, in particular to an appearance defect detection system, an appearance defect detection method and a storage medium.
Background
In the related art, deep learning model training usually relies on separate software for image acquisition, labeling and testing, and data is not shared between the different modules. As a result, a separate model training process is required whenever appearance defects of a different type of product are to be detected, which wastes resources, duplicates work and ultimately reduces development efficiency.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems existing in the prior art. Therefore, the invention provides an appearance defect detection system, an appearance defect detection method and a storage medium, which can effectively improve the data utilization rate, reduce the training workload and improve the development efficiency.
In one aspect, an embodiment of the present invention provides an appearance defect detection system, including:
a client, configured to acquire training images, wherein the training images comprise a first image and a second image, the first image comprising a defect camera image and the second image comprising an image obtained through white light interference processing; to perform data registration and augmentation on the training images and then send them to a server; to receive a weight file returned by the server and update the weights of the appearance defect detection model according to the weight file; and to perform appearance defect detection on acquired real-time product images through the weight-updated appearance defect detection model;
and the server, configured to perform model training according to the augmented images to obtain a model weight file, and to send the model weight file to the client.
In some embodiments, the server includes a business application layer for performing model tuning, model training, distributed training, user login and registration, log viewing, result viewing, publishing a dataset, or publishing a model.
In some embodiments, the server further comprises a scheduling layer, wherein the scheduling layer is used for packaging tasks, scheduling tasks, submitting tasks or updating task states.
In some embodiments, the packaging task includes:
encapsulating tasks submitted through a user interface;
and inserting the encapsulated tasks into a queue to be scheduled, wherein the tasks in the queue to be scheduled are ordered according to priority.
In some embodiments, the submitting task includes:
submitting a target task object to a cluster, wherein the cluster comprises a Kubernetes cluster, and the target task object is a task object that can be recognized by the Kubernetes cluster.
In some embodiments, the Kubernetes cluster includes an executor and an adapter;
the executor is used for receiving task object parameters, parsing the task object parameters, and extracting a data set acquisition address and a file acquisition address; acquiring a data set file according to the data set acquisition address and acquiring an execution file according to the file acquisition address; and storing the data set file and the execution file locally in the container;
the adapter is used for executing tasks according to the data set files and the execution files.
In some embodiments, the server further includes a data storage layer and a cluster layer, the data set file and the execution file are stored in the data storage layer, and the Kubernetes clusters are distributed in the cluster layer.
In some embodiments, the server further includes a mirror layer, where the mirror layer is configured to provide an environment required to perform a task.
In another aspect, an embodiment of the present invention provides an appearance defect detection method, where the method is applied to a client, and the method includes the following steps:
acquiring a training image, wherein the training image comprises a first image and a second image, the first image comprises a defect camera image, and the second image comprises an image obtained through white light interference processing;
performing data registration and augmentation on the training images and then sending them to a server, the server performing model training according to the augmented images to obtain a model weight file;
updating the weight of the appearance defect detection model according to the weight file returned by the server;
and performing appearance defect detection on the acquired real-time product images through the weight-updated appearance defect detection model.
In another aspect, an embodiment of the present invention provides a storage medium in which a computer-executable program is stored, the computer-executable program being for implementing the appearance defect detection method when executed by a processor.
The appearance defect detection system provided by the embodiment of the invention has the following beneficial effects:
according to the embodiment, after the training images are collected, registered and amplified through the client, the model is trained through the server according to the amplified training images, so that the model weight file is obtained, the client can update model parameters according to the model weight file, appearance defect detection is carried out on the collected real-time product images through the appearance defect detection model after weight updating, and further, processes of model design, training and the like of different types are not required to be repeated, the data utilization rate is effectively improved, the training workload is reduced, and the development efficiency is improved.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The invention is further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a schematic diagram of an appearance defect detection system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a frame of a server according to an embodiment of the present invention;
FIG. 3 is a flow chart of data processing at a server according to an embodiment of the present invention;
FIG. 4 is a diagram of an appearance defect detection method according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
In the description of the present invention, it should be understood that references to orientation descriptions such as upper, lower, front, rear, left, right, etc. are based on the orientation or positional relationship shown in the drawings, are merely for convenience of description of the present invention and to simplify the description, and do not indicate or imply that the apparatus or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the present invention.
In the description of the present invention, "several" means one or more, and "a plurality of" means two or more; "greater than", "less than", "exceeding", etc. are understood to exclude the stated number, while "above", "below", "within", etc. are understood to include the stated number. The terms "first" and "second" are used only to distinguish technical features and should not be construed as indicating or implying relative importance, implicitly indicating the number of technical features indicated, or implicitly indicating the precedence of the technical features indicated.
In the description of the present invention, unless explicitly defined otherwise, terms such as arrangement, installation, connection, etc. should be construed broadly and the specific meaning of the terms in the present invention can be reasonably determined by a person skilled in the art in combination with the specific contents of the technical scheme.
In the description of the present invention, the descriptions of the terms "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Referring to FIG. 1, an embodiment of the present invention provides an appearance defect detection system, which includes a client and a server. The client is used for acquiring training images; specifically, the training images include a first image and a second image, the first image including a defect camera image and the second image including an image obtained through white light interference processing. The client is also used for performing data registration and augmentation on the training images and then sending them to the server; receiving a weight file returned by the server and updating the weights of the appearance defect detection model according to the weight file; and performing appearance defect detection on acquired real-time product images through the weight-updated appearance defect detection model. The server performs model training according to the augmented images to obtain a model weight file, and sends the model weight file to the client.
In this embodiment, the appearance defect detection system is a distributed system, and there may be a plurality of clients. Specifically, the client is real-time online appearance defect detection and sorting equipment that integrates 3D image acquisition, data set expansion, defect identification and detection, and sorting functions, and is responsible for data acquisition, data fusion processing, data set expansion, online defect detection and the like. The server is mainly responsible for data annotation, model training, management of models and data, user management and the like. The main processing procedure of the system of this embodiment is as follows: the client acquires image data from a defect camera together with high-precision depth data based on white light interference, performs RGBD point cloud registration, data fusion and data expansion, and uploads the result to the server; the server receives the small amount of RGBD training data collected by the client and, through simple human-machine interaction, augments and enhances the data with a generative adversarial network to generate training samples, trains automatically, and then distributes the training data in parallel to the corresponding client nodes. Using the model parameters returned by the server, the client performs defect semantic segmentation and high-precision edge segmentation based on a probabilistic graphical model on the workpiece RGBD information obtained online in real time, thereby realizing defect classification, identification and quantitative measurement, and carries out sorting control according to a preset standard.
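As a purely illustrative, non-limiting sketch of this client-side flow, the following Python example outlines the register/fuse, augment and upload steps and the retrieval of the trained weight file; the server address, endpoint paths and helper functions used here are assumptions introduced for illustration and are not part of the disclosed system.

```python
# Minimal client-side sketch. The /upload and /weights endpoints and the
# RGBD fusion/augmentation details are illustrative assumptions only.
import time
import requests
import numpy as np

SERVER = "http://training-server:8080"   # hypothetical server address

def register_and_fuse(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Fuse the defect-camera image with the white-light-interference depth map
    into a 4-channel RGBD array (real registration would align the two views)."""
    depth = (depth - depth.min()) / (np.ptp(depth) + 1e-9)   # normalise depth
    return np.dstack([rgb, (depth * 255).astype(np.uint8)])

def augment(rgbd: np.ndarray) -> list[np.ndarray]:
    """Very small augmentation set: flips and a 90-degree rotation."""
    return [rgbd, np.fliplr(rgbd), np.flipud(rgbd), np.rot90(rgbd)]

def upload_training_samples(samples: list[np.ndarray]) -> None:
    """Send each augmented RGBD sample to the server as raw bytes."""
    for i, s in enumerate(samples):
        requests.post(f"{SERVER}/upload",
                      files={"sample": (f"sample_{i}.rgbd", s.astype(np.uint8).tobytes())},
                      data={"shape": ",".join(map(str, s.shape))},
                      timeout=30)

def fetch_weights(dst: str = "defect_model.pt") -> str:
    """Poll the server until a newly trained weight file is available."""
    while True:
        r = requests.get(f"{SERVER}/weights", timeout=30)
        if r.status_code == 200:
            with open(dst, "wb") as f:
                f.write(r.content)
            return dst
        time.sleep(10)
```

The downloaded weight file would then be loaded into the client's detection model (for example with `model.load_state_dict(torch.load(path))` in a PyTorch setting) before performing online inspection.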
In this embodiment of the present application, as shown in fig. 2, the server includes a service application layer, a scheduling layer, a data storage layer, a cluster layer, and a mirror layer. Specifically, the service application layer is configured to perform the following functions:
model parameter adjusting function: the platform mainly provides a function of executing a deep learning task for a user, the user needs to debug a self deep learning model before submitting the deep learning task, in order to facilitate user debugging, the platform provides a model parameter adjusting function, the user can select various parameters on line, then a parameter adjusting interface is opened on line for uploading a data set, and the user can check the execution condition of the task in the foreground in real time, so that the parameters can be modified at any time.
Model training: general deep learning tasks are mainly divided into deep learning training tasks and prediction tasks. In a deep learning training task, the user writes model training code, labels the existing data set and trains on it, thereby generating a model that captures the characteristics of the data set. In a deep learning prediction task, the user takes the trained model generated by a model training task, sets a corresponding prediction data set, and makes predictions on it according to the features extracted in the trained model, thereby obtaining a prediction result. The system provides a training task function with which the user can generate a trained model and then publish it to the model repository. When submitting a deep learning prediction task, the user may directly run a model from the model repository. During task execution, the user can view the execution state and the execution log; after the task has finished, the user can download the execution results, the generated trained model and other files.
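As an illustrative, non-limiting sketch of such a training task, the following PyTorch example trains a small stand-in network, saves the weight file and copies it into a model repository directory; the network, the paths (/models/repo) and the random data are assumptions for illustration only and do not reproduce the platform's actual training code.

```python
# Sketch of a training-task script as the platform might run it in a container.
import pathlib
import shutil
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def build_model(num_classes: int = 2) -> nn.Module:
    # tiny stand-in classifier for 4-channel RGBD patches
    return nn.Sequential(
        nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, num_classes))

def train_task(dataset: TensorDataset, out_dir: str = "/models/repo", epochs: int = 5):
    model, loss_fn = build_model(), nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    weights = pathlib.Path("weights.pt")
    torch.save(model.state_dict(), weights)              # weight file for the client
    pathlib.Path(out_dir).mkdir(parents=True, exist_ok=True)
    shutil.copy(weights, out_dir)                        # "publish" to the model repository
    return weights

if __name__ == "__main__":
    x = torch.randn(32, 4, 64, 64)                       # stand-in RGBD samples
    y = torch.randint(0, 2, (32,))
    train_task(TensorDataset(x, y))
```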
Distributed training function: training in a single-machine environment typically runs into problems such as GPU memory overflow caused by complex models, or data volumes too large to load, so distributed training is required. Distributed training of deep learning tasks is generally divided into a single-machine multi-GPU mode and a multi-machine multi-GPU mode. Deep learning tasks with large data volumes and complex computation are generally run in single-machine multi-GPU mode, i.e. one host is equipped with several GPUs that run the same deep learning task simultaneously. Deep learning tasks with excessively large data volumes and computation can be run in multi-machine multi-GPU mode. Based on these considerations, the platform provides a distributed training task function: the user can select the single-machine multi-GPU or multi-machine multi-GPU mode to run a deep learning task, which improves task execution efficiency.
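For the single-machine multi-GPU mode described above, a minimal sketch using PyTorch DistributedDataParallel might look as follows (launched with `torchrun --nproc_per_node=<num_gpus> ddp_train.py`); the model and data are stand-ins, and the platform's own distributed mechanism is not reproduced here.

```python
# Single-machine multi-GPU sketch with PyTorch DDP (illustrative only).
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

def main():
    dist.init_process_group("nccl")                 # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = nn.Linear(128, 2).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])     # gradients sync across GPUs
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    data = TensorDataset(torch.randn(1024, 128), torch.randint(0, 2, (1024,)))
    sampler = DistributedSampler(data)              # each GPU sees a different shard
    loader = DataLoader(data, batch_size=32, sampler=sampler)

    for epoch in range(3):
        sampler.set_epoch(epoch)
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            opt.zero_grad()
            nn.functional.cross_entropy(model(x), y).backward()
            opt.step()
    if dist.get_rank() == 0:
        torch.save(model.module.state_dict(), "weights.pt")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```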
User login and registration: for permission control and security of the platform, a user must first register an account and log in. After logging in, the user can submit tasks online and manage personal files and other information, for example viewing historical tasks, downloading task results, viewing historical data sets and viewing logs.
Log viewing: for tasks executed in the background, such as deep learning training and prediction tasks, the platform provides a log viewing function so that the user can follow the execution details of a task in real time. After a user submits a deep learning task, the task prints a log at regular intervals during background execution and uploads it to the system backend periodically; the user can periodically download the real-time execution log from the task execution backend to check the execution details of the task.
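A minimal sketch of this periodic log upload, assuming a hypothetical /logs endpoint, a local log file and a 30-second interval (none of which are specified in the disclosure), could be:

```python
# Sketch: periodically upload only the new portion of a running task's log.
import threading
import time
import requests

LOG_PATH = "task.log"
BACKEND = "http://platform-backend:8080/logs"   # hypothetical endpoint

def upload_logs_periodically(task_id: str, interval: int = 30) -> None:
    offset = 0                                   # remember how much has been sent
    while True:
        time.sleep(interval)
        try:
            with open(LOG_PATH, "r", encoding="utf-8") as f:
                f.seek(offset)
                chunk = f.read()
                offset = f.tell()
            if chunk:
                requests.post(BACKEND, data={"task_id": task_id, "log": chunk}, timeout=10)
        except (OSError, requests.RequestException):
            pass                                 # retry on the next round

# in a real task this would run alongside the training process
threading.Thread(target=upload_logs_periodically, args=("task-001",), daemon=True).start()
```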
Result viewing: for tasks executed in the background, such as deep learning training, a result viewing function is provided so that the user can obtain the execution results of a task. After a user submits a deep learning task, the task saves its execution results during background execution; when execution is complete, the result files are packed, compressed and uploaded to the task execution backend, from which the user can then download them.
Publishing a data set: for frequently used data sets, a user can publish a data set to the data set repository after uploading it to the platform. The data set repository is a file directory in the file system. By publishing a data set to the repository, the user does not need to upload it again the next time it is needed and can instead select the existing data set from the repository. Because data sets are generally large, the data set repository greatly reduces wasted storage resources and enables resources to be reused.
Publishing a model: for frequently used models or models with good prediction performance, a user can publish the trained model generated by a model training task to the model repository after the task has finished. The model repository is likewise a file directory in the file system. The user does not need to upload the model again the next time it is used for a prediction task and can instead select the existing model from the model repository; this reduces wasted storage resources, enables resources to be reused, and makes it convenient for the user to keep test results.
File upload and download: with this function the user can upload the data sets, data models, executable files and other content needed by a deep learning task, and download the data models, execution logs, execution results and other files generated by task execution.
The scheduling layer is used for executing the following functions:
packaging: for tasks submitted to the background by users, the system packages the tasks, then inserts the tasks into a queue to be scheduled, and the scheduler orders the tasks according to priority and then regularly acquires the tasks to be scheduled from the queue to be scheduled.
Scheduling tasks: for tasks meeting the scheduling conditions, the scheduler distributes tasks to be scheduled to the submitter for submission.
Submitting a task: for tasks distributed by the scheduler, the submitter encapsulates each task into a task object that the cluster can recognize and then submits the task object to the cluster for execution.
Updating task state: for a task that has been submitted for execution, its execution state needs to be monitored continuously; the scheduler accesses the cluster periodically to obtain the execution state of the task and then writes the latest state to the database.
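By way of illustration only, the packaging step and the priority-ordered queue might be sketched in Python as follows; the platform itself keeps this queue in a database task table, and the field names and the "lower value = higher priority" convention used here are assumptions.

```python
# Simplified in-memory sketch of task packaging and the queue to be scheduled.
import heapq
import itertools
import uuid
from dataclasses import dataclass, field

@dataclass(order=True)
class PackagedTask:
    priority: int                                  # lower value = scheduled earlier
    seq: int                                       # tie-breaker keeps FIFO order per priority
    task_id: str = field(compare=False)
    task_type: str = field(compare=False)          # e.g. "train" / "predict" / "distributed"
    payload: dict = field(compare=False, default_factory=dict)

_queue: list[PackagedTask] = []
_seq = itertools.count()

def package_task(request: dict, priority: int = 5) -> PackagedTask:
    """Encapsulate a user-submitted request and insert it into the queue to be scheduled."""
    task = PackagedTask(priority, next(_seq), str(uuid.uuid4()),
                        request.get("type", "train"), request)
    heapq.heappush(_queue, task)
    return task

def next_task_to_schedule() -> PackagedTask | None:
    """What the scheduler fetches on each polling round (highest priority first)."""
    return heapq.heappop(_queue) if _queue else None
```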
The data storage layer is used for executing the following functions:
storing a data table: the data table needed by the user to execute the task is stored by using a database, wherein the data table comprises user information, data set information, model information, task information, log information, result information and the like.
Storing files: the files needed for users to execute tasks, such as data set files, model files, executable files, result files and log files, are stored using a distributed file storage system.
The cluster layer is used for executing the following functions:
analysis tasks: and the task submitted by the user is dispatched to the cluster for execution after being scheduled and packaged by the scheduler, an executor is needed to be used for receiving the transmitted task object in the cluster, the executor analyzes the task object, corresponding task parameters are extracted, a data set, an executable file and task type information are acquired according to the parameters, and different task adapters are created for executing the tasks according to different task types.
Executing tasks: the corresponding task adapter is selected for execution according to the task type, such as training or prediction. Each task starts a process for execution.
The mirror layer is used for providing the environment required to execute tasks. Specifically, deep learning tasks may use different frameworks, such as TensorFlow, PyTorch or MXNet. All deep learning tasks are executed in Docker containers, and executing deep learning tasks built on different frameworks requires starting Docker containers with the corresponding deep learning framework environment. Base images supporting the different deep learning framework environments therefore need to be customized; for example, the base images supporting the three deep learning frameworks are a TensorFlow base image, a PyTorch base image and an MXNet base image respectively.
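To illustrate how the submitter, the mirror layer and the GPU resources might fit together, the following sketch uses the official Kubernetes Python client to wrap a task into a Job that runs in a framework-specific base image and requests GPU resources; the image names, registry, namespace, entry point and environment variable names are all assumptions introduced for this example, not the platform's actual configuration.

```python
# Sketch: the submitter wraps a task into a Kubernetes Job object and submits it.
from kubernetes import client, config

BASE_IMAGES = {                      # one base image per supported framework
    "tensorflow": "registry.local/dl/tensorflow-base:latest",
    "pytorch":    "registry.local/dl/pytorch-base:latest",
    "mxnet":      "registry.local/dl/mxnet-base:latest",
}

def submit_task(task_id: str, framework: str, dataset_url: str, code_url: str,
                gpus: int = 1, namespace: str = "dl-platform") -> None:
    config.load_kube_config()        # or load_incluster_config() inside the cluster
    container = client.V1Container(
        name=f"task-{task_id}",
        image=BASE_IMAGES[framework],                         # mirror-layer base image
        command=["python", "/opt/executor/run.py"],           # executor entry point
        env=[client.V1EnvVar(name="TASK_ID", value=task_id),
             client.V1EnvVar(name="DATASET_URL", value=dataset_url),  # data set acquisition address
             client.V1EnvVar(name="CODE_URL", value=code_url)],       # execution file acquisition address
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": str(gpus)}),             # GPU resources registered in the cluster
    )
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name=f"dl-task-{task_id}"),
        spec=client.V1JobSpec(
            backoff_limit=0,
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(containers=[container], restart_policy="Never"))))
    client.BatchV1Api().create_namespaced_job(namespace=namespace, body=job)
```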
In an embodiment of the present application, submitting the task includes submitting a target task object to the cluster. Specifically, the cluster comprises a Kubernetes cluster distributed at the cluster layer, and the target task object is a task object that the Kubernetes cluster can recognize. The Kubernetes cluster includes an executor and an adapter. The executor is used for receiving the task object parameters, parsing them, and extracting a data set acquisition address and a file acquisition address; acquiring the data set file according to the data set acquisition address and the execution file according to the file acquisition address; and storing the data set file and the execution file locally in the container. The adapter is used for executing the task according to the data set file and the execution file. More specifically, the executor receives the task object parameters passed in from the dispatcher, parses the parameters, and extracts the data set acquisition address, the execution file acquisition address and the task type; it stores the data set file and the execution file locally in the container; it creates a different adapter to execute the task according to the task type; it is responsible for uploading the logs generated during task execution at regular intervals; and after the task has finished, it uploads the saved results to the file system. Different types of tasks correspond to different adapters, and the adapter is the final stage that starts a process to perform the task: a process is started in the container to execute the task according to the execution file and the data set file passed in by the executor. After the task has been executed, its output is saved to a result file, and the client can download the weight file used for defect detection.
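A container-side executor of the kind described above might be sketched as follows; the environment variable names, workspace paths and adapter commands are assumptions, and the result and log upload steps are omitted for brevity.

```python
# Container-side executor sketch (illustrative; not the platform's executor plug-in).
import os
import subprocess
import urllib.request

def fetch(url: str, dst: str) -> str:
    """Download a remote file (data set or execution file) to container-local storage."""
    urllib.request.urlretrieve(url, dst)
    return dst

class TrainAdapter:
    def run(self, code: str, dataset: str) -> None:
        # start one process to execute the training task
        subprocess.run(["python", code, "--data", dataset, "--out", "/workspace/weights.pt"],
                       check=True)

class PredictAdapter:
    def run(self, code: str, dataset: str) -> None:
        subprocess.run(["python", code, "--data", dataset, "--model", "/workspace/weights.pt"],
                       check=True)

ADAPTERS = {"train": TrainAdapter, "predict": PredictAdapter}

def main() -> None:
    # 1. parse the task object parameters passed in by the dispatcher
    task_type   = os.environ["TASK_TYPE"]
    dataset_url = os.environ["DATASET_URL"]     # data set acquisition address
    code_url    = os.environ["CODE_URL"]        # execution file acquisition address
    # 2. store the data set file and execution file locally in the container
    dataset = fetch(dataset_url, "/workspace/dataset.tar")
    code    = fetch(code_url, "/workspace/task.py")
    # 3. create the adapter matching the task type and execute the task
    ADAPTERS[task_type]().run(code, dataset)
    # 4. results and logs would then be uploaded back to the file system (omitted)

if __name__ == "__main__":
    main()
```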
In this embodiment, based on the server application framework shown in FIG. 2 and as shown in FIG. 3, the server processes a task as follows. After a task is submitted to the server, the background task service mainly performs two operations: it receives and encapsulates the task request; then, after generating a task ID for the task and filling in the relevant attributes, it inserts the user request into the task table of the database, i.e. the task submission queue, for the scheduler to schedule.
When scheduling tasks, the scheduler mainly performs the following operations: once started, the scheduler monitors the queue of tasks to be scheduled at regular intervals, and when it finds tasks that meet the scheduling conditions, it fetches the tasks to be scheduled in priority order; for those tasks it judges whether the resource conditions are met, and the tasks that meet the resource conditions are submitted to the cluster for execution; finally, the scheduler distributes the tasks meeting the resource conditions to the cluster for execution and updates their state in the database.
Based on this, the system provided in the embodiments of the present application implements functions based on the following aspects:
First, a server-side software framework for a general-purpose deep learning distributed operation platform oriented towards industrial environments is designed based on Spring Boot. Given the heavy computational load of deep learning tasks, the system is logically split into a user module, a task module and a file module for design and development. Deep learning tasks are divided, according to user requirements, into three types: model development, training and prediction, and distributed training, and a different implementation flow is defined and developed for each task type.
Second, to address the complex deployment and difficult maintenance of the deep learning runtime environment, Docker images that deploy the deep learning runtime environment are built. Images are built for three deep learning frameworks: TensorFlow, PyTorch and MXNet.
Third, to address the complex execution flow and heavy computational load of deep learning tasks, an executor plug-in for executing deep learning tasks is designed and implemented, and the execution flow of a deep learning task is defined on the container side; it handles receiving task parameters, executing the task, and uploading the execution results and logs.
Fourth, to address the uneven distribution and difficult management of computing resources, GPU computing resources are registered in the cluster for unified allocation and invocation, achieving efficient management and utilization of the computing resources.
Fifth, to address the large storage requirements of deep learning tasks, files are stored using object storage. The Ceph file system is deployed on the cluster, which meets users' need to store large files and improves the scalability of the file system.
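Assuming the object storage is exposed through an S3-compatible gateway (such as the RADOS Gateway that Ceph provides, which is not specified in the disclosure), uploading a large result file might look like the following sketch; the endpoint, bucket name and credentials are placeholders.

```python
# Sketch: uploading a large result file to S3-compatible object storage.
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://ceph-rgw.local:7480",      # hypothetical gateway address
    aws_access_key_id="PLATFORM_KEY",
    aws_secret_access_key="PLATFORM_SECRET",
    config=Config(s3={"addressing_style": "path"}),
)

def upload_result(task_id: str, local_path: str, bucket: str = "dl-results") -> str:
    """Store a result file under <bucket>/<task_id>/<filename> and return the object key."""
    key = f"{task_id}/{local_path.rsplit('/', 1)[-1]}"
    s3.upload_file(local_path, bucket, key)          # multipart upload handled by boto3
    return key

# example: upload_result("task-001", "/workspace/weights.pt")
```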
In summary, by building on a distributed defect detection system, the embodiments of the present application improve the performance of the detection system and the level of automation while lowering the technical threshold and cost of use. At the same time, the system performs well in terms of scalability and growth: the architecture and the number of cluster nodes can be adjusted quickly, so the system can adapt to different inspection requirements in industrial production.
Referring to fig. 4, an embodiment of the present invention provides an appearance defect detection method, which is applied to a client, and includes the following steps:
Step S410: acquiring training images, wherein the training images comprise a first image and a second image, the first image comprising a defect camera image and the second image comprising an image obtained through white light interference processing;
Step S420: performing data registration and augmentation on the training images and sending them to the server, the server performing model training according to the augmented images to obtain a model weight file;
Step S430: updating the weights of the appearance defect detection model according to the weight file returned by the server;
Step S440: performing appearance defect detection on the acquired real-time product images through the weight-updated appearance defect detection model.
The content of the system embodiment of the invention is applicable to the method embodiment; the functions specifically implemented by the method embodiment are the same as those of the system embodiment, and the beneficial effects achieved are the same as those achieved by the system.
An embodiment of the present invention provides a storage medium in which a computer-executable program for implementing the appearance defect detection method shown in fig. 4 when executed by a processor is stored.
The content of the method embodiment of the invention is applicable to the storage medium embodiment, the specific function of the storage medium embodiment is the same as that of the method embodiment, and the achieved beneficial effects are the same as those of the method.
Furthermore, embodiments of the present invention provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device may read the computer instructions from the computer-readable storage medium, and execute the computer instructions to cause the computer device to perform the appearance defect detection method shown in fig. 4.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of one of ordinary skill in the art without departing from the spirit of the present invention. Furthermore, embodiments of the invention and features of the embodiments may be combined with each other without conflict.

Claims (10)

1. An appearance defect detection system, comprising:
a client, configured to acquire training images, wherein the training images comprise a first image and a second image, the first image comprising a defect camera image and the second image comprising an image obtained through white light interference processing; to perform data registration and augmentation on the training images and then send them to a server; to receive a weight file returned by the server and update the weights of an appearance defect detection model according to the weight file; and to perform appearance defect detection on acquired real-time product images through the weight-updated appearance defect detection model;
and the server, configured to perform model training according to the augmented images to obtain a model weight file, and to send the model weight file to the client.
2. The appearance defect detection system of claim 1, wherein the server comprises a business application layer for performing model tuning, model training, distributed training, user login and registration, log viewing, result viewing, publishing data sets or publishing models.
3. The appearance defect detection system of claim 2, wherein the server further comprises a scheduling layer for packaging tasks, scheduling tasks, submitting tasks, or updating task states.
4. An appearance defect detection system according to claim 3, wherein said packaging task comprises:
encapsulating tasks submitted through a user interface;
and inserting the encapsulated tasks into a queue to be scheduled, wherein the tasks in the queue to be scheduled are ordered according to priority.
5. An appearance defect detection system according to claim 3 wherein said submitting task comprises:
submitting a target task object to a cluster, wherein the cluster comprises a Kubernetes cluster, and the target task object is a task object that can be recognized by the Kubernetes cluster.
6. The appearance defect detection system of claim 5, wherein said Kubernetes cluster comprises an executor and an adapter;
the executor is used for receiving task object parameters, parsing the task object parameters, and extracting a data set acquisition address and a file acquisition address; acquiring a data set file according to the data set acquisition address and acquiring an execution file according to the file acquisition address; and storing the data set file and the execution file locally in the container;
the adapter is used for executing tasks according to the data set files and the execution files.
7. The appearance defect detection system of claim 6, wherein the server further comprises a data storage layer and a cluster layer, the data set file and the execution file are stored in the data storage layer, and the Kubernetes cluster is distributed in the cluster layer.
8. The appearance defect detection system of claim 7, wherein the server further comprises a mirror layer for providing an environment required to perform tasks.
9. An appearance defect detection method, wherein the method is applied to a client, and the method comprises the following steps:
acquiring a training image, wherein the training image comprises a first image and a second image, the first image comprises a defect camera image, and the second image comprises an image obtained through white light interference processing;
performing data registration and augmentation on the training images and sending them to a server, the server performing model training according to the augmented images to obtain a model weight file;
updating the weight of the appearance defect detection model according to the weight file returned by the server;
and performing appearance defect detection on the acquired real-time product images through the weight-updated appearance defect detection model.
10. A storage medium having stored therein a computer executable program for implementing the appearance defect detection method of claim 9 when executed by a processor.
Application CN202310208118.2A, priority date 2023-03-06, filed 2023-03-06: Appearance defect detection system, method and storage medium. Published as CN116309406A (en); status: Withdrawn.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202310208118.2A | 2023-03-06 | 2023-03-06 | Appearance defect detection system, method and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202310208118.2A | 2023-03-06 | 2023-03-06 | Appearance defect detection system, method and storage medium

Publications (1)

Publication Number | Publication Date
CN116309406A | 2023-06-23

Family

ID=86821744

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202310208118.2A | Appearance defect detection system, method and storage medium (published as CN116309406A) | 2023-03-06 | 2023-03-06

Country Status (1)

Country | Link
CN (1) | CN116309406A (en)


Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
WW01 | Invention patent application withdrawn after publication

Application publication date: 20230623