CN112801959B - Auxiliary assembly system based on visual feature recognition

Info

Publication number
CN112801959B
CN112801959B (application CN202110061685.0A)
Authority
CN
China
Prior art keywords
information
assembly
image
product
edge server
Prior art date
Legal status
Active
Application number
CN202110061685.0A
Other languages
Chinese (zh)
Other versions
CN112801959A (en)
Inventor
万加富
谭劲标
夏丹
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date
2021-01-18
Filing date
2021-01-18
Publication date
2023-08-22
Application filed by South China University of Technology (SCUT)
Priority to CN202110061685.0A
Publication of CN112801959A
Application granted
Publication of CN112801959B

Classifications

    • G06T7/0004: Industrial image inspection (G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
    • G06N3/045: Neural networks; combinations of networks
    • G06N3/08: Neural networks; learning methods
    • G06T7/11: Segmentation; region-based segmentation
    • G06T2207/20081: Training; learning
    • G06T2207/20152: Watershed segmentation
    • G06T2207/30108: Industrial image inspection
    • G06T2207/30164: Workpiece; machine component
    • Y02P90/30: Computing systems specially adapted for manufacturing

Abstract

The invention discloses an auxiliary assembly system based on visual feature recognition, comprising: a cloud server, used for entering in advance the product's process flow information, three-dimensional model information, and process steps; an edge server, used for receiving the information from the cloud server, comparing the data fed back by the data processing unit with the repository model information to determine the current product assembly state, and automatically triggering and executing the corresponding functional strategy; a data processing unit, used for obtaining the position images captured by the camera group, identifying and marking newly installed parts by differential comparison, feeding the result back to the edge server, and responding to the corresponding functional strategy; and a display, used for showing the functional strategies issued by the data processing unit. The invention realizes intelligent detection and recognition of key components and, through deep learning, improves the accuracy with which they are automatically recognized.

Description

Auxiliary assembly system based on visual feature recognition
Technical Field
The invention relates to the field of intelligent manufacturing, in particular to an auxiliary assembly system based on visual feature recognition.
Background
In the field of automobile manufacturing equipment, products are updated rapidly, come in numerous varieties, and are markedly non-standard, which places high demands on the technical level of front-line workers. Moreover, for products with complex assemblies and strict requirements, missing parts and wrongly installed parts occur frequently, so the product qualification rate cannot be improved.
As company business develops, most enterprises see a sharp increase in product variety; product complexity and precision keep rising, the skill level required of workers rises with them, and the losses caused by workers' assembly errors grow year by year. Cultivating and passing on worker skills is therefore becoming a focus of enterprise attention.
Deep learning and visual feature recognition technologies are now relatively mature, but the automobile assembly field lacks a system that integrates these technologies to assist assembly. Aiming at this gap, the invention discloses an auxiliary assembly system that uses deep learning and visual feature recognition to improve workers' assembly quality and reduce the enterprise losses caused by limited worker skill levels.
Disclosure of Invention
In order to overcome the defects and shortcomings in the prior art, the invention provides an auxiliary assembly system based on visual feature recognition.
The invention adopts the following technical scheme:
an auxiliary assembly system based on visual feature recognition, comprising:
a cloud server, used for entering in advance the product's process flow information, three-dimensional model information, and process steps;
an edge server, used for receiving the information from the cloud server, comparing the data fed back by the data processing unit with the repository model information to determine the current product assembly state, and automatically triggering and executing the corresponding functional strategy;
a data processing unit, used for obtaining the position images captured by the camera group, identifying and marking newly installed parts by differential comparison, feeding the result back to the edge server, and responding to the corresponding functional strategy;
and a display, used for showing the functional strategies issued by the data processing unit.
Further, the corresponding functional strategy includes:
identifying the process progress, prompting the next process step, and feeding back and displaying the result of the current step, including whether the assembly is qualified: if qualified, 'Pass' is displayed; otherwise 'Failed, please recheck' is displayed together with the reason for the failure;
uploading the image information and the corresponding process information to the edge server for storage;
deriving the marked assembly position and angle from the mark's position in the image by the principle of similar triangles, driving the camera group to rotate to the corresponding angle, and focusing it on the marked newly installed part so that the mark sits at the image centre.
Further, the data processing unit uploads the marked image, or the image captured after refocusing, to the edge server; the edge server crops the image around the installed parts, removes the redundant portions to shrink the image matrix, stores it, identifies the specific parts, checks the assembly result, and feeds the check result back to the data processing unit.
Further, the edge server stores the received images in a Pascal VOC database, classified by part type and specific name, trains a convolutional neural network on the stored images to obtain a standard model, and builds a dynamic link library to accelerate image recognition and processing.
Further, the training includes:
an image preprocessing operation, specifically filtering, noise reduction, and data enhancement of the captured images;
a segmentation and extraction operation, which uses the OpenCV library and the watershed algorithm to segment the images, removing the unnecessary background or extracting a small image matrix containing the required part.
Further, the process steps of the product comprise strictly sequential non-parallel processes and order-free parallel processes, as well as critical-component mounting processes and non-critical-component mounting processes.
Further, judging the current product assembly state specifically means determining, from the parallel-process information configured in the system, whether the current process step has steps that can run in parallel with it; if so, the display prompts the worker with the other steps that can currently be performed at the same time, otherwise no prompt is given. When a process step is completed, the system feeds back in real time whether the assembly is qualified and, if not, where the problem lies.
Further, the camera group comprises two cameras, located on either side of the working platform, used for obtaining the workpiece assembly state on the working platform.
Further, the data processing unit sets the sampling frequency of the camera group according to the criticality of the process step.
Further, the automatic triggering specifically decides, according to the parallel and non-parallel process characteristics of the process flow, the moment at which the assembly result is detected, and designs the event trigger timing accordingly.
The invention has the beneficial effects that:
the intelligent assembly auxiliary system is based on visual computing, intelligent detection and identification of the key parts are realized, the automatic identification accuracy of the key parts is improved through deep learning, and the intelligent assembly auxiliary system is formed.
The invention reduces the working difficulty for unskilled workers, helps factories cultivate new technical workers and improve product quality, and ultimately raises both factory returns and worker skill levels.
Drawings
FIG. 1 is a block diagram of a system architecture of the present invention;
FIG. 2 is a schematic diagram of a neural network used in the present invention;
FIG. 3 is a schematic diagram of the work platform construction of the present invention;
FIG. 4 is a schematic diagram of a camera module according to the present invention;
FIG. 5 (a) and FIG. 5 (b) are schematic diagrams of camera ranging according to the present invention;
FIG. 6 is a schematic diagram of a binocular vision ranging reconstruction perspective;
FIG. 7 (a) is a schematic diagram of a normal process flow and process priority according to an embodiment of the invention;
FIG. 7 (b) is a schematic diagram of an automatic decision-making process flow according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but embodiments of the present invention are not limited thereto.
Examples
As shown in FIG. 1, an auxiliary assembly system based on visual feature recognition comprises:
The cloud server is used for entering in advance the product's process flow information, three-dimensional model information, and process steps. The process steps include strictly sequential non-parallel processes and order-free parallel processes, as well as critical-component and non-critical-component mounting processes. The cloud server transmits this data to the edge server of the plant below it.
The edge server is used for receiving the information from the cloud server, autonomously allocating production tasks according to production resources, establishing an information channel, and sending the relevant model images and process assistance information to the designated data processing unit; it also compares the data fed back by the data processing unit with the repository model information to determine the current product assembly state, and automatically triggers and executes the corresponding functional strategy.
The event trigger timing works as follows: from the working hours configured for each process step, the system sets the camera sampling frequency, prompts the content of the next step a certain time in advance, and sets the start and end times at which the camera group is triggered to capture the progress of the current step. (These times do not conflict with the camera sampling frequency.)
The automatic decision means that the system determines, from the step priorities configured in the process flow, which process steps are currently available, and uses this to prompt the content of the next step or of the several steps that may be performed next.
The system adopts an edge computing architecture: the edge server automatically obtains the product's process step information and model information from the cloud server and stores them locally. The camera group and the display are controlled directly by the data processing unit; the images collected by the cameras are lightly processed and filtered by the data processing unit before being transmitted to the factory's edge server, which reduces the data volume and improves real-time performance. The edge server recognizes the image information collected by the cameras and manages all equipment information.
The edge server directly controls the several data processing units connected to it and executes the corresponding strategy tasks according to the feedback received from those units, thereby meeting the system's real-time requirements.
The data processing unit is connected to the edge server, the display, and the camera group through the data communication module.
As shown in FIG. 3 and FIG. 4, the camera group specifically comprises two cameras, each with a resolution of 8 megapixels and an automatically adjustable focal length.
Each camera is supported by a vertical column and a horizontal support arm mounted on one side of the column's upper end. Gears inside the column and the arm are driven by servo motors, allowing the camera to rotate up and down and left and right so that every position on the worktable can be photographed.
The repository model comparison adopts visual feature recognition based on a convolutional neural network together with Pascal VOC image storage: the trained neural network model recognizes the captured and segmented images to obtain the current assembly state information, which then drives the subsequent operations.
The strategies are as follows:
1. Compute and prompt the next process step from the recognized process progress, and simultaneously feed back and display the result of the current step: if the assembly is qualified, 'Pass' is displayed; if it is detected as unqualified, 'Failed, please recheck' is displayed together with the reason for the failure.
2. Upload the image information and the corresponding process information to the server so that the server can store the data by category.
3. Determine the marked assembly position from the mark's location in the image using similar triangles, as shown in FIG. 5 (a) and FIG. 5 (b):

working distance (WD) : field of view = focal length : CCD sensor size

From this proportion the marked assembly position and angle (LD and θ) can be calculated. The camera group is then driven to rotate to the corresponding angle and focused on the marked newly installed part, keeping the mark at the centre of the new image so that a clearer picture is captured for accurately analysing the assembly result.
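To make the proportion concrete, here is a minimal Python sketch that turns a marked pixel position into the lateral offset LD and pan angle θ used to re-centre the camera; every numeric constant (sensor width, focal length, working distance, frame width) is an illustrative assumption, not a value from the patent.

```python
import math

SENSOR_WIDTH_MM = 6.4        # assumed CCD width
FOCAL_LENGTH_MM = 8.0        # assumed lens focal length
WORKING_DISTANCE_MM = 500.0  # assumed distance WD from lens to work plane
IMAGE_WIDTH_PX = 3264        # assumed pixel width of an 8-megapixel frame

def field_of_view_mm(wd_mm: float) -> float:
    """WD : FOV = f : sensor  =>  FOV = WD * sensor / f."""
    return wd_mm * SENSOR_WIDTH_MM / FOCAL_LENGTH_MM

def mark_offset_and_angle(mark_x_px: float, wd_mm: float):
    """Convert a marked pixel column into a lateral offset LD (mm) and the
    pan angle theta (degrees) that would put the mark at the image centre."""
    mm_per_px = field_of_view_mm(wd_mm) / IMAGE_WIDTH_PX
    ld = (mark_x_px - IMAGE_WIDTH_PX / 2) * mm_per_px
    theta = math.degrees(math.atan2(ld, wd_mm))
    return ld, theta

ld, theta = mark_offset_and_angle(mark_x_px=2450, wd_mm=WORKING_DISTANCE_MM)
print(f"offset LD = {ld:.1f} mm, pan angle = {theta:.2f} deg")
```

The same proportion applied along the vertical axis gives the tilt angle; the servo motors then rotate the camera by (pan, tilt) before refocusing.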
As shown in FIG. 6, the system uses binocular vision: a feature-matching algorithm reconstructs a three-dimensional view from the images captured by the two cameras, and the reconstruction is compared with the model to judge the assembly result. The assembly result itself is judged by detecting the types, numbers, and positions of the parts in the image; for an unqualified product, the edge server feeds the reason for the failure back to the data processing unit so that the technician can correct the error.
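A minimal sketch of that binocular step with OpenCV is shown below; random arrays stand in for the two rectified camera frames, and the reprojection matrix Q is a placeholder (in practice it comes from stereo calibration via cv2.stereoRectify), so this is an assumed outline rather than the patent's implementation.

```python
import cv2
import numpy as np

# Stand-ins for the rectified left/right frames from the two cameras.
left = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
right = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

# Semi-global block matching produces a disparity map (fixed-point, /16).
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Reproject disparity to a 3-D point cloud for comparison with the model.
Q = np.eye(4, dtype=np.float32)   # placeholder reprojection matrix
points_3d = cv2.reprojectImageTo3D(disparity, Q)
print(points_3d.shape)            # (480, 640, 3)
```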
The data processing unit obtains the position images captured by the camera group, identifies and marks newly installed parts by differential comparison, judges the progress of the current installation step, feeds the result back to the edge server, and responds to the corresponding functional strategy.
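A minimal sketch of this differential comparison, assuming two grayscale frames of the station taken before and after a part is installed (the threshold and minimum-area values are illustrative assumptions):

```python
import cv2
import numpy as np

def mark_new_parts(prev: np.ndarray, curr: np.ndarray) -> np.ndarray:
    """Subtract the previous frame from the current one, localise the newly
    installed part, and draw a marker box around it."""
    diff = cv2.absdiff(curr, prev)                     # pixel-wise change
    diff = cv2.GaussianBlur(diff, (5, 5), 0)           # suppress sensor noise
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    marked = cv2.cvtColor(curr, cv2.COLOR_GRAY2BGR)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:                   # ignore tiny speckles
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(marked, (x, y), (x + w, y + h), (0, 0, 255), 2)
    return marked                                      # sent to the edge server

# Usage with synthetic stand-in frames:
before = np.zeros((480, 640), dtype=np.uint8)
after = before.copy()
after[200:260, 300:380] = 255                          # "newly installed part"
cv2.imwrite("marked.png", mark_new_parts(before, after))
```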
The data processing unit uploads the marked image, or the image captured after refocusing, to the edge server; the edge server crops the image around the installed parts, removes the redundant portions to shrink the image matrix, stores it, identifies the specific parts, checks the assembly result, and feeds the check result back to the data processing unit.
The edge server stores the cropped, reduced images in a Pascal VOC database, classified by part type and specific name. A convolutional neural network (CNN) is trained on the stored images to obtain a standard model, and a dynamic link library is built to accelerate image recognition and processing.
The convolutional neural network comprises an input layer, hidden layers, and an output layer. As shown in FIG. 2, the hidden part consists of several layers, each made up of many neurons; neurons within a layer are independent and unconnected, while each neuron connects to all neurons in the next layer, forming the network. The training proceeds in two stages. First, the captured images are preprocessed: based on the OpenCV library and the PyTorch framework, the images are filtered and noise-reduced, data enhancement is applied, and contrast is improved; the watershed algorithm in OpenCV then segments the images, removing the unnecessary background or extracting a small image matrix containing the required part. Second, feature extraction and training are performed on all picture samples with a multi-layer convolutional neural network using small 3×3 kernels, yielding the standard model.
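A minimal sketch of the preprocessing and watershed segmentation just described, using OpenCV; the filter settings, thresholds, and synthetic test image are illustrative assumptions.

```python
import cv2
import numpy as np

def segment_part(img: np.ndarray) -> np.ndarray:
    """Filter, enhance, and watershed-segment one frame, then crop the
    bounding box of the segmented region to shrink the image matrix."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.bilateralFilter(gray, 9, 75, 75)        # noise reduction
    gray = cv2.equalizeHist(gray)                      # contrast enhancement

    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    sure_bg = cv2.dilate(binary, kernel, iterations=3)
    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
    sure_fg = sure_fg.astype(np.uint8)
    unknown = cv2.subtract(sure_bg, sure_fg)

    _, markers = cv2.connectedComponents(sure_fg)      # seed regions
    markers = markers + 1                              # background label -> 1
    markers[unknown == 255] = 0                        # unknown region -> 0
    markers = cv2.watershed(img, markers)

    ys, xs = np.where(markers > 1)                     # pixels of the part
    return img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

# Usage with a synthetic dark "part" on a light background:
frame = np.full((240, 320, 3), 200, dtype=np.uint8)
cv2.circle(frame, (160, 120), 40, (30, 30, 30), -1)
crop = segment_part(frame)
```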
With the trained standard model, feeding an image into the network yields its class, which tells the system whether the current image contains the target part and what type it is, and hence what the current process content and completion status are.
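A minimal PyTorch sketch of such a classifier follows (PyTorch is the framework the text names); the layer sizes, class count, and checkpoint name are illustrative assumptions, not the patent's actual standard model.

```python
import torch
import torch.nn as nn

class PartNet(nn.Module):
    """Small multi-layer CNN built from 3x3 kernels, as described above."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = PartNet().eval()
# In deployment the trained "standard model" weights would be loaded here,
# e.g. model.load_state_dict(torch.load("standard_model.pt")).

crop = torch.rand(1, 3, 128, 128)     # stand-in for a segmented image crop
with torch.no_grad():
    part_class = model(crop).argmax(dim=1).item()
print("detected part class:", part_class)
```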
The data processing unit sets the camera sampling frequency according to the criticality of the process step: for a fine, critical process link the sampling frequency is set to 10 images per minute; for a non-critical link it is set to 3 images per minute.
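Read as code, the rule might look like the sketch below; the two frequencies come from the text, while the per-step criticality table is a hypothetical example.

```python
# Frames per minute by process criticality, as stated above.
SAMPLING_PER_MINUTE = {"critical": 10, "non_critical": 3}

# Hypothetical criticality labels for the example steps A-F.
PROCESS_CRITICALITY = {"A": "critical", "B": "non_critical", "C": "critical",
                       "D": "non_critical", "E": "non_critical", "F": "critical"}

def sampling_interval_s(step: str) -> float:
    """Seconds between camera captures for a given process step."""
    return 60.0 / SAMPLING_PER_MINUTE[PROCESS_CRITICALITY[step]]

print(sampling_interval_s("A"))   # 6.0 s between frames on a critical step
print(sampling_interval_s("B"))   # 20.0 s on a non-critical step
```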
Specifically, after the data processing unit identifies the progress of the current process step, it determines from the configured parallel-step information whether any steps can run in parallel with it. If so, the display prompts the other steps that can currently be performed at the same time; otherwise no prompt is given. When a step is completed, the system feeds back in real time whether the assembly is qualified and, if not, where the problem lies.
A concrete example follows.
As shown in FIG. 7 (a) and FIG. 7 (b), the assembly's process steps are A, B, C, D, E, F in sequence, with priorities set on 3 levels (the higher the number, the lower the priority). Steps B, D, and E are parallel steps, meaning they can be performed simultaneously; steps C and F share the next priority level. In other words, B, D, and E form one parallel group and C and F another. However, C may only start after B is completed, and F only after E is completed: a lower-priority step must wait for its preceding higher-priority step to finish, but may run simultaneously with the higher-priority steps that follow it.
Non-parallel processes are those processes or steps that must be performed in a strict sequence.
For example, after the current assembly step A is completed, the data processing unit checks whether it is qualified; once it is, the next steps are prompted on the display, and since steps B, D, and E form a parallel group, all three may proceed at once. When step B is completed, step C may begin, so if D and E are still unfinished, C, D, and E can run simultaneously. Although C and F share a priority level, F may only start once both D and E are completed.
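A minimal sketch of this availability rule: a step may be prompted as soon as all of its prerequisite steps are finished. The dependency table encodes the ABCDEF example above and is an assumption drawn from the text, not a structure taken from the patent.

```python
# Prerequisites per step: B, D, E run in parallel after A; C is gated by B;
# F is gated by D and E (as in the example above).
PREREQS = {
    "A": set(),
    "B": {"A"}, "D": {"A"}, "E": {"A"},   # priority-2 parallel group
    "C": {"B"},                           # priority-3, waits only for B
    "F": {"D", "E"},                      # priority-3, waits for D and E
}

def available_steps(done):
    """Steps that may currently be prompted on the display."""
    return sorted(s for s, pre in PREREQS.items()
                  if s not in done and pre <= done)

print(available_steps({"A"}))                 # ['B', 'D', 'E']
print(available_steps({"A", "B"}))            # ['C', 'D', 'E'] run in parallel
print(available_steps({"A", "B", "D", "E"}))  # ['C', 'F']
```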
The above embodiment illustrates how the system operates after a process judgment is made and which strategy it then applies. The invention not only feeds the assembly result back to the worker in time and points out any problem, but also guides the worker's next installation operation: it automatically judges the worker's current progress from the images collected by the cameras and gives timely feedback and prompts, with no need for the worker to press extra buttons or reposition the workpiece. The whole procedure is completed automatically, without human intervention.
The embodiments described above are preferred embodiments of the present invention, but the embodiments of the invention are not limited to them; any other change, modification, substitution, combination, or simplification that does not depart from the spirit and principle of the invention is an equivalent replacement and is included within the scope of the invention.

Claims (6)

1. An auxiliary assembly system based on visual feature recognition, comprising:
the cloud server, used for entering in advance the product's process flow information, three-dimensional model information, and process steps;
the edge server, used for receiving the information from the cloud server, comparing the data fed back by the data processing unit with the repository model information to determine the current product assembly state, and automatically triggering and executing the corresponding functional strategy;
the data processing unit, used for obtaining the position images captured by the camera group, identifying and marking newly installed parts by differential comparison, feeding the result back to the edge server, and responding to the corresponding functional strategy;
the display, used for displaying the functional strategies issued by the data processing unit;
wherein the corresponding functional strategy includes:
identifying the process progress, prompting the next process step, and feeding back and displaying the result of the current step, including whether the assembly is qualified: if qualified, 'Pass' is displayed; otherwise 'Failed, please recheck' is displayed together with the reason for the failure;
uploading the image information and the corresponding process information to the edge server for storage;
deriving the marked assembly position and angle from the mark's position in the image by the principle of similar triangles, driving the camera group to rotate to the corresponding angle, and focusing it on the marked newly installed part so that the mark sits at the image centre;
the data processing unit uploads the marked image, or the image captured after refocusing, to the edge server; the edge server crops the image around the installed parts, removes the redundant portions to shrink the image matrix, stores it, identifies the specific parts, checks the assembly result, and feeds the check result back to the data processing unit;
the edge server stores the received images in a Pascal VOC database, classified by part type and specific name, trains a convolutional neural network on the stored images to obtain a standard model, and builds a dynamic link library to accelerate image recognition and processing;
the training includes:
an image preprocessing operation, specifically filtering, noise reduction, and data enhancement of the captured images;
a segmentation and extraction operation, which uses the OpenCV library and the watershed algorithm to segment the images, removing the unnecessary background or extracting a small image matrix containing the required part.
2. The auxiliary assembly system of claim 1, wherein the process steps of the product include strictly sequential non-parallel processes, order-free parallel processes, critical-component mounting processes, and non-critical-component mounting processes.
3. The auxiliary assembly system of claim 1, wherein judging the current product assembly state specifically means determining, from the parallel-process information configured in the system, whether the current process step has steps that can run in parallel with it; if so, the display prompts the worker with the other steps that can currently be performed simultaneously, otherwise no prompt is given; when a process step is completed, the system feeds back in real time whether the assembly is qualified and, if not, where the problem lies.
4. The auxiliary assembly system of claim 1, wherein the camera group comprises two cameras, located on either side of the working platform, for obtaining the workpiece assembly state on the working platform.
5. The auxiliary assembly system of claim 1, wherein the data processing unit sets the sampling frequency of the camera group according to the criticality of the process step.
6. The auxiliary assembly system according to any of claims 1-5, wherein the automatic triggering specifically decides, according to the parallel and non-parallel process characteristics of the process flow, the moment at which the assembly result is detected, and designs the event trigger timing accordingly.
Application CN202110061685.0A, filed 2021-01-18 (priority 2021-01-18): Auxiliary assembly system based on visual feature recognition. Granted as CN112801959B (Active).

Priority Application (1)

CN202110061685.0A, priority date 2021-01-18, filing date 2021-01-18: Auxiliary assembly system based on visual feature recognition

Publications (2)

CN112801959A, published 2021-05-14
CN112801959B (grant publication), published 2023-08-22

Family

Family ID: 75810035

Family application: CN202110061685.0A (filed 2021-01-18), Auxiliary assembly system based on visual feature recognition, granted as CN112801959B, status Active

Country: China (CN)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113450356B (en) * 2021-09-01 2021-12-03 蘑菇物联技术(深圳)有限公司 Method, apparatus, and storage medium for recognizing mounting state of target component
CN114841952B (en) * 2022-04-28 2024-05-03 华南理工大学 Cloud-edge cooperative retinopathy of prematurity detection system and detection method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11132533B2 (en) * 2017-06-07 2021-09-28 David Scott Dreessen Systems and methods for creating target motion, capturing motion, analyzing motion, and improving motion

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019031369A1 (en) * 2017-08-07 2019-02-14 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Coding device, decoding device, coding method and decoding method
CN109905675A (en) * 2019-03-13 2019-06-18 武汉大学 A kind of mine personnel monitoring system based on computer vision and method
CN110611793A (en) * 2019-08-30 2019-12-24 广州奇化有限公司 Supply chain information acquisition and data analysis method and device based on industrial vision
CN112132796A (en) * 2020-09-15 2020-12-25 佛山读图科技有限公司 Visual detection method and system for improving detection precision by means of feedback data autonomous learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Task Offloading for Mobile Edge Computing Based on Deep Reinforcement Learning; Lu Haifeng, Gu Chunhua, Luo Fei, Ding Weichao, Yang Ting, Zheng Shuai; Journal of Computer Research and Development (07), pp. 195-210 *

Also Published As

Publication number Publication date
CN112801959A (en) 2021-05-14


Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant