US12260540B2 - Material completeness detection method and apparatus, and storage medium - Google Patents
Material completeness detection method and apparatus, and storage medium
- Publication number
- US12260540B2 (Application US17/749,195)
- Authority
- US
- United States
- Prior art keywords
- detection result
- completeness
- feature layer
- target object
- production line
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/418—Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
- G05B19/41875—Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM] characterised by quality surveillance of production
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/0008—Industrial image inspection checking presence/absence
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/809—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
- G06V10/811—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data the classifiers operating on different input data, e.g. multi-modal recognition
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/32—Operator till task planning
- G05B2219/32178—Normal and correction transferline, transfer workpiece if fault
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/32—Operator till task planning
- G05B2219/32335—Use of ann, neural network
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/24—Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30164—Workpiece; Machine component
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/06—Recognition of objects for industrial automation
Definitions
- the present disclosure relates to the technical field of material detection, and in particular to a material completeness detection method and apparatus, and a storage medium.
- the present disclosure provides a material completeness detection method and apparatus, and a storage medium.
- An embodiment of the present disclosure provides a material completeness detection method, for detecting whether materials of a target object in a physical production line are complete, and including: inputting an image of the target object in the physical production line into a material completeness detection algorithm to acquire a first detection result; inputting a virtual model of the target object in a virtual production line into the material completeness detection algorithm to acquire a second detection result, where the virtual production line is a digital twin (DT) of the physical production line; and acquiring a material completeness detection result of the target object based on the first detection result and the second detection result.
- DT digital twin
- the material completeness detection result may include that the materials of the target object are incomplete; and when both the first detection result and the second detection result indicate that the materials of the target object are complete, the material completeness detection result may include that the materials of the target object are complete.
- the method further may include: controlling the physical production line to transfer the target object to an unqualified zone when the material completeness detection result includes that the materials of the target object are incomplete.
- the method further may include: controlling the physical production line to convey the target object to a next process when the material completeness detection result includes that the materials of the target object are complete.
- when the material completeness detection result includes that the materials of the target object are incomplete, the material completeness detection result further may include a type and/or a position of a missing material.
- the material completeness detection algorithm may include a backbone feature extraction network, configured to receive the image, and extract a backbone feature of the image to generate a 160*160 feature layer, an 80*80 feature layer, a 40*40 feature layer, and a 20*20 feature layer; an enhanced feature extraction network, configured to extract an enhanced feature from the input 160*160 feature layer, 80*80 feature layer, 40*40 feature layer and 20*20 feature layer to generate a 160*160*128 enhanced feature layer, an 80*80*256 enhanced feature layer, a 40*40*512 enhanced feature layer, and a 20*20*1,024 enhanced feature layer; and an output network, configured to output a detection result based on the 160*160*128 enhanced feature layer, the 80*80*256 enhanced feature layer, the 40*40*512 enhanced feature layer, and the 20*20*1,024 enhanced feature layer.
- the enhanced feature extraction network may generate the 160*160*128 enhanced feature layer by performing the following operations on the 20*20 feature layer in sequence: up-sampling, connection with the 40*40 feature layer, feature extraction, up-sampling, connection with the 80*80 feature layer, feature extraction, up-sampling, connection with the 160*160 feature layer, and feature extraction.
- the method further may include: establishing a three-dimensional (3D) model of the physical production line and the materials in advance, and generating the DT of the physical production line according to the established 3D model.
- 3D three-dimensional
- the method further may include: displaying the virtual production line and the material completeness detection result in a graphical user interface (GUI).
- GUI graphical user interface
- An embodiment of the present disclosure provides a material completeness detection apparatus, for detecting whether materials of a target object in a physical production line are complete, and including: a first detection module configured to input an image of the target object in the physical production line into a material completeness detection algorithm to acquire a first detection result; a second detection module configured to input a virtual model of the target object in a virtual production line into the material completeness detection algorithm to acquire a second detection result, where the virtual production line is a DT of the physical production line; and a processing module configured to acquire a material completeness detection result of the target object based on the first detection result and the second detection result.
- An embodiment of the present disclosure further provides a computer-readable storage medium.
- the computer-readable storage medium stores a computer program, where the computer program is executed by a processor to implement the above material completeness detection method.
- the embodiments of the present disclosure propose the material completeness detection algorithm by combining the digital twin (DT) technology and the deep learning (DL) technology.
- the material completeness detection algorithm simultaneously detects the material completeness of the target object in the physical production line and the virtual model of the target object in the virtual production line of the physical production line. It finally determines the material completeness of the target object based on the two detection results. In this way, the embodiments of the present disclosure can realize efficient and accurate material completeness detection.
- FIG. 1 is a flowchart of a material completeness detection method according to an embodiment of the present disclosure
- FIG. 2 is a schematic view of a material completeness detection algorithm according to the present disclosure
- FIG. 3 is a schematic view of operation of the material completeness detection method according to the present disclosure.
- FIG. 4 is a structural view of a material completeness detection apparatus according to an embodiment of the present disclosure.
- an embodiment of the present disclosure proposes a material completeness detection solution by combining the digital twin (DT) technology and the deep learning (DL) technology.
- the embodiment of the present disclosure is applicable to scenarios where it is necessary to detect whether the materials of the plates (target objects to be detected) in the production line are complete. For example, in production lines that require plate assembly or welding, before assembly or welding, each plate needs to be detected to determine whether there is a lack of small parts (screws, nuts, bolts, positioning pins, etc.) on the plate.
- the detection aims to detect and deal with the plate with incomplete materials in time, so as to improve assembly or welding efficiency and avoid unnecessary rework or quality problems.
- the DT technology and DL technology are advanced digital and intelligent technologies.
- the DT technology can achieve a high degree of integration of various types of specific physical information. Models established using the DT technology can receive various source data from physical objects and feed the results back to the physical objects.
- the DT technology can display the real environment in the virtual model, and control the physical objects in the virtual model to realize the combination of virtual and real.
- the DL technology is an efficient method for fast and uninterrupted learning.
- the detection efficiency of the DL technology is much higher than that of manual detection, and its detection error rate after training on large-scale data sets is much lower than that of manual detection.
- the DL technology has extremely high detection speed. By establishing DL algorithms for specific objects, the detection efficiency and accuracy can be further improved.
- the embodiment of the present disclosure proposes the material completeness detection solution by combining the DT technology and the DL technology, so as to improve the efficiency and accuracy of material completeness detection in the production line.
- The embodiments of the present disclosure are described below with reference to the drawings.
- FIG. 1 is a flowchart of a material completeness detection method according to an embodiment of the present disclosure.
- the method can be used to detect whether materials (e.g., screws, nuts, bolts, positioning pins, etc.) on a target object (e.g., plate) in a production line are complete.
- materials e.g., screws, nuts, bolts, positioning pins, etc.
- target object e.g., plate
- In Step S 12, the virtual production line is a DT of the physical production line.
- the material completeness detection result includes that the materials of the target object are incomplete.
- the material completeness detection result includes that the materials of the target object are complete. That is, the materials of the target object are considered complete only when the detection results of both the physical production line and the virtual production line indicate that they are complete. In this way, the detection accuracy is improved.
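The fusion rule above amounts to a logical AND over the two detection results. A minimal sketch (the function name and return strings are illustrative, not from the disclosure):

```python
def fuse_results(physical_complete: bool, virtual_complete: bool) -> str:
    """Fuse the physical-line and virtual-line detection results:
    the target object's materials count as complete only when BOTH
    detections report them complete."""
    if physical_complete and virtual_complete:
        return "complete"
    return "incomplete"
```

If either side reports a missing part, the fused result is "incomplete", which is what drives the routing to the unqualified zone described later.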
- when the material completeness detection result includes that the materials of the target object are incomplete, the material completeness detection result further includes a type and/or a position of a missing material.
- the embodiment of the present disclosure proposes the material completeness detection algorithm by combining the DT technology and the DL technology.
- the material completeness detection algorithm simultaneously detects the material completeness of the target object in the physical production line and the virtual model of the target object in the virtual production line of the physical production line. It finally determines the material completeness of the target object based on the two detection results.
- the material completeness detection algorithm is described below.
- the material completeness detection algorithm used in Steps S 10 and S 12 may be, for example, a YOLOX target detection algorithm. Further, an embodiment of the present disclosure proposes an improved YOLOX target detection algorithm to enhance the sensitivity of the material completeness detection algorithm to small or tiny components, thereby further improving the detection effect. Specifically, as shown in FIG. 2 , the material completeness detection algorithm of the embodiment of the present disclosure structurally includes: a backbone feature extraction network 20 , an enhanced feature extraction network 22 , and an output network 24 .
- the backbone feature extraction network 20 is configured to receive the image (such as an image of a plate), and extract a backbone feature of the image to generate a 160*160 feature layer, an 80*80 feature layer, a 40*40 feature layer, and a 20*20 feature layer.
- the backbone feature extraction network may be implemented based on a residual network.
- the enhanced feature extraction network 22 is configured to extract an enhanced feature from the input 160*160 feature layer, 80*80 feature layer, 40*40 feature layer and 20*20 feature layer to generate a 160*160*128 enhanced feature layer, an 80*80*256 enhanced feature layer, a 40*40*512 enhanced feature layer, and a 20*20*1,024 enhanced feature layer.
- the output network 24 is configured to output a detection result based on the 160*160*128 enhanced feature layer, the 80*80*256 enhanced feature layer, the 40*40*512 enhanced feature layer, and the 20*20*1,024 enhanced feature layer.
- the output network 24 may analyze the 160*160*128 enhanced feature layer, the 80*80*256 enhanced feature layer, the 40*40*512 enhanced feature layer, and the 20*20*1,024 enhanced feature layer, respectively, to acquire four detection results, and determine a material completeness detection result based on the four detection results.
- Each of the four detection results may include three parameters, which are configured to indicate whether a material is included, a type of the material, and coordinates of the material.
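Each per-layer result described above can be modeled as a list of (presence, type, coordinates) tuples. The sketch below, with an assumed set of expected part types, shows one way the four per-layer results might be merged into a completeness verdict:

```python
def summarize_detections(layer_results):
    """Collect the materials found across the four per-layer detection
    results and report any expected material that no layer detected.
    The expected-parts set is an assumption for illustration."""
    expected = {"screw", "nut", "bolt", "positioning_pin"}
    found = set()
    for detections in layer_results:      # one list per enhanced feature layer
        for present, mat_type, _coords in detections:
            if present:
                found.add(mat_type)
    missing = expected - found
    return {"complete": not missing, "missing": sorted(missing)}
```

A result with a non-empty `missing` list corresponds to the "type and/or position of a missing material" reported by the method.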
- the enhanced feature extraction network 22 introduces up-sampling from the 80*80 feature layer to the 160*160 feature layer, so as to output the 160*160*128 enhanced feature layer. This design improves the sensitivity to small and medium-sized targets without compromising feature extraction.
- the enhanced feature extraction network 22 generates the 160*160*128 enhanced feature layer by performing the following operations on the 20*20 feature layer in sequence: up-sampling, connection with the 40*40 feature layer, feature extraction, up-sampling, connection with the 80*80 feature layer, feature extraction, up-sampling, connection with the 160*160 feature layer, and feature extraction.
- the 80*80*256 enhanced feature layer is generated by performing the following operations on the 20*20 feature layer in sequence: generating first data by up-sampling, connection with the 40*40 feature layer, feature extraction, up-sampling, and connection with the 80*80 feature layer; and connecting the first data with down-sampled data of the 160*160*128 enhanced feature layer, and performing feature extraction.
- the 40*40*512 enhanced feature layer is generated by performing the following operations on the 20*20 feature layer in sequence: generating second data by up-sampling, connection with the 40*40 feature layer, and feature extraction; and connecting the second data with down-sampled data of the 80*80*256 enhanced feature layer, and performing feature extraction.
- the 20*20*1,024 enhanced feature layer is generated by connecting the 20*20 feature layer with down-sampled data of the 40*40*512 enhanced feature layer, and performing feature extraction.
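The shape arithmetic of the up-sample/connect/extract path can be checked with a small sketch. The backbone channel widths assumed below (128/256/512/1024 for the 160/80/40/20 layers) are not stated in the disclosure; only the spatial sizes and the final enhanced-layer shapes are:

```python
def upsample(shape):
    """Up-sampling doubles the spatial resolution, keeping channels."""
    h, w, c = shape
    return (2 * h, 2 * w, c)

def concat(a, b):
    """Channel-wise connection of two layers with equal spatial size."""
    assert a[:2] == b[:2], "layers must match spatially before connection"
    return (a[0], a[1], a[2] + b[2])

def extract(shape, out_channels):
    """Feature extraction (e.g. a conv block) remaps the channel count."""
    return (shape[0], shape[1], out_channels)

# Backbone outputs; channel widths here are assumptions for illustration.
f160, f80, f40, f20 = (160, 160, 128), (80, 80, 256), (40, 40, 512), (20, 20, 1024)

# Sequence that produces the 160*160*128 enhanced feature layer from the
# 20*20 feature layer, as described above.
x = extract(concat(upsample(f20), f40), 512)    # 40*40 stage
x = extract(concat(upsample(x), f80), 256)      # 80*80 stage
p160 = extract(concat(upsample(x), f160), 128)  # 160*160 stage
```

Walking the path confirms the extra up-sampling stage ends at the stated 160*160*128 output.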
- the improved YOLOX target detection algorithm improves the sensitivity to small and medium-sized targets, making it well suited to detecting missing small parts on the plate, such as screws, nuts, bolts, or positioning pins.
- a data set of complete and incomplete materials is created. After the target materials to be detected are determined, multiple pieces of different types of defective and non-defective materials are held by a fixture, and videos are recorded from multiple angles and under varying conditions. The videos are then input into a frame-taking program, and a path is entered to save the images. One image is captured every 5 frames, and the data set is annotated with a data set annotation tool. The small parts to be detected are marked on each image, and an extensible markup language (XML) file including the position coordinates and type of each part is acquired.
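The frame-taking and annotation steps above can be sketched as follows. The Pascal-VOC-style field names are an assumption; the disclosure only says the XML holds each part's type and position coordinates:

```python
import xml.etree.ElementTree as ET

def frames_to_capture(total_frames, step=5):
    """Indices of the frames the frame-taking program saves
    (one image every `step` frames)."""
    return list(range(0, total_frames, step))

def build_annotation(image_name, parts):
    """Build an XML annotation holding the type and bounding-box
    coordinates of each marked part. Tag names follow the common
    Pascal-VOC layout, which is assumed here."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = image_name
    for name, (xmin, ymin, xmax, ymax) in parts:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = name
        box = ET.SubElement(obj, "bndbox")
        for tag, val in zip(("xmin", "ymin", "xmax", "ymax"),
                            (xmin, ymin, xmax, ymax)):
            ET.SubElement(box, tag).text = str(val)
    return root
```

Writing the file is then `ET.ElementTree(build_annotation(...)).write(path)`, one XML per captured image.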
- XML extensible markup language
- the prepared data set of complete and incomplete materials is input into the established neural network detection algorithm for multiple iterations of training.
- the prepared data set of complete and incomplete materials is randomly divided into a training set and a test set according to a ratio of, for example, 9:1.
- the training set is used to train the model, and the test set is used to verify the accuracy of the model.
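The 9:1 random split can be sketched in a few lines (the fixed seed is only for repeatability and is not part of the described method):

```python
import random

def split_dataset(samples, train_ratio=0.9, seed=0):
    """Randomly divide the annotated samples into a training set and a
    test set at the stated ratio (9:1 by default)."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```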
- Pre-trained weights are imported, and the training set is input into the DL detection algorithm for training.
- the input image is preprocessed.
- the images are formatted as (batch, channel, size), and are converted to a tensor type that can be processed by the neural network.
- the batch size of the training images is selected according to the available computing power.
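The (batch, channel, size) conversion can be illustrated with a pure-Python stand-in (a real pipeline would use the framework's tensor type; the scaling to [0, 1] is an assumed normalization):

```python
def to_chw(image):
    """Convert one H*W*C image (nested lists of 0-255 values) to C*H*W,
    scaling pixel values to [0, 1]."""
    h, w, c = len(image), len(image[0]), len(image[0][0])
    return [[[image[y][x][ch] / 255.0 for x in range(w)] for y in range(h)]
            for ch in range(c)]

def to_batch(images):
    """Stack converted images into (batch, channel, height, width)."""
    return [to_chw(img) for img in images]
```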
- the results after training are tested on the test set.
- the model acquired after many iterations of training is saved.
- The embodiments of the present disclosure are described in detail above with reference to FIGS. 1 and 2 , and the operation of the present disclosure in a real scenario is described below with reference to FIG. 3 .
- the real scenario in the physical production line is completely mapped to the DT.
- cameras, conveyors, workers, and robotic arms on the physical site are mapped to the virtual production line.
- Step 1 The real scenario video stream of the plate in the physical production line and the virtual model of the plate in the virtual production line are acquired, processed (processed into data that can be put into network detection), and input into the material completeness detection algorithm.
- Step 2 The material completeness detection algorithm is operated to detect the material completeness of the plate in the real site and virtual production line.
- Step 3 According to the real and virtual detection results, it is determined whether the plate is qualified, that is, whether the materials are complete. Specifically, when either the real or the virtual detection result indicates that the materials are incomplete, the plate is determined to be unqualified.
- Step 4 When the plate is unqualified, the physical production line is controlled to transfer the plate to an unqualified zone; and otherwise, the physical production line is controlled to convey the plate to a next process.
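Steps 1 through 4 can be condensed into one routine. Here `detect` is a stand-in for the material completeness detection algorithm, and the zone names are illustrative:

```python
def process_plate(frame, virtual_model, detect):
    """One pass of Steps 1-4: run the same detection algorithm on the
    real video frame and on the virtual model, then route the plate."""
    real_ok = detect(frame)            # Step 1-2, physical production line
    virtual_ok = detect(virtual_model) # Step 1-2, virtual production line
    qualified = real_ok and virtual_ok # Step 3: both must pass
    # Step 4: route to the next process or to the unqualified zone
    action = "next_process" if qualified else "unqualified_zone"
    return qualified, action
```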
- GUI graphical user interface
- the DT technology is used to establish a DT of a physical production line.
- This process may be, for example, establishing a three-dimensional (3D) model of the physical production line and the materials in advance, and generating the DT of the physical production line according to the established 3D model.
- the 3D model of the physical production line is established by using digital modeling software.
- UG or PROE drawing software is used to perform 3D modeling of fixtures, normal plates, plates with various defects, robotic arms, conveyor belts, etc.
- the established 3D model is imported into digital simulation software to design a virtual production line. According to completeness detection requirements, the 3D models of the robotic arms, fixtures, clamps, conveyor belts, etc. established with UG or PROE are imported into the process simulation software.
- the following parameters are set in the established virtual production line: the angle and position of the robotic arm to be moved, the state parameters of the tightening and loosening of the fixture, the strength of the clamp, and the transmission speed of the conveyor belt, etc.
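The parameter set above could be captured in a small configuration object; the field names and units below are illustrative, not taken from the simulation software:

```python
from dataclasses import dataclass

@dataclass
class VirtualLineConfig:
    """Parameters configured in the virtual production line."""
    arm_angle_deg: float        # angle the robotic arm moves to
    arm_position: tuple         # target position of the robotic arm
    fixture_clamped: bool       # tightened/loosened state of the fixture
    clamp_force_n: float        # strength of the clamp
    conveyor_speed_mps: float   # transmission speed of the conveyor belt
```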
- the virtual production line is formed for material completeness detection.
- a 3D model of a detection platform is established in the digital modeling software, and the model is imported into the Unity3D software for model rendering.
- a corresponding scenario is designed, and the scenario is embedded into the complete production line model.
- UI user interface
- Page design is performed through the designer tool in Pyqt5.
- the required components such as buttons, labels, text boxes, etc. are first added, and then a horizontal or vertical layout is added to each component from the inside out.
- the designed page is converted into a program file, and various event functions are added in the program file through operations similar to pushButton.clicked.connect(self), so as to control the start and pause functions of the detection.
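The event wiring described above follows Qt's signal/slot convention. The sketch below mocks that convention in pure Python to show the pattern; it is not the real PyQt5 API, only the calling shape of `pushButton.clicked.connect(...)`:

```python
class Signal:
    """Minimal stand-in for a Qt signal object."""
    def __init__(self):
        self._slots = []
    def connect(self, slot):
        self._slots.append(slot)
    def emit(self):
        for slot in self._slots:
            slot()

class DetectionPage:
    """Hypothetical detection page wiring start/pause event functions."""
    def __init__(self):
        self.running = False
        self.start_clicked = Signal()
        self.pause_clicked = Signal()
        # Mirrors pushButton.clicked.connect(self.start_detection) etc.
        self.start_clicked.connect(self.start_detection)
        self.pause_clicked.connect(self.pause_detection)
    def start_detection(self):
        self.running = True
    def pause_detection(self):
        self.running = False
```

In the real program file, the same `connect` calls would be made on the buttons generated by the PyQt5 designer tool.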
- a detection program (such as the material completeness detection algorithm) is embedded into a detection module of the constructed DT system.
- the detection module of the DT system is connected to the physical production line.
- in the DT system, according to the real-time display information on the UI page, when the start button is clicked, the physical detection device starts detection; when the pause button is clicked, the corresponding physical detection device pauses. In this way, the combination of virtual and real is realized.
- the material completeness detection is carried out together in the physical production line and the virtual production line.
- the defect information of the plate is acquired, and transmitted to the real production line.
- the detection of the physical production line is immediately stopped, and the material is released from the fixture and transferred to the unqualified zone for manual re-detection.
- the entire real-time image captured by the camera can be displayed in the DT system. Whether parts are missing at each point is displayed on the screen, and corresponding prompt information is given on the detection page. Key statistics are also shown next to the detection window, including the specific part detected and whether the physical object is qualified, so the information can be integrated and analyzed in a timely manner.
- FIG. 4 is a structural diagram of a material completeness detection apparatus according to an embodiment of the present disclosure.
- the material completeness detection apparatus includes a first detection module 40 configured to input an image of a target object in a physical production line into a material completeness detection algorithm to acquire a first detection result; a second detection module 42 configured to input a virtual model of the target object in a virtual production line into the material completeness detection algorithm to acquire a second detection result, where the virtual production line is a DT of the physical production line; and a processing module 44 configured to acquire a material completeness detection result of the target object based on the first detection result and the second detection result.
- an embodiment of the present disclosure further provides a computer-readable storage medium.
- the computer-readable storage medium stores a computer program, where the computer program is executed by a processor to implement the above material completeness detection method.
- the methods in the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or by hardware alone; however, in many cases the former is the better implementation.
- the technical solution of the present disclosure essentially, or a part contributing to the prior art, may be embodied in a form of a software product.
- the computer software product is stored on a storage medium (such as a read-only memory (ROM)/random access memory (RAM), a magnetic disk, or an optical disk), and includes several instructions to enable a terminal device (which may be a mobile phone, a computer, a server, or a network device) to execute the method according to each embodiment of the present disclosure.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Quality & Reliability (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Manufacturing & Machinery (AREA)
- Automation & Control Theory (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims (7)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US17/749,195 US12260540B2 (en) | 2022-05-20 | 2022-05-20 | Material completeness detection method and apparatus, and storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US17/749,195 US12260540B2 (en) | 2022-05-20 | 2022-05-20 | Material completeness detection method and apparatus, and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20230377123A1 US20230377123A1 (en) | 2023-11-23 |
| US12260540B2 true US12260540B2 (en) | 2025-03-25 |
Family
ID=88791810
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/749,195 Active 2043-07-07 US12260540B2 (en) | 2022-05-20 | 2022-05-20 | Material completeness detection method and apparatus, and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| US (1) | US12260540B2 (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115508285B (en) * | 2022-10-25 | 2024-10-01 | Huaiyin Institute of Technology | A device and method for detecting surface defects of steel |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7203147B2 (en) | 2002-04-01 | 2007-04-10 | Sony Corporation | Storage medium initialization method, and recording and reproducing method and apparatus |
| US7852597B2 (en) | 2008-03-19 | 2010-12-14 | Toshiba Storage Device Corporation | Storage medium driving apparatus and control method thereof |
| US9195169B2 (en) | 2013-06-28 | 2015-11-24 | Kyocera Document Solutions Inc. | Image forming apparatus, consumable material unit, and storage medium |
| CN113240798A (en) * | 2021-05-19 | 2021-08-10 | 郑州轻工业大学 | Intelligent material integrity detection and configuration method based on digital twinning and AR |
- 2022-05-20: US application US17/749,195 filed (patent US12260540B2, active)
Also Published As
| Publication number | Publication date |
|---|---|
| US20230377123A1 (en) | 2023-11-23 |
Similar Documents
| Publication | Title |
|---|---|
| CN116109777B (en) | A complementary pseudo-multimodal feature system and method |
| CN110619620A (en) | Method, device and system for locating anomalies causing surface defects, and electronic equipment |
| WO2021164448A1 (en) | Quality abnormality recording method and apparatus, and augmented reality device, system and medium |
| CN106235485B (en) | Garment sample dimensional measurement and analysis system and method |
| CN109638959B (en) | Power equipment remote signaling function debugging method and system based on AR and deep learning |
| CN111080633A (en) | Screen defect detection method and device, terminal equipment and storage medium |
| US12260540B2 (en) | Material completeness detection method and apparatus, and storage medium |
| CN116618878A (en) | Pre-welding process parameter determination method, online welding-quality prediction method, device and storage medium |
| CN113660482A (en) | Automatic testing method and device for an AI camera device or module |
| CN116818769A (en) | Abnormality detection method and system based on machine vision |
| US11647249B2 (en) | Testing rendering of screen objects |
| Banerjee et al. | Object tracking test automation using a robotic arm |
| Stavropoulos et al. | A CPS platform oriented for Quality Assessment in welding |
| CN113326951A (en) | Auxiliary detection device for aircraft outer surface cover screws and method of use |
| CN112380134A (en) | WebUI automatic testing method based on image recognition |
| Supong et al. | PCB Surface Defect Detection Using Defect-Centered Image Generation and Optimized YOLOv8 Architecture |
| CN102023163A (en) | System and method for detecting connectors based on a digital signal processor (DSP) |
| WO2024065189A1 (en) | Method, system, apparatus, electronic device, and storage medium for evaluating a work task |
| Conrad et al. | Deep learning-based error recognition in manual cable assembly using synthetic training data |
| CN106846302B (en) | Detection method for verifying correct tool pickup, and an assessment bench based on this method |
| CN116664509A (en) | Radiographic image defect recognition method, system and equipment for multi-model collaborative decision-making |
| CN112434548B (en) | Video labeling method and device |
| US11150849B2 (en) | Device and method for checking the printing of an article |
| CN114418933A (en) | Friction stir welding diagnosis method based on YOLO, terminal and storage medium |
| Lim et al. | Colour-assisted PCB Inspection System with Hardware Support for Real-time |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | FEPP | Fee payment procedure | ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| | FEPP | Fee payment procedure | ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| | STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION |
| 2022-05-20 | AS | Assignment | Owner name: ZHENGZHOU UNIVERSITY OF LIGHT INDUSTRY, CHINA. ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: LI, HAO; YAN, XINYU; LIU, GEN; AND OTHERS. REEL/FRAME: 060490/0738. Effective date: 20220520 |
| | STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| | STPP | Information on status: patent application and granting procedure in general | PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| | STCF | Information on status: patent grant | PATENTED CASE |