CN118321203A - Robot remote control system and control method - Google Patents

Robot remote control system and control method

Info

Publication number
CN118321203A
Authority
CN
China
Prior art keywords
product
feature
image
product appearance
shallow
Prior art date
Legal status
Pending
Application number
CN202410594995.2A
Other languages
Chinese (zh)
Inventor
陈传阳
陈娟娟
储锦玲
叶慧海
焦健
张伟达
李里
王建
马钰
孙博旸
葛炳南
杨斌
吴凯
尘超然
Current Assignee
Zhonggong Gaoyuan Beijing Automobile Testing Technology Co ltd
Research Institute of Highway Ministry of Transport
Original Assignee
Zhonggong Gaoyuan Beijing Automobile Testing Technology Co ltd
Research Institute of Highway Ministry of Transport
Filing date
Publication date
Application filed by Zhonggong Gaoyuan Beijing Automobile Testing Technology Co ltd and Research Institute of Highway Ministry of Transport
Publication of CN118321203A


Abstract

The application relates to the field of intelligent control, and specifically discloses a robot remote control system and control method. A product appearance image of a product to be inspected is acquired through a camera of a robot and transmitted to a background quality inspection server through a wireless transmission module. At the server, an artificial-intelligence-based image processing and analysis algorithm analyzes the image, so that whether the product to be inspected has surface defects is judged intelligently from the fusion interaction features between the shallow feature information and the deep semantic features of the product appearance image. In response to a quality inspection result indicating a surface defect, the robot is driven to place the product to be inspected in a placement basket for unqualified products.

Description

Robot remote control system and control method
Technical Field
The application relates to the field of intelligent control, and in particular relates to a robot remote control system and a control method.
Background
Modern manufacturing places increasingly high requirements on product quality, and surface defect detection is a key step in ensuring it, especially in industries such as automobiles, electronics, food, and medicine. Detecting defects on the product surface allows quality to be strictly controlled, so that enterprises can find problems in the production process in time, ensure that products meet standards in both appearance and performance, avoid performance damage caused by surface defects, and reduce rework and waste.
However, conventional defect detection of the product surface is generally based on automated systems with simple image processing, for example relying on a fixed threshold to distinguish defective from non-defective areas. Such systems may detect only basic defects, such as obvious anomalies in area, color, or shape, and cannot accurately identify small or complex defects, resulting in inaccurate detection.
Accordingly, a robotic remote control system is desired.
Disclosure of Invention
The present application has been made to solve the above technical problems. Embodiments of the application provide a robot remote control system and control method that acquire a product appearance image of a product to be inspected through a camera of a robot and transmit the image to a background quality inspection server through a wireless transmission module. At the server, an artificial-intelligence-based image processing and analysis algorithm analyzes the image, intelligently judging whether the product to be inspected has surface defects from the fusion interaction features between the shallow feature information and the deep semantic features of the product appearance image. In response to a quality inspection result indicating a surface defect, the robot is driven to place the product to be inspected in a placement basket for unqualified products. In this way, complex product surface defects can be identified and detection accuracy improved, thereby improving the overall quality of the product.
According to one aspect of the present application, there is provided a robot remote control system comprising:
The product appearance image acquisition module is used for acquiring product appearance images of the product to be inspected through a camera of the robot;
The image transmission module is used for transmitting the product appearance image to a background quality inspection server through the wireless transmission module;
the gray processing module is used for carrying out gray processing on the product appearance image at the background quality inspection server so as to obtain a gray product appearance image;
The multi-scale feature extraction module is used for carrying out multi-scale feature extraction on the gray product appearance image at the background quality inspection server so as to obtain a product appearance shallow feature image and a product appearance semantic feature image;
The feature interaction compensation type fusion module is used for enabling the product appearance shallow feature map and the product appearance semantic feature map to pass through an image deep-shallow feature interaction compensation type fusion network at the background quality inspection server to obtain a product deep-shallow fusion interaction feature map as a product deep-shallow fusion interaction feature;
And the quality inspection result generation module is used for obtaining a quality inspection result based on the product deep-shallow fusion interaction feature at the background quality inspection server and generating a control instruction based on the quality inspection result.
In the above robot remote control system, the multi-scale feature extraction module is configured to: and at the background quality inspection server, the gray product appearance image passes through an image multi-scale feature extractor based on a pyramid network to obtain the product appearance shallow feature image and the product appearance semantic feature image.
In the above robot remote control system, the feature interaction compensation type fusion module includes: the up-sampling unit is used for up-sampling the product appearance semantic feature map to obtain an up-sampled product appearance semantic feature map; the global semantic extraction unit is used for extracting product appearance global semantic feature vectors from the up-sampling product appearance semantic feature images; the local semantic extraction unit is used for extracting product appearance local semantic feature vectors from the up-sampling product appearance semantic feature images; the multi-scale feature extraction unit is used for fusing the product appearance global semantic feature vector and the product appearance local semantic feature vector to obtain a multi-scale product appearance semantic weight feature vector; the semantic feature fusion unit is used for fusing the multi-scale product appearance semantic weight feature vector and the product appearance shallow feature map to obtain a product appearance semantic fusion shallow feature map; and the fusion interaction feature unit is used for fusing the product appearance semantic fusion shallow feature map and the product appearance shallow feature map to obtain the product deep-shallow fusion interaction feature map.
In the above robot remote control system, the global semantic extraction unit includes: the global average Chi Huazi unit is used for carrying out global average pooling treatment on each feature matrix along the channel dimension in the appearance semantic feature map of the up-sampling product so as to obtain appearance semantic channel feature vectors of the up-sampling product; and the point convolution coding subunit is used for carrying out point convolution coding on the up-sampling product appearance semantic channel feature vector so as to obtain the product appearance global semantic feature vector.
In the above robot remote control system, the local semantic extraction unit includes: the channel modulation subunit is used for carrying out point convolution coding on the appearance semantic feature map of the up-sampling product to obtain the appearance semantic feature map of the channel modulation up-sampling product; and the two-dimensional convolution coding subunit is used for carrying out two-dimensional convolution coding on the appearance semantic feature map of the channel modulation up-sampling product so as to obtain the appearance local semantic feature vector of the product.
In the above robot remote control system, the semantic feature fusion unit is configured to: and taking the characteristic values of all positions in the multi-scale product appearance semantic weight characteristic vector as weight values, and respectively weighting all corresponding characteristic matrixes of the product appearance shallow characteristic map along the channel dimension to obtain the product appearance semantic fusion shallow characteristic map.
In the above robot remote control system, the quality inspection result generating module includes: the product defect determining unit is used for passing the product deep-shallow fusion interaction feature map through the classifier-based quality inspection result generator at the background quality inspection server to obtain a quality inspection result, wherein the quality inspection result is used for indicating whether the product to be inspected has surface defects or not; and the robot control unit is used for generating, in response to the quality inspection result being that the product to be inspected has surface defects, a first control instruction to the robot, wherein the first control instruction is used for driving the robot to place the product to be inspected in a placement basket for unqualified products.
The robot remote control system further comprises a training module for training the pyramid network-based image multi-scale feature extractor, the image deep-shallow feature interaction compensation type fusion network and the classifier-based quality inspection result generator.
In the above robot remote control system, the training module includes: the training data acquisition unit is used for acquiring training data, wherein the training data comprises a training product appearance image of a product to be inspected and a true value of whether the product to be inspected has surface defects or not; the training image transmission unit is used for transmitting the training product appearance image to a background quality inspection server through a wireless transmission module; the training product appearance image graying unit is used for carrying out gray processing on the training product appearance image at the background quality inspection server to obtain a graying training product appearance image; the training product appearance shallow semantic extraction unit is used for enabling the gray training product appearance image to pass through the pyramid network-based image multi-scale feature extractor at the background quality inspection server so as to obtain a training product appearance shallow feature image and a training product appearance semantic feature image; the training product deep-shallow fusion interaction unit is used for enabling the training product appearance shallow feature map and the training product appearance semantic feature map to pass through the image deep-shallow feature interaction compensation type fusion network at the background quality inspection server so as to obtain a training product deep-shallow fusion interaction feature map; the feature optimization unit is used for performing feature optimization on the training product deep-shallow fusion interaction feature map to obtain an optimized training product deep-shallow fusion interaction feature map; the decoding loss unit is used for enabling the training product deep-shallow fusion interaction feature map to pass through the quality inspection result generator based on the classifier at the background quality inspection server so as to obtain a classification loss function value; and a training unit for training the pyramid network-based image multi-scale feature extractor, the image deep-shallow feature interaction compensation fusion network and the classifier-based quality inspection result generator based on the classification loss function value and through back propagation of gradient descent, wherein the training product deep-shallow fusion interaction feature map is optimized in each iteration process of a model.
According to another aspect of the present application, there is provided a robot remote control method including:
acquiring a product appearance image of a product to be inspected through a camera of the robot;
Transmitting the product appearance image to a background quality inspection server through a wireless transmission module;
Carrying out gray processing on the product appearance image at the background quality inspection server to obtain a gray product appearance image;
Performing multi-scale feature extraction on the gray product appearance image at the background quality inspection server to obtain a product appearance shallow feature image and a product appearance semantic feature image;
in the background quality inspection server, the product appearance shallow feature map and the product appearance semantic feature map are subjected to an image deep-shallow feature interaction compensation type fusion network to obtain a product deep-shallow fusion interaction feature map serving as product deep-shallow fusion interaction features;
And obtaining a quality inspection result based on the product deep-shallow fusion interaction feature at the background quality inspection server, and generating a control instruction based on the quality inspection result.
Compared with the prior art, the robot remote control system and control method provided herein acquire the product appearance image of the product to be inspected through the camera of the robot and transmit it to the background quality inspection server through the wireless transmission module. There, an artificial-intelligence-based image processing and analysis algorithm analyzes the image, so that whether a surface defect exists in the product to be inspected can be judged intelligently from the fusion interaction features between the shallow feature information and the deep semantic features of the product appearance image; when the quality inspection result indicates a surface defect, the robot is driven to place the product in a placement basket for unqualified products. In this way, complex product surface defects can be identified and detection accuracy improved, thereby improving the overall quality of the product.
Drawings
The above and other objects, features, and advantages of the present application will become more apparent from the following detailed description of embodiments of the application with reference to the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the application, are incorporated in and constitute a part of this specification, and serve together with the embodiments to illustrate the application without limiting it. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 is a block diagram of a robot remote control system according to an embodiment of the present application.
Fig. 2 is a schematic architecture diagram of a remote control system for a robot according to an embodiment of the present application.
Fig. 3 is a block diagram of a feature interaction compensation type fusion module in a robot remote control system according to an embodiment of the present application.
Fig. 4 is a block diagram of a global semantic extraction unit in a robot remote control system according to an embodiment of the present application.
Fig. 5 is a block diagram of a training module in a robot remote control system according to an embodiment of the present application.
Fig. 6 is a flowchart of a robot remote control method according to an embodiment of the present application.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
In describing embodiments of the present disclosure, the term "comprising" and its variants should be understood as open-ended, i.e., "including but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "an embodiment" should be understood as "at least one embodiment". The terms "first", "second", and the like may refer to different or the same objects. Other explicit and implicit definitions may also be included below.
It should be noted that references to "a" and "an" in this disclosure are illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
Modern manufacturing places increasingly high requirements on product quality, and surface defect detection is a key step in ensuring it, especially in industries such as automobiles, electronics, food, and medicine. Detecting defects on the product surface allows quality to be strictly controlled, so that enterprises can find problems in the production process in time, ensure that products meet standards in both appearance and performance, avoid performance damage caused by surface defects, and reduce rework and waste.
However, conventional defect detection of the product surface is generally based on automated systems with simple image processing, for example relying on a fixed threshold to distinguish defective from non-defective areas. Such systems may detect only basic defects, such as obvious anomalies in area, color, or shape, and cannot accurately identify small or complex defects, resulting in inaccurate detection.
Based on the above, the technical scheme of the application provides a robot remote control system that collects a product appearance image of a product to be inspected through a camera of a robot and transmits the image to a background quality inspection server through a wireless transmission module. At the server, an artificial-intelligence-based image processing and analysis algorithm analyzes the image, intelligently judging whether the product to be inspected has surface defects from the fusion interaction features between the shallow feature information and the deep semantic features of the product appearance image. In response to a quality inspection result indicating a surface defect, the robot is driven to place the product to be inspected in a placement basket for unqualified products. In this way, complex product surface defects can be identified and detection accuracy improved, thereby improving the overall quality of the product.
Fig. 1 is a block diagram of a robot remote control system according to an embodiment of the present application. Fig. 2 is a schematic architecture diagram of a remote control system for a robot according to an embodiment of the present application. As shown in fig. 1 and 2, a robot remote control system 100 according to an embodiment of the present application includes: the product appearance image acquisition module 110 is used for acquiring product appearance images of the product to be inspected through a camera of the robot; the image transmission module 120 is configured to transmit the product appearance image to a background quality inspection server through the wireless transmission module; the gray processing module 130 is configured to perform gray processing on the product appearance image at the background quality inspection server to obtain a gray product appearance image; the multi-scale feature extraction module 140 is configured to perform multi-scale feature extraction on the grayscale product appearance image at the background quality inspection server to obtain a product appearance shallow feature map and a product appearance semantic feature map; the feature interaction compensation type fusion module 150 is configured to, at the background quality inspection server, pass the product appearance shallow feature map and the product appearance semantic feature map through an image deep-shallow feature interaction compensation type fusion network to obtain a product deep-shallow fusion interaction feature map as a product deep-shallow fusion interaction feature; and a quality inspection result generating module 160, configured to obtain a quality inspection result based on the product deep-shallow fusion interaction feature at the background quality inspection server, and generate a control instruction based on the quality inspection result.
In the embodiment of the application, the product appearance image acquisition module 110 is configured to acquire product appearance images of the product to be inspected through a camera of the robot. It should be understood that the product appearance image of the product to be inspected is important content for judging whether product quality is qualified. Specifically, the product appearance image can show whether the surface of the product has scratches, pits, cracks, or other damage, and whether its texture meets design requirements. On this basis, in order to ensure the accuracy of product detection and improve customer satisfaction and loyalty, in the technical scheme of the application the camera of the robot acquires the product appearance image of the product to be inspected: the camera can capture tiny defects that are difficult for human eyes to identify, and the acquired image is analyzed and processed to find product problems quickly and accurately, improving the accuracy of product defect detection.
In the embodiment of the present application, the image transmission module 120 is configured to transmit the product appearance image to a background quality inspection server through a wireless transmission module. The wireless transmission module eliminates the limitations of traditional wired connections and enables real-time data transmission, making the transfer more convenient and flexible. Specifically, product appearance images on the factory production line can be quickly transmitted to the background quality inspection server for automatic quality detection and analysis, improving production efficiency and the level of quality control.
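As a concrete illustration of the acquisition and transmission steps, the following Python sketch captures a frame from the robot's camera and posts it to the background quality inspection server. The server URL, endpoint path, camera index, and JPEG encoding are assumptions of this example, not details fixed by the disclosure.

```python
import cv2
import requests

# Hypothetical address of the background quality inspection server (assumption).
QC_SERVER_URL = "http://qc-server.local:8000/inspect"

def capture_and_send(camera_index: int = 0) -> dict:
    """Capture one product appearance image and transmit it for inspection."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("failed to read frame from robot camera")
    # Encode as JPEG for transmission over the wireless link.
    ok, buf = cv2.imencode(".jpg", frame)
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    resp = requests.post(
        QC_SERVER_URL,
        files={"image": ("product.jpg", buf.tobytes(), "image/jpeg")},
    )
    resp.raise_for_status()
    # Hypothetical response shape, e.g. {"defective": true}.
    return resp.json()
```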
In the embodiment of the present application, the gray processing module 130 is configured to perform gray processing on the product appearance image at the background quality inspection server to obtain a grayscale product appearance image. It should be appreciated that the product appearance image contains information about the texture, shape, and structural features of the product, which is critical for subsequent defect detection on the product surface. To concentrate the analysis on textures, graying increases the difference between the gray values of structures in the product appearance image, so that texture structure information is displayed more accurately. It is worth mentioning that graying is the process of converting a color image into a grayscale image: in a grayscale image, the gray value of each pixel represents its brightness, and no color information is contained. By performing graying on the product appearance image, edge information, texture, and structural features are highlighted, providing a basis and support for subsequent feature extraction and analysis of the product appearance image.
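A minimal sketch of the server-side graying step is shown below, assuming the common ITU-R BT.601 luma weights; the disclosure does not prescribe a particular conversion formula.

```python
import numpy as np

def to_grayscale(image_bgr: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 BGR product appearance image to grayscale
    using ITU-R BT.601 luma weights (an assumption of this example)."""
    b, g, r = image_bgr[..., 0], image_bgr[..., 1], image_bgr[..., 2]
    gray = 0.114 * b + 0.587 * g + 0.299 * r
    return gray.astype(np.uint8)
```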
In the embodiment of the present application, the multi-scale feature extraction module 140 is configured to perform multi-scale feature extraction on the grayscale product appearance image at the background quality inspection server to obtain a product appearance shallow feature map and a product appearance semantic feature map. Specifically, the multi-scale feature extraction module is configured to: at the background quality inspection server, pass the grayscale product appearance image through a pyramid-network-based image multi-scale feature extractor to obtain the product appearance shallow feature map and the product appearance semantic feature map. The grayscale product appearance image contains appearance feature information at different levels. In particular, shallow features typically contain low-level visual features such as edge and texture information of the product appearance, while deep features are typically associated with high-level abstract and semantic information of the product appearance image. To obtain richer and more comprehensive appearance feature information from the grayscale product appearance image, and thereby improve the accuracy of subsequent surface defect detection, the technical scheme of the application passes the grayscale image through the pyramid-network-based extractor. It should be noted that the pyramid network can extract both shallow features and semantic features from the grayscale product appearance image: the shallow features help capture low-level features such as textures and edges, the semantic features attend to high-level semantic information such as object category and shape, and fusing the deep and shallow features better describes both the detail information and the semantics of the product appearance image, yielding a richer and more comprehensive appearance characterization of the product.
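The disclosure names a pyramid-network-based image multi-scale feature extractor without fixing its architecture. The following PyTorch sketch shows one plausible arrangement in which an early stage yields the high-resolution shallow feature map and deeper stages yield the low-resolution semantic feature map; the stage count, channel widths, and strides are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

class PyramidFeatureExtractor(nn.Module):
    """Toy pyramid extractor: the first stage produces the product appearance
    shallow feature map, the deeper stages the semantic feature map
    (architecture is an assumption, not the patented design)."""
    def __init__(self, in_ch: int = 1, base: int = 32):
        super().__init__()
        self.stage1 = nn.Sequential(  # stride 2: shallow, high resolution
            nn.Conv2d(in_ch, base, 3, stride=2, padding=1),
            nn.BatchNorm2d(base), nn.ReLU(inplace=True))
        self.stage2 = nn.Sequential(  # stride 4
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1),
            nn.BatchNorm2d(base * 2), nn.ReLU(inplace=True))
        self.stage3 = nn.Sequential(  # stride 8: deep, semantic
            nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1),
            nn.BatchNorm2d(base * 4), nn.ReLU(inplace=True))

    def forward(self, gray: torch.Tensor):
        f1 = self.stage1(gray)             # product appearance shallow feature map
        f3 = self.stage3(self.stage2(f1))  # product appearance semantic feature map
        return f1, f3
```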
In the embodiment of the present application, the feature interaction compensation type fusion module 150 is configured to, at the background quality inspection server, pass the product appearance shallow feature map and the product appearance semantic feature map through an image deep-shallow feature interaction compensation type fusion network to obtain a product deep-shallow fusion interaction feature map as the product deep-shallow fusion interaction feature. Fig. 3 is a block diagram of the feature interaction compensation type fusion module in the robot remote control system according to an embodiment of the present application. Specifically, as shown in fig. 3, the feature interaction compensation type fusion module 150 includes: an upsampling unit 151, configured to upsample the product appearance semantic feature map to obtain an upsampled product appearance semantic feature map; a global semantic extraction unit 152, configured to extract a product appearance global semantic feature vector from the upsampled product appearance semantic feature map; a local semantic extraction unit 153, configured to extract a product appearance local semantic feature vector from the upsampled product appearance semantic feature map; a multi-scale feature extraction unit 154, configured to fuse the product appearance global semantic feature vector and the product appearance local semantic feature vector to obtain a multi-scale product appearance semantic weight feature vector; a semantic feature fusion unit 155, configured to fuse the multi-scale product appearance semantic weight feature vector and the product appearance shallow feature map to obtain a product appearance semantic fusion shallow feature map; and a fusion interaction feature unit 156, configured to fuse the product appearance semantic fusion shallow feature map and the product appearance shallow feature map to obtain the product deep-shallow fusion interaction feature map. It should be appreciated that the product appearance shallow feature map is typically of higher resolution and contains shallow detail and texture feature information of the product appearance, but lacks semantic expressive capability, whereas the product appearance semantic feature map is typically of lower resolution and contains deep semantic understanding of the product appearance, but lacks spatial location information. To fully exploit the advantages of both and characterize the product appearance features more richly, the two maps are passed through the image deep-shallow feature interaction compensation type fusion network at the background quality inspection server to obtain the product deep-shallow fusion interaction feature map. In detail, the network first upsamples the product appearance semantic feature map to increase its resolution, making it clearer and finer at the pixel level.
Then, global semantic features and local semantic features are extracted from the upsampled product appearance semantic feature map and fused, so that both the overall information and the local semantic information of the product appearance are taken into account, yielding the multi-scale product appearance semantic weight feature vector. This weight vector is then fused with the product appearance shallow feature map to enhance the transmission and expression of semantic information, and finally the fused features are combined with the product appearance shallow feature map to obtain the product deep-shallow fusion interaction feature map. In this way, product appearance information at different levels is effectively fused and the shortcomings of each feature type are mutually compensated, giving the final feature representation discrimination and expressive power, thereby improving the model's understanding and characterization of the product appearance image as well as its performance and effect.
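Putting the units enumerated above together, a minimal PyTorch sketch of the image deep-shallow feature interaction compensation type fusion network might look as follows. The channel counts, the sigmoid gating used to form the weight vector, and the additive compensation in the final step are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepShallowFusion(nn.Module):
    """Sketch of the deep-shallow feature interaction compensation fusion;
    channel widths and gating are assumptions, not the patented design."""
    def __init__(self, deep_ch: int = 128, shallow_ch: int = 32):
        super().__init__()
        self.global_pw = nn.Conv2d(deep_ch, shallow_ch, 1)  # point convolution encoding
        self.local_pw = nn.Conv2d(deep_ch, deep_ch, 1)      # channel modulation
        self.local_2d = nn.Conv2d(deep_ch, shallow_ch, 3, padding=1)

    def forward(self, shallow: torch.Tensor, semantic: torch.Tensor) -> torch.Tensor:
        # 1) Upsample the semantic map to the shallow map's resolution (unit 151).
        up = F.interpolate(semantic, size=shallow.shape[-2:],
                           mode="bilinear", align_corners=False)
        # 2) Global branch: global average pooling, then point convolution (unit 152).
        g = self.global_pw(F.adaptive_avg_pool2d(up, 1))      # (B, shallow_ch, 1, 1)
        # 3) Local branch: point conv, 2D conv, then pool to a vector (unit 153).
        l = F.adaptive_avg_pool2d(self.local_2d(self.local_pw(up)), 1)
        # 4) Fuse the two vectors into multi-scale semantic channel weights (unit 154).
        w = torch.sigmoid(g + l)
        # 5) Weight the shallow map channel-wise with the semantic weights (unit 155).
        fused = shallow * w       # product appearance semantic fusion shallow map
        # 6) Compensate by fusing back the original shallow map (unit 156).
        return fused + shallow    # product deep-shallow fusion interaction map
```

With the extractor sketched earlier, `DeepShallowFusion(deep_ch=128, shallow_ch=32)` matches the channel widths of `f3` and `f1` respectively.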
Fig. 4 is a block diagram of the global semantic extraction unit in the robot remote control system according to an embodiment of the present application. More specifically, as shown in fig. 4, the global semantic extraction unit 152 includes: a global average pooling subunit 1521, configured to perform global average pooling on each feature matrix along the channel dimension in the upsampled product appearance semantic feature map to obtain an upsampled product appearance semantic channel feature vector; and a point convolution encoding subunit 1522, configured to perform point convolution encoding on the upsampled product appearance semantic channel feature vector to obtain the product appearance global semantic feature vector.
Specifically, in an embodiment of the present application, the local semantic extraction unit includes: the channel modulation subunit is used for carrying out point convolution coding on the appearance semantic feature map of the up-sampling product to obtain the appearance semantic feature map of the channel modulation up-sampling product; and the two-dimensional convolution coding subunit is used for carrying out two-dimensional convolution coding on the appearance semantic feature map of the channel modulation up-sampling product so as to obtain the appearance local semantic feature vector of the product.
Specifically, in the embodiment of the present application, the semantic feature fusion unit is configured to: and taking the characteristic values of all positions in the multi-scale product appearance semantic weight characteristic vector as weight values, and respectively weighting all corresponding characteristic matrixes of the product appearance shallow characteristic map along the channel dimension to obtain the product appearance semantic fusion shallow characteristic map.
In the embodiment of the present application, the quality inspection result generating module 160 is configured to obtain, at the background quality inspection server, a quality inspection result based on the product deep-shallow fusion interaction feature, and to generate a control instruction based on the quality inspection result. Specifically, the quality inspection result generating module includes: a product defect determining unit, configured to pass the product deep-shallow fusion interaction feature map through a classifier-based quality inspection result generator at the background quality inspection server to obtain a quality inspection result indicating whether the product to be inspected has surface defects; and a robot control unit, configured to generate, in response to the quality inspection result indicating that the product to be inspected has surface defects, a first control instruction to the robot, the first control instruction driving the robot to place the product to be inspected in a placement basket for unqualified products. That is, the product deep-shallow fusion interaction features obtained through deep-shallow feature interaction of the product appearance shallow feature map and the product appearance semantic feature map are classified, so that whether the product to be inspected has surface defects is judged intelligently, and a defective product is routed to the reject basket. In this way, complex product surface defects can be identified and detection accuracy improved, thereby improving the overall quality of the product.
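A sketch of what the classifier-based quality inspection result generator and the subsequent control decision could look like is given below; the two-class head, the pooling layout, and the instruction names are hypothetical.

```python
import torch
import torch.nn as nn

class QualityResultGenerator(nn.Module):
    """Classifier head sketch: pools the fusion interaction map and
    predicts defective vs. non-defective (layout is an assumption)."""
    def __init__(self, in_ch: int = 32, num_classes: int = 2):
        super().__init__()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, num_classes))

    def forward(self, fusion_map: torch.Tensor) -> torch.Tensor:
        return self.head(fusion_map)  # logits over {qualified, defective}

def make_control_instruction(logits: torch.Tensor) -> str:
    """Map the quality inspection result to a robot control instruction.
    Instruction names are illustrative, not from the disclosure."""
    defective = logits.argmax(dim=-1).item() == 1
    return "PLACE_IN_REJECT_BASKET" if defective else "PLACE_IN_PASS_BASKET"
```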
It should be noted that, as those skilled in the art know, a deep neural network model needs to be trained before it is applied for inference, so that it can implement its specific function.
Specifically, in the technical scheme of the application, the robot remote control system further comprises a training module for training the pyramid network-based image multi-scale feature extractor, the image deep-shallow feature interaction compensation type fusion network and the classifier-based quality inspection result generator.
Fig. 5 is a block diagram of a training module in a robot remote control system according to an embodiment of the present application. Specifically, in the embodiment of the present application, as shown in fig. 5, the training module 200 includes: a training data obtaining unit 210, configured to obtain training data, where the training data includes a training product appearance image of a product to be inspected, and a true value of whether the product to be inspected has a surface defect; the training image transmission unit 220 is configured to transmit the training product appearance image to a background quality inspection server through a wireless transmission module; a training product appearance image graying unit 230, configured to perform gray processing on the training product appearance image at the background quality inspection server to obtain a grayed training product appearance image; the training product appearance shallow semantic extraction unit 240 is configured to pass the grayscale training product appearance image through the pyramid network-based image multi-scale feature extractor at the background quality inspection server to obtain a training product appearance shallow feature map and a training product appearance semantic feature map; the training product deep-shallow fusion interaction unit 250 is configured to pass the training product appearance shallow feature map and the training product appearance semantic feature map through the image deep-shallow feature interaction compensation fusion network at the background quality inspection server to obtain a training product deep-shallow fusion interaction feature map; the feature optimization unit 260 is configured to perform feature optimization on the training product deep-shallow fusion interaction feature map to obtain an optimized training product deep-shallow fusion interaction feature map; a decoding loss unit 270, configured to, at the background quality inspection server, pass the training product deep-shallow fusion interaction feature map through the classifier-based quality inspection result generator to obtain a classification loss function value; and a training unit 280, configured to train the pyramid network-based image multi-scale feature extractor, the image deep-shallow feature interaction compensation fusion network, and the classifier-based quality inspection result generator based on the classification loss function value and through back propagation of gradient descent, where the training product deep-shallow fusion interaction feature map is optimized during each iteration of the model.
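One way the training flow described above could be realized is sketched below as a single iteration; the cross-entropy loss and optimizer usage follow common practice rather than anything fixed by the disclosure, and `optimize_features` is the feature optimization sketched after the next paragraph.

```python
import torch.nn as nn

def train_step(extractor, fusion, generator, optimizer, gray_batch, labels):
    """One training iteration of the pipeline: extract multi-scale features,
    fuse, optimize, classify, and back-propagate (a sketch, not the patent's
    exact procedure)."""
    shallow, semantic = extractor(gray_batch)          # units 240
    interaction = fusion(shallow, semantic)            # unit 250
    interaction = optimize_features(interaction)       # unit 260 (see below)
    logits = generator(interaction)                    # unit 270
    loss = nn.functional.cross_entropy(logits, labels) # classification loss value
    optimizer.zero_grad()
    loss.backward()   # back propagation of gradient descent (unit 280)
    optimizer.step()
    return loss.item()
```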
Specifically, in the technical scheme of the application, in each iteration of the model, the training product deep-shallow fusion interaction feature map is optimized to obtain the optimized training product deep-shallow fusion interaction feature map as follows: dividing each feature value of the training product deep-shallow fusion interaction feature map by the maximum feature value to obtain a training product deep-shallow fusion interaction representation map; dividing the mean of the feature values of the training product deep-shallow fusion interaction feature map by their standard deviation to obtain a probability-statistics feature value corresponding to the map; subtracting the probability-statistics feature value from each feature value of the training product deep-shallow fusion interaction representation map and taking the base-2 logarithm of the absolute value at each position to obtain a training product deep-shallow fusion interaction information map; adding the probability-statistics feature value to each feature value of the training product deep-shallow fusion interaction representation map and multiplying by a preset weight hyperparameter to obtain a training product deep-shallow fusion interaction pattern map; and point-wise multiplying the training product deep-shallow fusion interaction information map and the training product deep-shallow fusion interaction pattern map to obtain the optimized training product deep-shallow fusion interaction feature map.
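Read literally, the five optimization steps above translate into the following sketch. The value of the preset weight hyperparameter is not given in the disclosure, and the small `eps` terms are added here purely for numerical stability (avoiding division by zero and the logarithm of zero).

```python
import torch

def optimize_features(fmap: torch.Tensor, weight: float = 0.5,
                      eps: float = 1e-6) -> torch.Tensor:
    """Feature optimization as described above; `weight` stands in for the
    preset weight hyperparameter, whose value the disclosure does not give."""
    rep = fmap / (fmap.max() + eps)              # interaction representation map
    p = fmap.mean() / (fmap.std() + eps)         # probability-statistics value
    info = torch.log2((rep - p).abs() + eps)     # interaction information map
    pattern = (rep + p) * weight                 # interaction pattern map
    return info * pattern                        # point-wise combination
```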
In this step, the probability-statistics feature value expressed through the mean and standard deviation, together with the distribution of each feature value relative to the maximum feature value, serve as latent sub-manifold motifs under the complex manifold of the training product deep-shallow fusion interaction feature map derived from the multi-scale, multi-depth image semantic features of the grayscale product appearance image. Using the training product deep-shallow fusion interaction information map and the training product deep-shallow fusion interaction pattern map as a global structural inference over these latent motif feature-information and feature-distribution modes, the complex manifold structure of the feature map is reconstructed in the manner of a connection-based global structural motif dictionary. This stabilizes, during each iteration of the model, the feature-stream structure corresponding to the complex feature representation dimensions, so that the optimized training product deep-shallow fusion interaction feature map yields a more accurate classification result through the classifier-based quality inspection result generator. In this way, complex product surface defects can be identified and detection accuracy improved, thereby improving the overall quality of the product.
In summary, the robot remote control system 100 according to the embodiment of the present application has been illustrated. It collects a product appearance image of a product to be inspected through a camera of a robot and transmits the image to a background quality inspection server through a wireless transmission module, so that an artificial-intelligence-based image processing and analysis algorithm can analyze the image at the server, intelligently judging whether the product to be inspected has surface defects from the fusion interaction features between the shallow feature information and the deep semantic features of the product appearance image and, in response to a quality inspection result indicating a surface defect, driving the robot to place the product in a placement basket for unqualified products. In this way, complex product surface defects can be identified and detection accuracy improved, thereby improving the overall quality of the product.
As described above, the robot remote control system 100 according to the embodiment of the present application may be implemented in various wireless terminals, such as a server having a robot remote control algorithm, and the like. In one possible implementation, the robotic remote control system 100 according to embodiments of the application may be integrated into the wireless terminal as one software module and/or hardware module. For example, the robotic remote control system 100 may be a software module in the operating system of the wireless terminal, or may be an application developed for the wireless terminal; of course, the robotic remote control system 100 may also be one of a number of hardware modules of the wireless terminal.
Alternatively, in another example, the robotic remote control system 100 and the wireless terminal may be separate devices, and the robotic remote control system 100 may be connected to the wireless terminal through a wired and/or wireless network and transmit the interactive information in an agreed data format.
Fig. 6 is a flowchart of a robot remote control method according to an embodiment of the present application. As shown in fig. 6, a robot remote control method according to an embodiment of the present application includes: s110, acquiring a product appearance image of a product to be inspected through a camera of the robot; s120, transmitting the product appearance image to a background quality inspection server through a wireless transmission module; s130, carrying out gray processing on the product appearance image at the background quality inspection server to obtain a gray product appearance image; s140, in the background quality inspection server, multi-scale feature extraction is carried out on the gray product appearance image so as to obtain a product appearance shallow feature image and a product appearance semantic feature image; s150, at the background quality inspection server, the product appearance shallow feature map and the product appearance semantic feature map are subjected to an image deep-shallow feature interaction compensation type fusion network to obtain a product deep-shallow fusion interaction feature map as product deep-shallow fusion interaction features; and S160, obtaining a quality inspection result based on the product deep-shallow fusion interaction characteristic at the background quality inspection server, and generating a control instruction based on the quality inspection result.
Here, it will be understood by those skilled in the art that the specific operations of the respective steps in the above-described robot remote control method have been described in detail in the above description of the robot remote control system with reference to fig. 1 to 5, and thus, repetitive descriptions thereof will be omitted.
Implementations of the present disclosure have been described above. The foregoing description is exemplary rather than exhaustive and is not limited to the implementations disclosed; many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described implementations. The terminology used herein was chosen to best explain the principles of each implementation, the practical application, or improvements over technology found in the marketplace, or to enable others of ordinary skill in the art to understand the implementations disclosed herein.

Claims (10)

1. A robot remote control system, comprising:
The product appearance image acquisition module is used for acquiring product appearance images of the product to be inspected through a camera of the robot;
The image transmission module is used for transmitting the product appearance image to a background quality inspection server through the wireless transmission module;
the gray processing module is used for carrying out gray processing on the product appearance image at the background quality inspection server so as to obtain a gray product appearance image;
The multi-scale feature extraction module is used for carrying out multi-scale feature extraction on the gray product appearance image at the background quality inspection server so as to obtain a product appearance shallow feature image and a product appearance semantic feature image;
The feature interaction compensation type fusion module is used for enabling the product appearance shallow feature map and the product appearance semantic feature map to pass through an image deep-shallow feature interaction compensation type fusion network at the background quality inspection server to obtain a product deep-shallow fusion interaction feature map as a product deep-shallow fusion interaction feature;
And the quality inspection result generation module is used for obtaining a quality inspection result based on the product deep-shallow fusion interaction feature at the background quality inspection server and generating a control instruction based on the quality inspection result.
2. The robotic remote control system of claim 1, wherein the multi-scale feature extraction module is configured to: and at the background quality inspection server, the gray product appearance image passes through an image multi-scale feature extractor based on a pyramid network to obtain the product appearance shallow feature image and the product appearance semantic feature image.
3. The robotic remote control system of claim 2, wherein the feature interaction compensation type fusion module comprises:
The up-sampling unit is used for up-sampling the product appearance semantic feature map to obtain an up-sampled product appearance semantic feature map;
The global semantic extraction unit is used for extracting product appearance global semantic feature vectors from the up-sampling product appearance semantic feature images;
the local semantic extraction unit is used for extracting product appearance local semantic feature vectors from the up-sampling product appearance semantic feature images;
the multi-scale feature extraction unit is used for fusing the product appearance global semantic feature vector and the product appearance local semantic feature vector to obtain a multi-scale product appearance semantic weight feature vector;
The semantic feature fusion unit is used for fusing the multi-scale product appearance semantic weight feature vector and the product appearance shallow feature map to obtain a product appearance semantic fusion shallow feature map;
And the fusion interaction feature unit is used for fusing the product appearance semantic fusion shallow feature map and the product appearance shallow feature map to obtain the product deep-shallow fusion interaction feature map.
4. A robotic remote control system as claimed in claim 3, wherein the global semantic extraction unit comprises:
The global average pooling subunit is used for carrying out global average pooling processing on each feature matrix along the channel dimension in the up-sampled product appearance semantic feature map to obtain an up-sampled product appearance semantic channel feature vector;
And the point convolution coding subunit is used for carrying out point convolution coding on the feature vector of the appearance semantic channel of the up-sampling product so as to obtain the global semantic feature vector of the appearance of the product.
5. The robotic remote control system according to claim 4, wherein the local semantic extraction unit comprises:
the channel modulation subunit is used for carrying out point convolution coding on the appearance semantic feature map of the up-sampling product to obtain the appearance semantic feature map of the channel modulation up-sampling product;
And the two-dimensional convolution coding subunit is used for carrying out two-dimensional convolution coding on the appearance semantic feature map of the channel modulation up-sampling product so as to obtain the appearance local semantic feature vector of the product.
6. The robotic remote control system according to claim 5, wherein the semantic feature fusion unit is configured to: and taking the characteristic values of all positions in the multi-scale product appearance semantic weight characteristic vector as weight values, and respectively weighting all corresponding characteristic matrixes of the product appearance shallow characteristic map along the channel dimension to obtain the product appearance semantic fusion shallow characteristic map.
7. The robotic remote control system according to claim 6, wherein the quality inspection result generation module includes:
the product defect determining unit is used for obtaining a quality inspection result by the product deep-shallow fusion interaction feature map through the quality inspection result generator based on the classifier in the background quality inspection server, wherein the quality inspection result is used for indicating whether the product to be inspected has surface defects or not;
And the robot control unit is used for generating, in response to the quality inspection result being that the product to be inspected has surface defects, a first control instruction to the robot, wherein the first control instruction is used for driving the robot to place the product to be inspected in a placement basket for unqualified products.
8. The robotic remote control system of claim 7, further comprising a training module for training the pyramid network-based image multi-scale feature extractor, the image deep-shallow feature interaction compensation fusion network, and the classifier-based quality inspection result generator.
9. The robotic remote control system of claim 8, wherein the training module comprises:
the training data acquisition unit is used for acquiring training data, wherein the training data comprises a training product appearance image of a product to be inspected and a true value of whether the product to be inspected has surface defects or not;
the training image transmission unit is used for transmitting the training product appearance image to a background quality inspection server through a wireless transmission module;
the training product appearance image graying unit is used for carrying out gray processing on the training product appearance image at the background quality inspection server to obtain a graying training product appearance image;
The training product appearance shallow semantic extraction unit is used for enabling the gray training product appearance image to pass through the pyramid network-based image multi-scale feature extractor at the background quality inspection server so as to obtain a training product appearance shallow feature image and a training product appearance semantic feature image;
the training product deep-shallow fusion interaction unit is used for enabling the training product appearance shallow feature map and the training product appearance semantic feature map to pass through the image deep-shallow feature interaction compensation type fusion network at the background quality inspection server so as to obtain a training product deep-shallow fusion interaction feature map;
The feature optimization unit is used for performing feature optimization on the training product deep-shallow fusion interaction feature map to obtain an optimized training product deep-shallow fusion interaction feature map;
The decoding loss unit is used for enabling the training product deep-shallow fusion interaction feature map to pass through the quality inspection result generator based on the classifier at the background quality inspection server so as to obtain a classification loss function value;
The training unit is used for training the pyramid network-based image multi-scale feature extractor, the image deep-shallow feature interaction compensation type fusion network and the classifier-based quality inspection result generator based on the classification loss function value through back propagation of gradient descent, wherein the training product deep-shallow fusion interaction feature map is optimized in each iteration process of a model.
10. A robot remote control method, comprising:
acquiring a product appearance image of a product to be inspected through a camera of the robot;
Transmitting the product appearance image to a background quality inspection server through a wireless transmission module;
Carrying out gray processing on the product appearance image at the background quality inspection server to obtain a gray product appearance image;
Performing multi-scale feature extraction on the gray product appearance image at the background quality inspection server to obtain a product appearance shallow feature image and a product appearance semantic feature image;
in the background quality inspection server, the product appearance shallow feature map and the product appearance semantic feature map are subjected to an image deep-shallow feature interaction compensation type fusion network to obtain a product deep-shallow fusion interaction feature map serving as product deep-shallow fusion interaction features;
And obtaining a quality inspection result based on the product deep-shallow fusion interaction feature at the background quality inspection server, and generating a control instruction based on the quality inspection result.
CN202410594995.2A 2024-05-14 Robot remote control system and control method Pending CN118321203A (en)

Publications (1)

Publication Number Publication Date
CN118321203A true CN118321203A (en) 2024-07-12


Legal Events

Date Code Title Description
PB01 Publication