CN116168243A - Intelligent production system and method for shaver - Google Patents


Info

Publication number
CN116168243A
CN116168243A
Authority
CN
China
Prior art keywords
welding
feature
classification
shaver
mask
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202310123281.9A
Other languages
Chinese (zh)
Inventor
方峰
鲍启明
吴春军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Amsafe Sunshine Industries Shanghai Co ltd
Original Assignee
Amsafe Sunshine Industries Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Amsafe Sunshine Industries Shanghai Co ltd filed Critical Amsafe Sunshine Industries Shanghai Co ltd
Priority to CN202310123281.9A priority Critical patent/CN116168243A/en
Publication of CN116168243A publication Critical patent/CN116168243A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the field of intelligent production, and in particular discloses an intelligent production system and method for a shaver. With it, the welding quality of the welded shaver can be detected accurately, ensuring the product yield and quality of the shaver.

Description

Intelligent production system and method for shaver
Technical Field
The present application relates to the field of intelligent production, and more particularly, to an intelligent production system of a shaver and a method thereof.
Background
With the wide use of shaver products in daily life and the continuous expansion of market demand, shaver product iteration keeps accelerating and quality requirements grow ever stricter. In the traditional shaver welding process, the welding effect cannot be well controlled, yet the welding effect is precisely an important factor affecting the quality and appearance of the shaver; controlling the welding link has therefore become an increasingly important task.
The traditional approach is manual or semi-automatic welding combined with manual inspection, with welding parameters repeatedly modified to adjust the welding effect. This is highly inconvenient, and welding problems in the production equipment cannot be located accurately, which reduces product yield, affects the production efficiency and quality of the shaver, and increases operating costs.
Accordingly, an optimized intelligent production scheme for shavers is desired.
Disclosure of Invention
The present application has been made to solve the above technical problems. The embodiments of the application provide an intelligent production system and method for a shaver. A deep-learning neural network model is used to mine the hidden features of the surface welding quality in the welding region of the shaver, together with the hidden feature information of the transition region between the welding region and the non-welding region. These features are then associatively encoded and classified, yielding a classification result that indicates whether the welding quality of the welded shaver meets a predetermined standard. In this way, the welding quality of the welded shaver can be detected accurately, ensuring the product yield and quality of the shaver.
According to one aspect of the present application, an intelligent production system for a shaver is provided, comprising: a camera module for acquiring a detection image of the welded shaver; a welding target area detection module for passing the detection image of the welded shaver through a welding-area target detection network to obtain a welding region of interest; a masking module for applying a mask to the detection image based on the position of the welding region of interest in the detection image of the welded shaver to obtain a mask image; a welding feature extraction module for passing the welding region of interest through a first convolutional neural network model serving as a filter to obtain a welding feature vector; a mask feature extraction module for passing the mask image through a second convolutional neural network model serving as a filter to obtain a mask feature vector; an association module for associatively encoding the welding feature vector and the mask feature vector to obtain a classification feature matrix; a feature value discrimination strengthening module for strengthening the feature value discrimination of the classification feature matrix to obtain an optimized classification feature matrix; and a control result generation module for passing the optimized classification feature matrix through a classifier to obtain a classification result indicating whether the welding quality of the welded shaver meets a predetermined standard.
In the above intelligent production system for a shaver, the welding-area target detection network is an anchor-window-based target detection network such as Fast R-CNN, Faster R-CNN, or RetinaNet.
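As a hedged illustration of this step: the detector itself (e.g. a Faster R-CNN head) is assumed and not shown; the sketch below only demonstrates how a returned anchor-style bounding box would be used to cut the welding region of interest out of the detection image. All names, image sizes, and box coordinates are hypothetical.

```python
import numpy as np

def crop_weld_roi(image, box):
    """Cut the welding region of interest out of the detection image,
    given a bounding box (x1, y1, x2, y2) from the detector."""
    x1, y1, x2, y2 = box
    return image[y1:y2, x1:x2]

# Hypothetical 480x640 detection image and a box such as an
# anchor-window detector (Fast R-CNN / Faster R-CNN / RetinaNet)
# might return for the weld seam.
image = np.zeros((480, 640, 3), dtype=np.uint8)
roi = crop_weld_roi(image, (100, 200, 300, 260))
print(roi.shape)  # (60, 200, 3)
```

In practice the box would come from the trained welding-area target detection network rather than being hard-coded.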
In the above intelligent production system for a shaver, the welding feature extraction module is further configured so that each layer of the first convolutional neural network model serving as a filter performs the following steps on its input data during forward propagation: convolving the input data to obtain a convolutional feature map; pooling the convolutional feature map along the local feature matrix to obtain a pooled feature map; and applying a nonlinear activation to the pooled feature map to obtain an activated feature map. The output of the last layer of the first convolutional neural network serving as the filter is the welding feature vector, and the input of its first layer is the welding region of interest.
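The per-layer steps described above (convolution, pooling, nonlinear activation) can be sketched in plain NumPy. This is a minimal single-channel toy implementation of one such layer, not the patent's actual network; kernel values and map sizes are arbitrary.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel map with kernel k."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, s=2):
    """Non-overlapping s-by-s max pooling."""
    h, w = x.shape
    return x[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).max(axis=(1, 3))

def relu(x):
    """Nonlinear activation."""
    return np.maximum(x, 0.0)

x = np.random.randn(8, 8)          # toy input map
k = np.random.randn(3, 3)          # toy convolution kernel
feat = relu(max_pool(conv2d(x, k)))  # conv -> pool -> nonlinear activation
print(feat.shape)  # (3, 3)
```

Stacking such layers and flattening the final map would yield the welding feature vector.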
In the above intelligent production system for a shaver, the mask feature extraction module is further configured so that each layer of the second convolutional neural network model serving as a filter performs the following steps on its input data during forward propagation: convolving the input data to obtain a convolutional feature map; pooling the convolutional feature map along the feature matrix to obtain a pooled feature map; and applying a nonlinear activation to the pooled feature map to obtain an activated feature map. The output of the last layer of the second convolutional neural network serving as the filter is the mask feature vector, and the input of its first layer is the mask image.
In the above intelligent production system for a shaver, the association module is further configured to associatively encode the welding feature vector and the mask feature vector to obtain the classification feature matrix according to the following formula:

M = V₁ᵀ ⊗ V₂

where V₁ denotes the welding feature vector, V₁ᵀ the transpose of the welding feature vector, V₂ the mask feature vector, M the classification feature matrix, and ⊗ denotes vector multiplication.
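A minimal numeric sketch of this association coding, assuming toy feature vectors: multiplying the welding feature vector (as a column) with the mask feature vector (as a row) yields the classification feature matrix, whose entry (i, j) couples the i-th welding feature with the j-th mask feature.

```python
import numpy as np

v_weld = np.array([0.2, 0.5, 0.1])  # toy welding feature vector V1
v_mask = np.array([0.4, 0.3])       # toy mask feature vector V2

# M = V1^T (x) V2 : outer (vector) product of the column vector V1
# with the row vector V2 gives the classification feature matrix.
M = np.outer(v_weld, v_mask)
print(M.shape)  # (3, 2)
print(M[0, 0])  # 0.08  (= 0.2 * 0.4)
```

Real feature vectors would of course be much longer, but the coupling structure is the same.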
In the above intelligent production system for a shaver, the feature value discrimination strengthening module is further configured to perform interactive reinforcement based on distinguishable physical excitation on the classification feature matrix to obtain the optimized classification feature matrix, according to formulas published as images in the original document [formula images not reproduced in this extraction]. In those formulas, M is the classification feature matrix, α and β are predetermined hyperparameters, ⊕ and ⊖ denote position-wise addition and subtraction of feature matrices, division denotes dividing each position of a feature matrix by the corresponding value, Conv(·) denotes a convolution operation through a single convolution layer, and M′ is the optimized classification feature matrix.
In the above intelligent production system for a shaver, the control result generation module comprises: an unfolding unit for unfolding the optimized classification feature matrix into a classification feature vector by row vectors or column vectors; a fully-connected encoding unit for fully-connected encoding of the classification feature vector using several fully-connected layers of the classifier to obtain an encoded classification feature vector; and a classification result generation unit for passing the encoded classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
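The unfold → fully-connected encoding → Softmax chain can be sketched as follows. The weights, layer sizes, and activation here are arbitrary untrained stand-ins; only the data flow matches the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Softmax classification function (numerically stabilized)."""
    e = np.exp(z - z.max())
    return e / e.sum()

M_opt = rng.standard_normal((3, 2))   # toy optimized classification feature matrix
v = M_opt.reshape(-1)                 # unfold by row vectors -> length-6 vector

# Two hypothetical fully-connected layers; 2 labels:
# "meets predetermined standard" / "does not meet predetermined standard".
W1, b1 = rng.standard_normal((4, 6)), np.zeros(4)
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)

probs = softmax(W2 @ np.tanh(W1 @ v + b1) + b2)
label = int(np.argmax(probs))
print(probs.sum())  # 1.0 (up to floating-point error)
```

In the trained system the weights would be learned, and `label` would indicate whether the welding quality meets the predetermined standard.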
According to another aspect of the present application, an intelligent production method for a shaver is provided, comprising: acquiring a detection image of the welded shaver; passing the detection image of the welded shaver through a welding-area target detection network to obtain a welding region of interest; applying a mask to the detection image based on the position of the welding region of interest in the detection image of the welded shaver to obtain a mask image; passing the welding region of interest through a first convolutional neural network model serving as a filter to obtain a welding feature vector; passing the mask image through a second convolutional neural network model serving as a filter to obtain a mask feature vector; associatively encoding the welding feature vector and the mask feature vector to obtain a classification feature matrix; strengthening the feature value discrimination of the classification feature matrix to obtain an optimized classification feature matrix; and passing the optimized classification feature matrix through a classifier to obtain a classification result indicating whether the welding quality of the welded shaver meets a predetermined standard.
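The method steps above can be composed into a toy end-to-end pipeline. Every stage here is a simplified stand-in (the detector, the CNN filters, the discrimination strengthening, and the classifier are replaced by trivial functions), intended only to show how the outputs of each step feed the next.

```python
import numpy as np

def detect_roi(image):
    """Stand-in for the welding-area target detection network."""
    return (2, 2, 6, 6)  # hypothetical fixed box (x1, y1, x2, y2)

def apply_mask(image, box):
    """Blank the weld ROI so remaining features come from elsewhere."""
    x1, y1, x2, y2 = box
    masked = image.copy()
    masked[y1:y2, x1:x2] = 0.0
    return masked

def extract(feature_map):
    """Stand-in for a CNN-as-filter feature extractor."""
    return feature_map.mean(axis=1)

def classify(M):
    """Stand-in for discrimination strengthening + FC layers + softmax."""
    return "meets standard" if M.sum() >= 0 else "fails standard"

img = np.abs(np.random.randn(8, 8))   # toy non-negative detection image
box = detect_roi(img)
x1, y1, x2, y2 = box
v1 = extract(img[y1:y2, x1:x2])       # welding feature vector
v2 = extract(apply_mask(img, box))    # mask feature vector
result = classify(np.outer(v1, v2))   # association coding -> classification
print(result)
```

With a non-negative toy image, all features are non-negative and the stand-in classifier deterministically reports "meets standard"; the real system would use trained networks at each stage.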
According to still another aspect of the present application, there is provided an electronic apparatus including: a processor; and a memory in which computer program instructions are stored which, when executed by the processor, cause the processor to perform the intelligent production method of a shaver as described above.
According to a further aspect of the present application, there is provided a computer readable medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the method of intelligent production of a shaver as described above.
Compared with the prior art, the intelligent production system and method for a shaver provided by the application use a deep-learning neural network model to mine the hidden features of the surface welding quality in the welding region of the shaver and the hidden feature information of the transition region between the welding region and the non-welding region, then associatively encode and classify these features to obtain a classification result indicating whether the welding quality of the welded shaver meets a predetermined standard. In this way, the welding quality of the welded shaver can be detected accurately, ensuring the product yield and quality of the shaver.
Drawings
The foregoing and other objects, features and advantages of the present application will become more apparent from the following more particular description of embodiments of the present application, as illustrated in the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the application and are incorporated in and constitute a part of this specification, illustrate the application and not constitute a limitation to the application. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 is an application scenario diagram of an intelligent production system of a shaver according to an embodiment of the application.
Fig. 2 is a block diagram of an intelligent production system for a shaver according to an embodiment of the application.
Fig. 3 is a system architecture diagram of an intelligent production system of a shaver according to an embodiment of the application.
Fig. 4 is a flowchart of a first convolutional neural network code in an intelligent production system of a shaver according to an embodiment of the application.
Fig. 5 is a flowchart of a second convolutional neural network code in an intelligent production system of a shaver according to an embodiment of the application.
Fig. 6 is a block diagram of a control result generation module in an intelligent production system of a shaver according to an embodiment of the application.
Fig. 7 is a flowchart of an intelligent production method of a shaver according to an embodiment of the application.
Fig. 8 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application and not all of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Summary of the application: As described above, in the traditional shaver welding process the welding effect cannot be well controlled, yet the welding effect is precisely an important factor affecting the quality and appearance of the shaver, so controlling the welding link has become an increasingly important task. The traditional approach is manual or semi-automatic welding combined with manual inspection, with welding parameters repeatedly modified to adjust the welding effect. This is highly inconvenient, and welding problems in the production equipment cannot be located accurately, which reduces product yield, affects the production efficiency and quality of the shaver, and increases operating costs. Accordingly, an optimized intelligent production scheme for shavers is desired.
Accordingly, since the welding quality characteristics of the welded shaver are reflected in its surface image, welding quality detection can be realized by analyzing the detection image of the welded shaver, thereby ensuring the product yield and quality of the shaver, improving production efficiency, and reducing operating costs. However, the detection image contains a large amount of information, and the welding quality characteristics are small-scale features in the image that are difficult to capture. Moreover, during actual detection the implicit features of the transition region between the welding region and the non-welding region are blurred by various external welding factors and are hard to extract, even though this region is where problems are most likely to occur during the process and therefore carries a relatively high weight in welding quality detection. As a result, the accuracy of welding quality detection for the welded shaver is reduced.
In recent years, deep learning and neural networks have been widely used in the fields of computer vision, natural language processing, text signal processing, and the like. In addition, deep learning and neural networks have also shown levels approaching and even exceeding humans in the fields of image classification, object detection, semantic segmentation, text translation, and the like.
The development of deep learning and neural networks provides new solutions and schemes for welding quality detection of welded shavers.
Specifically, in the technical scheme of the application, a detection image of the welded shaver is first acquired. It should be understood that when detecting the welding quality of the welded shaver, attention should focus on the hidden surface welding quality features of the welding region; clearly, if the remaining useless interfering feature information can be filtered out during feature mining, the accuracy of welding quality detection can be improved. On this basis, the detection image of the welded shaver is passed through a welding-area target detection network to obtain a welding region of interest. Specifically, the target anchoring layer of the welding-area target detection network slides an anchor frame B over the detection image to frame the welding region of interest of the shaver. In particular, the welding-area target detection network here is an anchor-window-based target detection network such as Fast R-CNN, Faster R-CNN, or RetinaNet.
Further, feature mining of the welding region of interest is performed using a first convolutional neural network model serving as a filter, which has excellent performance in extracting implicit image features, so as to extract the high-dimensional implicit welding quality feature information of the welded shaver in the welding region of interest and thereby obtain a welding feature vector.
Then, it is further considered that when actually detecting the welding quality of the welded shaver, a great deal of feature information exists in the transition region between the welding region and the non-welding region. The implicit features of this transition region reflect the welding quality and appearance of the shaver, matter greatly in actual use, and are where problems most often arise during the welding process. Therefore, in the technical scheme of the application, the feature information of this region is also mined so that welding quality detection can be performed comprehensively. Specifically, based on the position of the welding region of interest in the detection image of the welded shaver, a mask is applied to the detection image to obtain a mask image, thereby strengthening the welding quality feature information in the transition region between the welding region and the non-welding region.
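A sketch of one plausible masking scheme: the patent does not fix the exact mask shape, so this example assumes the mask keeps a band around the ROI boundary (the weld/non-weld transition region) and zeroes everything else. The `pad` parameter and all values are hypothetical.

```python
import numpy as np

def mask_detection_image(image, box, pad=4):
    """Keep a band of width ~2*pad around the weld ROI boundary and
    zero everything else, preserving the weld/non-weld transition
    region (one plausible reading of the patent's masking step)."""
    x1, y1, x2, y2 = box
    mask = np.zeros(image.shape, dtype=bool)
    mask[max(y1 - pad, 0):y2 + pad, max(x1 - pad, 0):x2 + pad] = True
    mask[y1 + pad:y2 - pad, x1 + pad:x2 - pad] = False  # hollow out the interior
    out = image.copy()
    out[~mask] = 0
    return out

img = np.ones((32, 32))
m = mask_detection_image(img, (8, 8, 24, 24))
print(m[0, 0], m[16, 16], m[8, 8])  # 0.0 (outside), 0.0 (interior), 1.0 (band)
```

An alternative, equally consistent with the text, would be to keep everything outside the ROI; either way the mask is positioned from the detected welding region of interest.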
The mask image is then further feature-mined through a second convolutional neural network model serving as a filter to extract the implicit welding quality feature distribution of the welded shaver in the transition region, thereby obtaining a mask feature vector.
Further, in order to fuse the welding quality features of the shaver in the welding region of interest and in the transition region for comprehensive welding quality detection, the welding feature vector and the mask feature vector are associatively encoded to obtain a classification feature matrix. Specifically, in the embodiment of the application, the vector multiplication between the transpose of the welding feature vector and the mask feature vector is computed to obtain the classification feature matrix. The classification feature matrix is then classified in a classifier to obtain a classification result indicating whether the welding quality of the welded shaver meets a predetermined standard. That is, the labels of the classifier comprise "the welding quality of the welded shaver meets the predetermined standard" and "the welding quality of the welded shaver does not meet the predetermined standard", and the classifier determines, through a soft-max function, to which label the classification feature matrix belongs. In this way, the welding quality of the welded shaver can be detected accurately, ensuring the product yield and quality of the shaver.
In particular, when the welding feature vector and the mask feature vector are associatively encoded to obtain the classification feature matrix, position-wise products of the two vectors form the feature values of the corresponding positions of the classification feature matrix. Because the welding feature vector and the mask feature vector are obtained from the welding region of interest and from the mask image of the detection image respectively, both contain feature values expressing the corresponding image semantics. The association features formed from these feature values, i.e., certain feature values in the classification feature matrix, are therefore markedly more important than the feature values at other positions; if they can be effectively distinguished during classification, the training speed of the classifier and the accuracy of the classification result can be improved significantly.
Thus, the applicant performs interactive augmentation based on distinguishable physical excitation on the classification feature matrix M, according to formulas published as images in the original document [formula images not reproduced in this extraction]. In those formulas, M′ is the optimized classification feature matrix, α and β are predetermined hyperparameters, ⊕ and ⊖ denote position-wise addition and subtraction of feature matrices, division denotes dividing each position of a feature matrix by the corresponding value, and Conv(·) denotes a convolution operation through a single convolution layer.
Here, the interactive augmentation based on distinguishable physical excitation promotes interaction between the feature space and the solution space of the classification problem during back propagation by gradient descent. It extracts and imitates actionable features in a manner resembling physical excitation; a general-purpose, low-dimensionally guided physical excitation method is used to obtain a physical representation of actionable features with gradient distinguishability, thereby strengthening the active part of the classification feature matrix M during training and improving both the training speed of the classifier on the optimized classification feature matrix M′ and the accuracy of its classification result. In this way, the welding quality of the welded shaver can be detected accurately, ensuring the product yield and quality of the shaver, improving production efficiency, and reducing operating costs.
On this basis, the application proposes an intelligent production system for a shaver, comprising: a camera module for acquiring a detection image of the welded shaver; a welding target area detection module for passing the detection image of the welded shaver through a welding-area target detection network to obtain a welding region of interest; a masking module for applying a mask to the detection image based on the position of the welding region of interest in the detection image of the welded shaver to obtain a mask image; a welding feature extraction module for passing the welding region of interest through a first convolutional neural network model serving as a filter to obtain a welding feature vector; a mask feature extraction module for passing the mask image through a second convolutional neural network model serving as a filter to obtain a mask feature vector; an association module for associatively encoding the welding feature vector and the mask feature vector to obtain a classification feature matrix; a feature value discrimination strengthening module for strengthening the feature value discrimination of the classification feature matrix to obtain an optimized classification feature matrix; and a control result generation module for passing the optimized classification feature matrix through a classifier to obtain a classification result indicating whether the welding quality of the welded shaver meets a predetermined standard.
Fig. 1 is an application scenario diagram of an intelligent production system of a shaver according to an embodiment of the application. As shown in fig. 1, in this application scenario, a detection image of the welded shaver is obtained by a camera (e.g., C as illustrated in fig. 1). The image is then input to a server (e.g., S in fig. 1) deployed with an intelligent production algorithm for the shaver, where the server processes the input image with that algorithm to generate a classification result indicating whether the welding quality of the welded shaver meets a predetermined criterion.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Exemplary System: fig. 2 is a block diagram of an intelligent production system for a shaver according to an embodiment of the application. As shown in fig. 2, the intelligent production system 300 of a shaver according to an embodiment of the application includes: a camera module 310; a welding target area detection module 320; masking module 330; a welding feature extraction module 340; mask feature extraction module 350; an association module 360; a eigenvalue differentiation strengthening module 370; and a generation control result generation module 380.
The camera module 310 is configured to obtain a detection image of the welded shaver; the welding target area detection module 320 is configured to pass the detection image of the welded shaver through a welding target detection network to obtain a welding region of interest; the masking module 330 is configured to apply a mask to the detected image based on the position of the welding region of interest in the detected image of the welded shaver to obtain a mask image; the welding feature extraction module 340 is configured to pass the welding region of interest through a first convolutional neural network model that is a filter to obtain a welding feature vector; the mask feature extraction module 350 is configured to pass the mask image through a second convolutional neural network model that is a filter to obtain a mask feature vector; the association module 360 is configured to perform association encoding on the welding feature vector and the mask feature vector to obtain a classification feature matrix; the feature value discrimination strengthening module 370 is configured to strengthen the feature value discrimination of the classification feature matrix to obtain an optimized classification feature matrix; and the generation control result generation module 380 is configured to pass the optimized classification feature matrix through a classifier to obtain a classification result, where the classification result is used to indicate whether the welding quality of the welded shaver meets a predetermined standard.
Fig. 3 is a system architecture diagram of an intelligent production system of a shaver according to an embodiment of the application. As shown in fig. 3, in the network architecture, a detection image of the welded shaver is first acquired by the camera module 310; then, the welding target area detection module 320 passes the detection image of the welded shaver obtained by the camera module 310 through a welding area target detection network to obtain a welding region of interest; the masking module 330 applies a mask to the detection image based on the position of the welding region of interest in the detection image of the welded shaver obtained by the welding target area detection module 320 to obtain a mask image; then, the welding feature extraction module 340 passes the welding region of interest obtained by the welding target area detection module 320 through a first convolutional neural network model serving as a filter to obtain a welding feature vector; the mask feature extraction module 350 passes the mask image through a second convolutional neural network model serving as a filter to obtain a mask feature vector; then, the association module 360 performs association encoding on the welding feature vector obtained by the welding feature extraction module 340 and the mask feature vector obtained by the mask feature extraction module 350 to obtain a classification feature matrix; the feature value discrimination strengthening module 370 strengthens the feature value discrimination of the classification feature matrix calculated by the association module 360 to obtain an optimized classification feature matrix; further, the control result generation module 380 passes the optimized classification feature matrix through a classifier to obtain a classification result, where the classification result is used to indicate whether the welding quality of the welded shaver meets a predetermined standard.
Specifically, during operation of the intelligent production system 300 of the shaver, the camera module 310 is configured to obtain a detection image of the welded shaver. It should be understood that, in the actual process of detecting the welding quality of the shaver, the welding quality characteristic information of the welded shaver is represented in its surface image, so that welding quality detection can be realized by analyzing the detection image of the welded shaver, thereby ensuring the product yield and quality of the shaver, improving production efficiency and reducing operation cost. In one specific example of the present application, the detection image of the welded shaver may be acquired by a camera.
Specifically, during operation of the intelligent production system 300 of the shaver, the welding target area detection module 320 is configured to pass the detection image of the welded shaver through a welding area target detection network to obtain a welding region of interest. Considering that, when the welding quality of the welded shaver is detected, attention should be focused on the surface welding quality hidden features of the welding area of the shaver, if other useless interference feature information can be filtered out while feature mining of the welding quality is performed on the welding area of the welded shaver, the accuracy of the welding quality detection of the shaver can be obviously improved. Based on this, in the technical solution of the present application, the detection image of the welded shaver is further passed through a welding area target detection network to obtain a welding region of interest. Specifically, the target anchoring layer of the welding area target detection network processes the detection image by sliding an anchor frame B so as to frame the welding region of interest of the shaver, thereby obtaining the welding region of interest. In particular, here, the welding area target detection network is an anchor-window-based target detection network, and the anchor-window-based target detection network is Fast R-CNN, Faster R-CNN, or RetinaNet.
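To make the anchoring step concrete, the following minimal sketch slides a fixed-size anchor window over a single-channel image and keeps the highest-scoring position as the welding region of interest. The window size, stride, and the mean-intensity scoring function are illustrative assumptions only; in the described system the scoring is performed by a trained anchor-window-based detection network such as Faster R-CNN.

```python
import numpy as np

def detect_roi(image, win=4, stride=2, score_fn=None):
    """Slide an anchor window over the image and keep the highest-scoring
    position as the welding region of interest.  The real system scores
    windows with a trained detection network; the mean-intensity score_fn
    here is a stand-in that only illustrates the sliding-anchor mechanics."""
    if score_fn is None:
        score_fn = lambda patch: patch.mean()
    h, w = image.shape
    best, best_box = -np.inf, None
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            s = score_fn(image[y:y + win, x:x + win])
            if s > best:
                best, best_box = s, (y, x, y + win, x + win)
    return best_box  # (y0, x0, y1, x1)
```

With a synthetic image whose bright patch marks the weld, the returned box frames that patch.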
Specifically, during operation of the intelligent production system 300 of the shaver, the masking module 330 is configured to apply a mask to the detection image based on the position of the welding region of interest in the detection image of the welded shaver to obtain a mask image. It will be appreciated that, when the welding quality of the welded shaver is actually detected, there is also a great deal of characteristic information in the transition area between the welded area and the non-welded area of the shaver. The implicit characteristic information of the transition area reflects the welding quality and appearance of the shaver, which is of great significance in practical use of the shaver, and this area is also the most problem-prone in the implementation of the welding process. Therefore, in the technical solution of the present application, it is desirable to mine the characteristic information of this region so as to comprehensively perform the welding quality detection of the welded shaver. Specifically, based on the position of the welding region of interest in the detection image of the welded shaver, a mask is applied to the detection image to obtain a mask image, thereby strengthening the welding quality characteristic information of the welded shaver in the transition region between the welded area and the non-welded area.
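As a concrete illustration, the sketch below applies a mask to the detection image given the ROI box. The patent does not fix the exact mask construction; as an assumption, this version zeroes the ROI interior while keeping a band-pixel ring around its border, so that the transition region between the welded and non-welded areas dominates the masked image.

```python
import numpy as np

def apply_mask(image, box, band=1):
    """Mask the detection image based on the ROI position.  One plausible
    reading of the patent: zero the ROI interior but keep a `band`-pixel
    ring around it, emphasising the weld/non-weld transition region."""
    y0, x0, y1, x1 = box
    masked = image.copy()
    masked[y0 + band:y1 - band, x0 + band:x1 - band] = 0.0
    return masked
```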
Specifically, during operation of the intelligent production system 300 of the shaver, the welding feature extraction module 340 is configured to pass the welding region of interest through a first convolutional neural network model serving as a filter to obtain a welding feature vector. That is, in the technical solution of the present application, feature mining of the welding region of interest is performed using a first convolutional neural network model, which serves as a filter and has excellent performance in implicit feature extraction of images, so as to extract the high-dimensional implicit feature information of the welding quality of the welded shaver in the welding region of interest, thereby obtaining a welding feature vector. In one particular example, the first convolutional neural network includes a plurality of neural network layers cascaded with one another, where each neural network layer includes a convolutional layer, a pooling layer, and an activation layer. In the encoding process of the first convolutional neural network, each layer of the first convolutional neural network, in the forward transfer of the layer, performs convolution processing based on a convolution kernel on input data by using the convolutional layer, performs pooling processing on the convolution feature map output by the convolutional layer by using the pooling layer, and performs activation processing on the pooled feature map output by the pooling layer by using the activation layer, wherein the output of the last layer of the first convolutional neural network serving as the filter is the welding feature vector, and the input of the first layer of the first convolutional neural network serving as the filter is the welding region of interest.
Fig. 4 is a flowchart of the coding process of the first convolutional neural network in an intelligent production system of a shaver according to an embodiment of the application. As shown in fig. 4, the first convolutional neural network coding process includes: each layer of the first convolutional neural network model serving as the filter performs the following steps on input data in the forward transfer of the layer: S210, performing convolution processing on the input data to obtain a convolution feature map; S220, pooling the convolution feature map based on a local feature matrix to obtain a pooled feature map; and S230, performing nonlinear activation on the pooled feature map to obtain an activation feature map; wherein the output of the last layer of the first convolutional neural network serving as the filter is the welding feature vector, and the input of the first layer of the first convolutional neural network serving as the filter is the welding region of interest.
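The per-layer steps S210–S230 can be sketched in plain NumPy as follows. A single input channel, a valid-mode cross-correlation, non-overlapping max pooling, and a ReLU activation are simplifying assumptions relative to a full convolutional neural network layer.

```python
import numpy as np

def conv2d(x, k):
    """S210: valid cross-correlation of a single-channel map with kernel k."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

def max_pool(x, s=2):
    """S220: max pooling over non-overlapping s x s local feature matrices."""
    h, w = x.shape
    return x[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).max(axis=(1, 3))

def relu(x):
    """S230: nonlinear activation."""
    return np.maximum(x, 0.0)

def cnn_layer(x, k):
    """One layer of the filter: convolve, pool, then activate."""
    return relu(max_pool(conv2d(x, k)))
```

Stacking several such layers and flattening the final map would yield the welding feature vector described above.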
Specifically, during operation of the intelligent production system 300 of the shaver, the mask feature extraction module 350 is configured to pass the mask image through a second convolutional neural network model serving as a filter to obtain a mask feature vector. That is, feature mining is performed on the mask image through a second convolutional neural network model serving as a filter to extract the implicit feature distribution information of the welding quality of the welded shaver in the transition region, so as to obtain the mask feature vector. In a specific example of the present application, the network structure of the second convolutional neural network is consistent with that of the first convolutional neural network model. More specifically, passing the mask image through the second convolutional neural network model serving as a filter to obtain the mask feature vector includes: each layer of the second convolutional neural network, in the forward transfer of the layer, performs convolution processing based on a convolution kernel on input data by using the convolutional layer, performs pooling processing on the convolution feature map output by the convolutional layer by using the pooling layer, and performs activation processing on the pooled feature map output by the pooling layer by using the activation layer, wherein the output of the last layer of the second convolutional neural network serving as the filter is the mask feature vector, and the input of the first layer of the second convolutional neural network serving as the filter is the mask image.
Fig. 5 is a flowchart of the coding process of the second convolutional neural network in an intelligent production system of a shaver according to an embodiment of the application. As shown in fig. 5, the second convolutional neural network coding process includes: each layer of the second convolutional neural network model serving as the filter performs the following steps on input data in the forward transfer of the layer: S310, performing convolution processing on the input data to obtain a convolution feature map; S320, pooling the convolution feature map based on a local feature matrix to obtain a pooled feature map; and S330, performing nonlinear activation on the pooled feature map to obtain an activation feature map; wherein the output of the last layer of the second convolutional neural network serving as the filter is the mask feature vector, and the input of the first layer of the second convolutional neural network serving as the filter is the mask image.
Specifically, during operation of the intelligent production system 300 of the shaver, the association module 360 is configured to perform association encoding on the welding feature vector and the mask feature vector to obtain a classification feature matrix. In the technical solution of the present application, in order to detect the welding quality comprehensively by integrating the welding quality features of the shaver in the welding region of interest and in the transition region, the welding feature vector and the mask feature vector are further associatively encoded to obtain the classification feature matrix. Specifically, in the embodiment of the present application, the vector multiplication between the transpose vector of the welding feature vector and the mask feature vector is calculated to obtain the classification feature matrix. That is, the association encoding is performed according to the following formula:

M = V₁ᵀ ⊗ V₂

wherein V₁ represents the welding feature vector, V₁ᵀ represents the transpose vector of the welding feature vector, V₂ represents the mask feature vector, M represents the classification feature matrix, and ⊗ represents vector multiplication.
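Interpreting the vector multiplication of the transposed welding feature vector with the mask feature vector as an outer product — so that the (i, j) entry of the classification feature matrix is the product of the i-th welding feature and the j-th mask feature — the association encoding can be sketched as:

```python
import numpy as np

def associate(v1, v2):
    """Association encoding: outer product of the welding feature vector v1
    with the mask feature vector v2, giving the classification feature
    matrix M whose entry (i, j) equals v1[i] * v2[j]."""
    return np.outer(v1, v2)
```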
Specifically, during operation of the intelligent production system 300 of the shaver, the feature value discrimination strengthening module 370 is configured to strengthen the feature value discrimination of the classification feature matrix to obtain an optimized classification feature matrix. In the technical solution of the present application, when the welding feature vector and the mask feature vector are associatively encoded to obtain the classification feature matrix, the feature values at corresponding positions of the classification feature matrix are obtained by position-wise multiplication of the two vectors. Because the welding feature vector and the mask feature vector are obtained from the welding region-of-interest image and the mask image of the same detection image, feature values expressing corresponding image semantic features exist in both vectors, and the associated features formed by these feature values, i.e., certain feature values of the classification feature matrix, have more remarkable importance relative to the feature values at other positions. If these feature values can be effectively distinguished during classification, the training speed of the classifier and the accuracy of the classification result can be significantly improved. Thus, the applicant of the present application performs interactive reinforcement based on distinguishable physical excitation on the classification feature matrix, denoted M, expressed as:

M₁ = Conv(M)

M₂ = (M ⊕ α) / M₁

M′ = M₂ ⊖ β

wherein M is the classification feature matrix, α and β are predetermined hyperparameters, ⊕ and ⊖ represent position-wise addition and subtraction of feature matrices, division represents dividing each position of the feature matrix by the corresponding value, Conv(·) represents a convolution operation through a single convolution layer, and M′ is the optimized classification feature matrix. Here, the interaction enhancement based on distinguishable physical excitation is used to promote interaction between the feature space and the solution space of the classification problem during back propagation through gradient descent; it extracts and imitates actionable features in a manner analogous to physical excitation, whereby a general low-dimensional guided physical excitation method is used to obtain a physical representation of actionable features with gradient discriminability, thereby strengthening the active part of the classification feature matrix M during training, so as to improve the training speed of the classifier on the optimized classification feature matrix M′ and the accuracy of the classification result of the trained classification features. Therefore, the welding quality of the welded shaver can be accurately detected, thereby ensuring the product yield and quality of the shaver, improving production efficiency and reducing operation cost.
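As an executable illustration of the strengthening step, the sketch below applies one position-wise update built from the operators the patent names (position-wise addition and subtraction, position-wise division, and a single convolution layer). The 3×3 mean filter standing in for the trained convolution layer, the edge padding, and the exact composition of the update are assumptions for illustration only.

```python
import numpy as np

def strengthen(M, alpha=0.1, beta=0.1, kernel=None):
    """Interactive-reinforcement sketch: compute a single-layer convolution
    of M (here a 3x3 mean filter with edge padding as a stand-in), then a
    position-wise (M + alpha) / Conv(M) - beta update.  The weights and the
    precise composition of the steps are assumptions, not the patent's
    trained layer."""
    if kernel is None:
        kernel = np.full((3, 3), 1.0 / 9.0)
    pad = np.pad(M, 1, mode="edge")
    conv = np.empty_like(M, dtype=float)
    for i in range(M.shape[0]):
        for j in range(M.shape[1]):
            conv[i, j] = (pad[i:i + 3, j:j + 3] * kernel).sum()
    M2 = (M + alpha) / conv    # (M ⊕ α) / Conv(M), position-wise
    return M2 - beta           # ⊖ β, position-wise
```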
Specifically, during operation of the intelligent production system 300 of the shaver, the control result generation module 380 is configured to pass the optimized classification feature matrix through a classifier to obtain a classification result, where the classification result is used to indicate whether the welding quality of the welded shaver meets a predetermined standard. That is, the optimized classification feature matrix is classified by a classifier to obtain a classification result indicating whether the welding quality of the welded shaver meets a predetermined standard. Specifically, passing the optimized classification feature matrix through the classifier to obtain the classification result includes: processing the optimized classification feature matrix using the classifier according to the following formula to obtain the classification result:

O = softmax{(Wₙ, Bₙ) : ⋯ : (W₁, B₁) | Project(F)}

wherein Project(F) represents projecting the optimized classification feature matrix as a vector, W₁ to Wₙ are the weight matrices of the fully connected layers of each layer, and B₁ to Bₙ represent the bias vectors of the fully connected layers of each layer. Specifically, the classifier includes a plurality of fully connected layers and a Softmax layer cascaded with the last fully connected layer of the plurality of fully connected layers. In the classification process of the classifier, the optimized classification feature matrix is first projected as a vector; for example, in a specific example, the optimized classification feature matrix is expanded along a row vector or a column vector to form a classification feature vector. Then, the classification feature vector is subjected to multiple rounds of full-connection encoding by using the plurality of fully connected layers of the classifier to obtain an encoded classification feature vector. Further, the encoded classification feature vector is input to the Softmax layer of the classifier; that is, the encoded classification feature vector is classified using the Softmax classification function to obtain a classification label. In a specific example, the labels of the classifier include that the welding quality of the welded shaver meets a predetermined standard and that the welding quality of the welded shaver does not meet the predetermined standard, wherein the classifier determines, by means of the Softmax function, to which classification label the classification feature matrix belongs. Therefore, the welding quality of the welded shaver can be accurately detected, and the product yield and quality of the shaver are ensured.
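A minimal sketch of the projection, fully connected encoding, and Softmax steps follows. The ReLU between hidden fully connected layers and the toy weights in the usage are assumptions, since the patent specifies only the (W, B) layer pairs and the final Softmax.

```python
import numpy as np

def softmax(z):
    """Numerically stable Softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(M, layers):
    """Classifier sketch: project the optimized classification feature
    matrix into a vector (row-major flattening), pass it through (W, b)
    fully connected layers (ReLU between hidden layers is an assumption),
    and apply Softmax to get a probability per label (quality meets /
    does not meet the predetermined standard)."""
    v = M.reshape(-1)                      # Project(F): expand along row vectors
    for i, (W, b) in enumerate(layers):
        v = W @ v + b
        if i < len(layers) - 1:
            v = np.maximum(v, 0.0)         # hidden-layer activation (assumption)
    return softmax(v)
```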
Fig. 6 is a block diagram of the control result generation module in an intelligent production system of a shaver according to an embodiment of the application. As shown in fig. 6, the control result generation module 380 includes: an expanding unit 381, configured to expand the optimized classification feature matrix into a classification feature vector based on a row vector or a column vector; a full-connection encoding unit 382, configured to perform full-connection encoding on the classification feature vector by using the plurality of fully connected layers of the classifier to obtain an encoded classification feature vector; and a classification result generating unit 383, configured to pass the encoded classification feature vector through the Softmax classification function of the classifier to obtain the classification result.
In summary, the intelligent production system 300 of a shaver according to the embodiment of the application has been illustrated. It uses a deep-learning-based neural network model to mine the hidden characteristic information of the surface welding quality in the welding area of the shaver and the hidden characteristic information of the transition area between the welded area and the non-welded area of the shaver, and further performs association encoding on the two so as to classify them, obtaining a classification result indicating whether the welding quality of the welded shaver meets the predetermined standard. Therefore, the welding quality of the welded shaver can be accurately detected, and the product yield and quality of the shaver are ensured.
As described above, the intelligent production system of a shaver according to the embodiment of the present application can be implemented in various terminal devices. In one example, the intelligent production system 300 of a shaver according to an embodiment of the present application may be integrated into the terminal device as one software module and/or hardware module. For example, the intelligent production system 300 of the shaver may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the intelligent production system 300 of the shaver can also be one of a number of hardware modules of the terminal device.
Alternatively, in another example, the intelligent production system 300 of the shaver and the terminal device may be separate devices, and the intelligent production system 300 of the shaver may be connected to the terminal device through a wired and/or wireless network and transmit interaction information in an agreed data format.
Exemplary Method: fig. 7 is a flowchart of an intelligent production method of a shaver according to an embodiment of the application. As shown in fig. 7, the intelligent production method of the shaver according to the embodiment of the application includes the following steps: S110, acquiring a detection image of the welded shaver; S120, passing the detection image of the welded shaver through a welding area target detection network to obtain a welding region of interest; S130, applying a mask to the detection image based on the position of the welding region of interest in the detection image of the welded shaver to obtain a mask image; S140, passing the welding region of interest through a first convolutional neural network model serving as a filter to obtain a welding feature vector; S150, passing the mask image through a second convolutional neural network model serving as a filter to obtain a mask feature vector; S160, performing association encoding on the welding feature vector and the mask feature vector to obtain a classification feature matrix; S170, strengthening the feature value discrimination of the classification feature matrix to obtain an optimized classification feature matrix; and S180, passing the optimized classification feature matrix through a classifier to obtain a classification result, where the classification result is used to indicate whether the welding quality of the welded shaver meets a predetermined standard.
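Tying steps S110–S180 together, the following end-to-end sketch wires toy stand-ins for each stage: brightest-window detection, fixed random projections in place of the two CNN filters, a simplified position-wise strengthening, and a sum-then-threshold decision in place of the trained classifier. Every stand-in is an assumption; only the data flow mirrors the method.

```python
import numpy as np

rng = np.random.default_rng(0)

def detect(img, win=4):
    """S120 stand-in: brightest win x win window as the welding ROI."""
    best, box = -np.inf, None
    for y in range(img.shape[0] - win + 1):
        for x in range(img.shape[1] - win + 1):
            s = img[y:y + win, x:x + win].mean()
            if s > best:
                best, box = s, (y, x, y + win, x + win)
    return box

def features(patch, n=3):
    """S140/S150 stand-in: fixed random projection instead of a CNN filter."""
    proj = rng.standard_normal((n, patch.size))
    return proj @ patch.reshape(-1)

def pipeline(img, threshold=0.0):
    y0, x0, y1, x1 = detect(img)                    # S120: welding ROI
    masked = img.copy()
    masked[y0:y1, x0:x1] = 0.0                      # S130: one plausible mask
    v1 = features(img[y0:y1, x0:x1])                # S140: welding feature vector
    v2 = features(masked)                           # S150: mask feature vector
    M = np.outer(v1, v2)                            # S160: association encoding
    M_opt = (M + 0.1) / (np.abs(M) + 1.0) - 0.1     # S170: simplified strengthening
    return bool(M_opt.sum() > threshold)            # S180: pass / fail decision
```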
In one example, in the above-mentioned intelligent production method of a shaver, the step S120 includes: the welding area target detection network is an anchor-window-based target detection network, and the anchor-window-based target detection network is Fast R-CNN, Faster R-CNN, or RetinaNet.
In one example, in the above-mentioned intelligent production method of a shaver, the step S140 includes: each layer of the first convolutional neural network model used as the filter performs the following steps on input data in forward transfer of the layer: carrying out convolution processing on input data to obtain a convolution characteristic diagram; pooling the convolution feature images based on the local feature matrix to obtain pooled feature images; performing nonlinear activation on the pooled feature map to obtain an activated feature map; the output of the last layer of the first convolution neural network serving as the filter is the welding feature vector, and the input of the first layer of the first convolution neural network serving as the filter is the welding region of interest.
In one example, in the above-mentioned intelligent production method of a shaver, the step S150 includes: each layer of the second convolutional neural network model serving as the filter performs the following steps on input data in the forward transfer of the layer: performing convolution processing on the input data to obtain a convolution feature map; pooling the convolution feature map based on a local feature matrix to obtain a pooled feature map; and performing nonlinear activation on the pooled feature map to obtain an activation feature map; wherein the output of the last layer of the second convolutional neural network serving as the filter is the mask feature vector, and the input of the first layer of the second convolutional neural network serving as the filter is the mask image.
In one example, in the above-mentioned intelligent production method of a shaver, the step S160 includes: performing association coding on the welding feature vector and the mask feature vector to obtain a classification feature matrix according to the following formula; wherein, the formula is:
M = V₁ᵀ ⊗ V₂

wherein V₁ represents the welding feature vector, V₁ᵀ represents the transpose vector of the welding feature vector, V₂ represents the mask feature vector, M represents the classification feature matrix, and ⊗ represents vector multiplication.
In one example, in the above-mentioned intelligent production method of a shaver, the step S170 includes: performing interactive reinforcement based on distinguishable physical excitation on the classification characteristic matrix by using the following formula to obtain the optimized classification characteristic matrix; wherein, the formula is:
M₁ = Conv(M)

M₂ = (M ⊕ α) / M₁

M′ = M₂ ⊖ β

wherein M is the classification feature matrix, α and β are predetermined hyperparameters, ⊕ and ⊖ represent position-wise addition and subtraction of feature matrices, division represents dividing each position of the feature matrix by the corresponding value, Conv(·) represents a convolution operation through a single convolution layer, and M′ is the optimized classification feature matrix.
In one example, in the above-mentioned intelligent production method of a shaver, the step S180 includes: expanding the optimized classification feature matrix into classification feature vectors based on row vectors or column vectors; performing full-connection coding on the classification feature vectors by using a plurality of full-connection layers of the classifier to obtain coded classification feature vectors; and passing the coding classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
In summary, the intelligent production method of the shaver according to the embodiment of the application has been explained. The method uses a deep-learning-based neural network model to mine the hidden characteristic information of the surface welding quality in the welding area of the shaver and the hidden characteristic information of the transition area between the welded area and the non-welded area of the shaver, and further performs association encoding on the two so as to classify them, obtaining a classification result indicating whether the welding quality of the welded shaver meets a predetermined standard. Therefore, the welding quality of the welded shaver can be accurately detected, and the product yield and quality of the shaver are ensured.
Exemplary electronic device: next, an electronic device according to an embodiment of the present application is described with reference to fig. 8.
Fig. 8 illustrates a block diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 8, the electronic device 10 includes one or more processors 11 and a memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory (cache). The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 11 may execute the program instructions to implement the functions of the intelligent production system of a shaver of the various embodiments of the present application described above and/or other desired functions. Various contents such as the classification feature matrix may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
The input means 13 may comprise, for example, a keyboard, a mouse, etc.
The output device 14 may output various information including the classification result and the like to the outside. The output means 14 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Of course, for simplicity, only some of the components of the electronic device 10 that are relevant to the present application are shown in fig. 8; components such as buses and input/output interfaces are omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer readable storage Medium: in addition to the methods and apparatus described above, embodiments of the present application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform steps in the functions of the method of intelligent production of a shaver according to the various embodiments of the present application, described in the "exemplary systems" section of this specification.
The computer program product may write program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium, having stored thereon computer program instructions, which when executed by a processor, cause the processor to perform steps in the functions of the method for intelligent production of a shaver according to the various embodiments of the present application, described in the above-mentioned "exemplary systems" section of the specification.
The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present application have been described above in connection with specific embodiments; however, it should be noted that the advantages, benefits, effects, and the like mentioned in the present application are merely examples and not limitations, and these advantages, benefits, and effects should not be considered essential to the various embodiments of the present application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only and are not intended to be limiting, as the application is not limited to the details disclosed above.
The block diagrams of the devices, apparatuses, and systems referred to in this application are only illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by one of skill in the art, these devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including but not limited to" and are used interchangeably therewith. The terms "or" and "and" as used herein refer to, and are used interchangeably with, the term "and/or" unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to."
It is also noted that in the apparatus, devices, and methods of the present application, the components or steps may be decomposed and/or recombined. Such decompositions and/or recombinations should be considered equivalent solutions of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (10)

1. An intelligent production system of a shaver, comprising: the camera module is used for acquiring a detection image of the welded shaver; the welding target area detection module is used for enabling the detection image of the welded shaver to pass through a welding area target detection network so as to obtain a welding region of interest; a masking module for applying a mask to the detection image based on the position of the welding region of interest in the detection image of the welded shaver to obtain a mask image; the welding feature extraction module is used for enabling the welding region of interest to pass through a first convolutional neural network model serving as a filter to obtain a welding feature vector; a mask feature extraction module, configured to pass the mask image through a second convolutional neural network model serving as a filter to obtain a mask feature vector; the association module is used for carrying out association coding on the welding feature vector and the mask feature vector to obtain a classification feature matrix; the characteristic value distinguishing degree strengthening module is used for strengthening the characteristic value distinguishing degree of the classification feature matrix to obtain an optimized classification feature matrix; and the generation control result generation module is used for enabling the optimized classification feature matrix to pass through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the welding quality of the welded shaver meets a predetermined standard.
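The masking module in claim 1 derives a mask image from the detected image using the position of the welding region of interest. The sketch below is illustrative only and not part of the claims: the function name `apply_roi_mask` is invented, and the choice to suppress the ROI (rather than the background) is an assumption, since the claim does not fix which side of the box is masked.

```python
import numpy as np

def apply_roi_mask(image, box):
    """Return a copy of `image` with the ROI given by box=(y0, y1, x0, x1)
    zeroed out, leaving the background untouched (assumed convention)."""
    y0, y1, x0, x1 = box
    masked = image.copy()
    masked[y0:y1, x0:x1] = 0.0
    return masked

# Toy detection image and a 2x3 welding region of interest.
img = np.ones((6, 8))
masked = apply_roi_mask(img, (1, 3, 2, 5))  # 6 pixels zeroed, 42 left at 1.0
```

In the claimed system the mask image then feeds the second convolutional branch, so the two branches see complementary parts of the same detection image.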
2. The intelligent production system of a shaver according to claim 1, wherein the welding area target detection network is an anchor-window-based target detection network, which is Fast R-CNN, Faster R-CNN, or RetinaNet.
3. The intelligent production system of a shaver according to claim 2, wherein the welding feature extraction module is further configured to: each layer of the first convolutional neural network model used as the filter performs the following steps on input data in forward transfer of the layer: carrying out convolution processing on input data to obtain a convolution characteristic diagram; pooling the convolution feature images based on the local feature matrix to obtain pooled feature images; non-linear activation is carried out on the pooled feature map so as to obtain an activated feature map; the output of the last layer of the first convolution neural network serving as the filter is the welding feature vector, and the input of the first layer of the first convolution neural network serving as the filter is the welding region of interest.
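The per-layer processing recited in claim 3 (convolution of the input data, pooling of the convolution feature map, then nonlinear activation) can be sketched for a single channel as below. This is an illustrative toy in numpy, not the patented network; all function names and sizes are invented for the example.

```python
import numpy as np

def conv2d_valid(x, k):
    """Valid 2-D convolution of a single-channel image x with kernel k."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool2x2(x):
    """2x2 max pooling (truncates odd trailing rows/columns)."""
    H, W = x.shape
    return x[:H // 2 * 2, :W // 2 * 2].reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

def conv_block(x, k):
    # Claimed order: convolution -> pooling -> nonlinear activation (ReLU here).
    return np.maximum(max_pool2x2(conv2d_valid(x, k)), 0.0)

x = np.random.randn(9, 9)   # toy single-channel "welding region of interest"
k = np.random.randn(3, 3)
feat = conv_block(x, k)      # 9x9 -> 7x7 (conv) -> 3x3 (pool), all values >= 0
```

Stacking several such blocks and flattening the last activation map would yield the welding feature vector of claim 3.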
4. The intelligent production system of a razor of claim 3, wherein the mask feature extraction module is further configured to: each layer of the second convolutional neural network model used as the filter performs the following steps on input data in forward transfer of the layer: carrying out convolution processing on input data to obtain a convolution characteristic diagram; pooling the convolution feature images based on a feature matrix to obtain pooled feature images; non-linear activation is carried out on the pooled feature map so as to obtain an activated feature map; wherein the output of the last layer of the second convolutional neural network as a filter is the mask feature vector, and the input of the first layer of the second convolutional neural network as a filter is the mask image.
5. The intelligent production system of a shaver according to claim 4, wherein the association module is further configured to: perform association coding on the welding feature vector and the mask feature vector to obtain the classification feature matrix according to the following formula; wherein, the formula is:

M = V₁ᵀ ⊗ V₂

wherein V₁ represents the welding feature vector, V₁ᵀ represents a transposed vector of the welding feature vector, V₂ represents the mask feature vector, M represents the classification feature matrix, and ⊗ represents vector multiplication.
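The association coding of claim 5 multiplies the welding feature vector with the mask feature vector to produce a matrix; reading the listed symbols as an outer product (an assumption, since the published formula is rendered only as an image) gives every pairwise product of the two feature vectors:

```python
import numpy as np

v_weld = np.array([0.2, 0.5, 0.1])  # toy welding feature vector (length 3)
v_mask = np.array([0.4, 0.3])       # toy mask feature vector (length 2)

# Outer-product reading of the association coding: M[i, j] = v_weld[i] * v_mask[j],
# giving a 3x2 classification feature matrix.
M = np.outer(v_weld, v_mask)
```

Each entry of `M` couples one welding-branch feature with one mask-branch feature, which is what lets the downstream classifier reason about the two branches jointly.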
6. The intelligent production system of a shaver according to claim 5, wherein the eigenvalue differentiation strengthening module is further configured to: perform interactive reinforcement based on distinguishable physical excitation on the classification feature matrix to obtain the optimized classification feature matrix according to formulas rendered as images in the original publication, wherein M is the classification feature matrix, α and β are predetermined hyperparameters, ⊕ and ⊖ represent position-wise addition and subtraction of feature matrices, division represents dividing each position of a feature matrix by the corresponding value, Cov(·) represents a convolution operation through a single convolution layer, and M′ is the optimized classification feature matrix.
7. The intelligent production system of a shaver according to claim 6, wherein the generation control result generation module comprises: the unfolding unit is used for unfolding the optimized classification feature matrix into a classification feature vector based on row vectors or column vectors; the full-connection coding unit is used for carrying out full-connection coding on the classification feature vector by using a plurality of full-connection layers of the classifier so as to obtain an encoded classification feature vector; and the classification result generating unit is used for passing the encoded classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
8. An intelligent production method of a shaver, which is characterized by comprising the following steps: acquiring a detection image of the welded shaver; passing the detection image of the welded shaver through a welding area target detection network to obtain a welding interested area; applying a mask to the detected image based on the position of the welding region of interest in the detected image of the welded shaver to obtain a mask image; passing the welding region of interest through a first convolutional neural network model as a filter to obtain a welding feature vector; passing the mask image through a second convolutional neural network model as a filter to obtain a mask feature vector; performing association coding on the welding feature vector and the mask feature vector to obtain a classification feature matrix; performing eigenvalue discrimination enhancement on the classification characteristic matrix to obtain an optimized classification characteristic matrix; and passing the optimized classification feature matrix through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the welding quality of the welded shaver meets a preset standard.
9. The intelligent production method of the shaver according to claim 8, wherein the passing the mask image through a second convolutional neural network model as a filter to obtain a mask feature vector, comprises: each layer of the second convolutional neural network model used as the filter performs the following steps on input data in forward transfer of the layer: carrying out convolution processing on input data to obtain a convolution characteristic diagram; pooling the convolution feature images based on a feature matrix to obtain pooled feature images; non-linear activation is carried out on the pooled feature map so as to obtain an activated feature map; wherein the output of the last layer of the second convolutional neural network as a filter is the mask feature vector, and the input of the first layer of the second convolutional neural network as a filter is the mask image.
10. The intelligent production method of the shaver according to claim 9, wherein the step of passing the optimized classification feature matrix through a classifier to obtain a classification result indicating whether the welding quality of the welded shaver meets a predetermined standard comprises: expanding the optimized classification feature matrix into a classification feature vector based on row vectors or column vectors; performing full-connection coding on the classification feature vector by using a plurality of full-connection layers of the classifier to obtain an encoded classification feature vector; and passing the encoded classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
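Claims 7 and 10 recite the same classifier head: unfold the optimized classification feature matrix into a vector, encode it with several fully connected layers, and apply a Softmax function. A toy numpy sketch of that head follows; the weights are random and all dimensions are invented for illustration, so it shows only the shape of the computation, not the trained classifier.

```python
import numpy as np

def softmax(z):
    """Numerically stable Softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
M_opt = rng.standard_normal((4, 4))  # toy optimized classification feature matrix
v = M_opt.reshape(-1)                # row-vector-based unfolding into a length-16 vector

# Two toy fully connected layers, then Softmax over {meets standard, does not}.
W1, b1 = rng.standard_normal((8, 16)), np.zeros(8)
W2, b2 = rng.standard_normal((2, 8)), np.zeros(2)
h = np.tanh(W1 @ v + b1)             # encoded classification feature vector
probs = softmax(W2 @ h + b2)         # class probabilities summing to 1

meets_standard = probs[0] > probs[1]
```

The classification result of the claims corresponds to the arg-max of `probs`, i.e. whether the welding quality is judged to meet the predetermined standard.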
CN202310123281.9A 2023-02-16 2023-02-16 Intelligent production system and method for shaver Withdrawn CN116168243A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310123281.9A CN116168243A (en) 2023-02-16 2023-02-16 Intelligent production system and method for shaver

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310123281.9A CN116168243A (en) 2023-02-16 2023-02-16 Intelligent production system and method for shaver

Publications (1)

Publication Number Publication Date
CN116168243A true CN116168243A (en) 2023-05-26

Family

ID=86411005

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310123281.9A Withdrawn CN116168243A (en) 2023-02-16 2023-02-16 Intelligent production system and method for shaver

Country Status (1)

Country Link
CN (1) CN116168243A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309446A (en) * 2023-03-14 2023-06-23 浙江固驰电子有限公司 Method and system for manufacturing power module for industrial control field
CN116309446B (en) * 2023-03-14 2024-05-07 浙江固驰电子有限公司 Method and system for manufacturing power module for industrial control field
CN116935036A (en) * 2023-07-24 2023-10-24 杭州糖吉医疗科技有限公司 Visualized stent delivery system

Similar Documents

Publication Publication Date Title
CN115203380B (en) Text processing system and method based on multi-mode data fusion
CN116168243A (en) Intelligent production system and method for shaver
CN115783923B (en) Elevator fault mode identification system based on big data
CN115564766B (en) Preparation method and system of water turbine volute seat ring
CN115759658B (en) Enterprise energy consumption data management system suitable for smart city
CN115761642A (en) Image processing-based crushing operation monitoring method and system
CN116015837A (en) Intrusion detection method and system for computer network information security
Elakkiya et al. An optimized generative adversarial network based continuous sign language classification
CN116025319A (en) Multi-medium thermal fluid operation monitoring system and method thereof
CN116247824B (en) Control method and system for power equipment
CN115909260A (en) Method and system for early warning of workplace intrusion based on machine vision
CN115827257B (en) CPU capacity prediction method and system for processor system
CN116279504A (en) AR-based vehicle speed assist system and method thereof
CN116311005A (en) Apparatus, method and storage medium for moving image processing
CN116665086A (en) Teaching method and system based on intelligent analysis of learning behaviors
CN116285481A (en) Method and system for producing and processing paint
Yu et al. Abnormal event detection using adversarial predictive coding for motion and appearance
CN117036271A (en) Production line quality monitoring method and system thereof
CN116091414A (en) Cardiovascular image recognition method and system based on deep learning
CN116759053A (en) Medical system prevention and control method and system based on Internet of things system
Zhang et al. Pedestrian detection based on hierarchical co-occurrence model for occlusion handling
CN116797814A (en) Intelligent building site safety management system
CN115984745A (en) Moisture control method for black garlic fermentation
CN116797248A (en) Data traceability management method and system based on block chain
CN116258947B (en) Industrial automatic processing method and system suitable for home customization industry

Legal Events

Date Code Title Description
PB01 Publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20230526