CN117155041B - Outer rotor motor and intelligent production method thereof - Google Patents

Outer rotor motor and intelligent production method thereof

Info

Publication number
CN117155041B
CN117155041B (application CN202310421226.8A)
Authority
CN
China
Prior art keywords
decoding
training
feature map
neural network
shallow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310421226.8A
Other languages
Chinese (zh)
Other versions
CN117155041A (en)
Inventor
邵明元
郭豪峰
王韬
唐章俊
松尾繁
金波
李英杰
边树军
陈昱
费利明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huzhou Yueqiu Motor Co ltd
Original Assignee
Huzhou Yueqiu Motor Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huzhou Yueqiu Motor Co ltd filed Critical Huzhou Yueqiu Motor Co ltd
Priority to CN202310421226.8A
Publication of CN117155041A
Application granted
Publication of CN117155041B
Legal status: Active

Classifications

    • HELECTRICITY
    • H02GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02KDYNAMO-ELECTRIC MACHINES
    • H02K15/00Methods or apparatus specially adapted for manufacturing, assembling, maintaining or repairing of dynamo-electric machines
    • H02K15/02Methods or apparatus specially adapted for manufacturing, assembling, maintaining or repairing of dynamo-electric machines of stator or rotor bodies
    • H02K15/03Methods or apparatus specially adapted for manufacturing, assembling, maintaining or repairing of dynamo-electric machines of stator or rotor bodies having permanent magnets
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • HELECTRICITY
    • H02GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02KDYNAMO-ELECTRIC MACHINES
    • H02K1/00Details of the magnetic circuit
    • H02K1/06Details of the magnetic circuit characterised by the shape, form or construction
    • H02K1/22Rotating parts of the magnetic circuit
    • H02K1/27Rotor cores with permanent magnets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Power Engineering (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)
  • Manufacturing & Machinery (AREA)
  • Image Analysis (AREA)

Abstract

An outer rotor motor and an intelligent production method thereof. A detection image of the filling powder is acquired by a camera; deep-learning-based artificial intelligence is then used to mine the hidden feature information in the detection image and to fully express the implicit distribution features of the filling powder's material properties, so that the sintering temperature and sintering time can be adaptively controlled according to the actual material properties of the magnetically conductive powder and the permanent-magnet powder. Over-sintering and under-sintering are thereby avoided, and the preparation quality of the outer rotor is improved.

Description

Outer rotor motor and intelligent production method thereof
Technical Field
The application relates to the technical field of intelligent production, and more particularly relates to an outer rotor motor and an intelligent production method thereof.
Background
The rotor of an outer rotor motor consists of an outer layer and an inner layer. The outer layer is usually made from steel plate, but that preparation process is complex and has limitations. To improve the rotational balance and stability of the motor, a permanent-magnet ring is usually bonded to the inner surface of the outer rotor layer to serve as the permanent-magnet layer; however, the purity and magnetic performance of bonded permanent magnets are low, which degrades the efficiency and performance of the motor. Highly magnetic sintered permanent magnets, in turn, are difficult to form into a fixed-shape ring, hard to fit closely to the rotor, and prone to cracking during assembly, which limits their application.
To address these problems, Chinese patent CN103545998A discloses an outer rotor motor rotor and a manufacturing method in which a powder-metallurgy process simultaneously sinters the layers filled with magnetically conductive powder and permanent-magnet powder at high temperature, forming a rotor with a gap-free transition between layers, thereby reducing magnetic reluctance and improving efficiency and stability. Because the magnetically conductive layer is formed by high-temperature sintering of magnetically conductive powder, its thickness can be set by controlling the amount of powder filled and adjusting the pressing die, which is less difficult and less restrictive than fabricating a steel-plate layer.
In practice, however, when manufacturing an outer rotor motor with the above scheme, the bond between the magnetically conductive powder and the permanent-magnet powder is found to be weak, reducing the mechanical strength and magnetic performance of the rotor. The reason is that during high-temperature sintering the sintering temperature and time are merely kept within a fixed nominal range and are not adaptively controlled according to the material properties and pressing density of the two powders, which leads to over-sintering or under-sintering, degrades rotor preparation quality, and wastes energy.
Accordingly, an optimized intelligent production scheme for an external rotor motor is desired.
Disclosure of Invention
The present application has been made to solve the above technical problems. Embodiments of the application provide an outer rotor motor and an intelligent production method thereof. A detection image of the filling powder is acquired by a camera; deep-learning-based artificial intelligence is then used to mine the hidden feature information in the detection image and to fully express the implicit distribution features of the filling powder's material properties, so that the sintering temperature and sintering time can be adaptively controlled according to the actual material properties of the magnetically conductive powder and the permanent-magnet powder. Over-sintering and under-sintering are thereby avoided, and the preparation quality of the outer rotor is improved.
In a first aspect, an intelligent production method of an external rotor motor is provided, which includes:
acquiring a detection image of the filling powder acquired by the camera;
passing the detection image of the filling powder through a first convolutional neural network model serving as a shallow extractor to extract a plurality of shallow feature maps from the layers of the first convolutional neural network model;
fusing the shallow feature maps to obtain a multi-scale shallow feature map;
passing the multi-scale shallow feature map through a second convolutional neural network model as a deep feature extractor to obtain a deep feature map from the last layer of the second convolutional neural network model;
fusing the multi-scale shallow feature map and the deep feature map to obtain a decoding feature map; and
passing the decoding feature map through a decoder to obtain decoded values representing the recommended sintering temperature and sintering time.
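As an illustration only, the full inference pipeline of the first aspect can be sketched in PyTorch as follows; the module class, the number of layers, the channel widths and the 224x224 input size are assumptions made for this sketch and are not specified in the application:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OuterRotorSinteringModel(nn.Module):
    """Shallow per-layer features -> multi-scale fusion -> deep features
    -> channel-wise deep/shallow fusion -> regression of (temperature, time)."""
    def __init__(self):
        super().__init__()
        # First CNN: three conv/pool/activation stages used as the shallow extractor.
        self.shallow_layers = nn.ModuleList([
            nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                          nn.MaxPool2d(2), nn.ReLU())
            for c_in, c_out in [(3, 16), (16, 32), (32, 64)]
        ])
        # Second CNN: deep feature extractor applied to the fused shallow map.
        self.deep = nn.Sequential(
            nn.Conv2d(16 + 32 + 64, 128, 3, padding=1), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(128, 128, 3, padding=1), nn.MaxPool2d(2), nn.ReLU(),
        )
        # Decoder: global pooling followed by a linear regression head.
        self.decoder = nn.Linear(16 + 32 + 64 + 128, 2)

    def forward(self, image):                        # image: (B, 3, H, W)
        maps, x = [], image
        for layer in self.shallow_layers:            # collect a map from every layer
            x = layer(x)
            maps.append(x)
        size = maps[-1].shape[-2:]                   # resample to a common scale
        multi_scale = torch.cat(
            [F.interpolate(m, size=size, mode='bilinear', align_corners=False)
             for m in maps], dim=1)                  # multi-scale shallow feature map
        deep = self.deep(multi_scale)                # deep feature map (last layer)
        deep_up = F.interpolate(deep, size=size, mode='bilinear', align_corners=False)
        decoding_map = torch.cat([multi_scale, deep_up], dim=1)  # decoding feature map
        pooled = F.adaptive_avg_pool2d(decoding_map, 1).flatten(1)
        return self.decoder(pooled)                  # (B, 2): temperature, time

# pred = OuterRotorSinteringModel()(torch.randn(1, 3, 224, 224))
```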
In the above intelligent production method of an outer rotor motor, passing the detection image of the filling powder through a first convolutional neural network model serving as a shallow extractor to extract a plurality of shallow feature maps from the layers of the first convolutional neural network model includes: applying, in the forward pass through each layer of the first convolutional neural network model serving as the shallow extractor, convolution processing, pooling processing and nonlinear activation processing to the input data, and extracting the plurality of shallow feature maps from those layers.
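For example, the per-layer shallow feature maps can be collected by running the forward pass stage by stage and keeping every intermediate output; the sketch below does this on the early stages of a torchvision ResNet-18, used purely as an assumed stand-in for the first convolutional neural network model:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ShallowTap(nn.Module):
    """Collects the output of each early stage (convolution -> pooling ->
    activation) so colour/texture-level features are not lost in deeper layers."""
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        # Treat the stem and the first two residual stages as the "shallow" layers.
        self.stages = nn.ModuleList([
            nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool),
            backbone.layer1,
            backbone.layer2,
        ])

    def forward(self, x):
        shallow_maps = []
        for stage in self.stages:       # forward pass, layer by layer
            x = stage(x)
            shallow_maps.append(x)      # keep every intermediate feature map
        return shallow_maps

# maps = ShallowTap()(torch.randn(1, 3, 224, 224))
# [m.shape for m in maps]  # (1,64,56,56), (1,64,56,56), (1,128,28,28)
```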
In the above intelligent production method of an outer rotor motor, passing the multi-scale shallow feature map through a second convolutional neural network model serving as a deep feature extractor to obtain a deep feature map from the last layer of the second convolutional neural network model includes: applying, in the forward pass through each layer of the second convolutional neural network model serving as the deep feature extractor, convolution processing, pooling processing and nonlinear activation processing to the input data, and taking the deep feature map from the last layer of the second convolutional neural network model.
In the above intelligent production method of an outer rotor motor, passing the decoding feature map through a decoder to obtain decoded values representing the recommended sintering temperature and sintering time includes: performing decoding regression on the decoding feature map with the decoder according to the decoding formula Y = W ⊗ F_d + B, where F_d is the decoding feature map, Y is the decoded value vector, W is a weight matrix, B is a bias vector, and ⊗ denotes matrix multiplication.
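A hedged sketch of such a decoder, read here as global average pooling followed by a fully connected layer implementing Y = W ⊗ F_d + B; the pooling step and the two-dimensional output are assumptions:

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Decoding regression: pools and flattens the decoding feature map F_d and
    applies Y = W * F_d + B, yielding (recommended temperature, recommended time)."""
    def __init__(self, in_channels: int, out_dim: int = 2):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # (B, C, H, W) -> (B, C, 1, 1)
        self.fc = nn.Linear(in_channels, out_dim)  # weight matrix W and bias B

    def forward(self, f_d: torch.Tensor) -> torch.Tensor:
        return self.fc(self.pool(f_d).flatten(1))  # decoded values Y

# y = Decoder(in_channels=240)(torch.randn(4, 240, 28, 28))   # y: (4, 2)
```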
The intelligent production method of the outer rotor motor further comprises a training step for training the first convolutional neural network model serving as the shallow extractor, the second convolutional neural network model serving as the deep feature extractor, and the decoder.
In the above intelligent production method of an outer rotor motor, the training step includes: acquiring training data, the training data including training detection images of the filling powder acquired by the camera together with the true values of the recommended sintering temperature and sintering time; passing the training detection image of the filling powder through the first convolutional neural network model serving as the shallow extractor to extract a plurality of training shallow feature maps from its layers; fusing the plurality of training shallow feature maps to obtain a training multi-scale shallow feature map; passing the training multi-scale shallow feature map through the second convolutional neural network model serving as the deep feature extractor to obtain a training deep feature map from its last layer; fusing the training multi-scale shallow feature map and the training deep feature map to obtain a training decoding feature map; performing feature redundancy optimization based on low-cost bottleneck-mechanism stacking on the training decoding feature map to obtain an optimized training decoding feature map; passing the optimized training decoding feature map through the decoder to obtain a decoding loss function value; and training the first convolutional neural network model, the second convolutional neural network model and the decoder based on the decoding loss function value, with the parameters updated along the direction of gradient descent.
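A minimal training-loop sketch matching these steps; the optimizer, learning rate, batch size and epoch count are chosen arbitrarily for illustration, and the feature-redundancy optimization module is assumed to be inserted inside `model` rather than shown separately:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train(model: nn.Module, images: torch.Tensor, targets: torch.Tensor,
          epochs: int = 50, lr: float = 1e-3) -> None:
    """targets holds the ground-truth (sintering temperature, sintering time)
    pairs; the decoding loss is the mean squared error of the decoded values."""
    loader = DataLoader(TensorDataset(images, targets), batch_size=16, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()                       # variance-style decoding loss
    for _ in range(epochs):
        for batch_images, batch_targets in loader:
            optimizer.zero_grad()
            decoded = model(batch_images)          # training decoded values
            loss = criterion(decoded, batch_targets)
            loss.backward()                        # gradient of the decoding loss
            optimizer.step()                       # descend along the gradient

# train(OuterRotorSinteringModel(), torch.randn(64, 3, 224, 224), torch.rand(64, 2))
```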
In the above intelligent production method of an outer rotor motor, performing feature redundancy optimization based on low-cost bottleneck-mechanism stacking on the training decoding feature map to obtain the optimized training decoding feature map includes: optimizing the training decoding feature map with the following optimization formula, in which two low-cost transform features are produced by stacked single-layer convolutions and then recombined with the bias feature maps in a position-wise multiply-add manner:

F_a = Cov(F), F_b = Cov(F_a)

wherein F is the training decoding feature map, Cov denotes a single-layer convolution operation, ⊕ denotes position-wise addition of feature maps, ⊖ denotes position-wise subtraction of feature maps, B_1 and B_2 are bias feature maps, and F′ is the optimized training decoding feature map.
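One possible PyTorch reading of this optimization is sketched below; the use of depthwise 3x3 convolutions as the low-cost transforms, the learnable per-channel bias maps, and the exact multiply-add combination with a residual path are interpretations of the description above, not a verified reproduction of the patented formula:

```python
import torch
import torch.nn as nn

class LowCostBottleneckOptimize(nn.Module):
    """Feature-redundancy optimization by stacking two low-cost (single-layer,
    depthwise) convolutions and recombining them with bias feature maps."""
    def __init__(self, channels: int):
        super().__init__()
        # Two cheap transforms F_a = Cov(F), F_b = Cov(F_a): depthwise 3x3 convs.
        self.cov1 = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.cov2 = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        # Bias feature maps B_1, B_2, broadcast per channel and learned.
        self.b1 = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.b2 = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        f_a = self.cov1(f)                    # first low-cost transform
        f_b = self.cov2(f_a)                  # second low-cost transform
        # Multiply-add stack of the two transforms with the bias maps,
        # plus a residual path back to the original decoding feature map.
        return (f_a + self.b1) * (f_b + self.b2) + f

# f_opt = LowCostBottleneckOptimize(240)(torch.randn(2, 240, 28, 28))
```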
In the above intelligent production method of an outer rotor motor, passing the optimized training decoding feature map through the decoder to obtain the decoding loss function value includes: performing decoding regression on the optimized training decoding feature map with the decoder according to the training decoding formula Y = W ⊗ X, where X is the optimized training decoding feature map, Y is the training decoded value, W is a weight matrix, and ⊗ denotes matrix multiplication; and calculating the variance between the training decoded value and the true values of the recommended sintering temperature and sintering time as the decoding loss function value.
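As a small worked illustration of the decoding loss, the "variance" between the decoded and true values is simply a mean squared error; the numeric values and units below are invented for the example:

```python
import torch

# Decoding loss: mean squared deviation between the decoded (temperature, time)
# pair and its ground truth. Values and units are assumed for illustration.
decoded = torch.tensor([[1245.0, 118.0]])   # assumed units: deg C, minutes
truth   = torch.tensor([[1250.0, 120.0]])
loss = ((decoded - truth) ** 2).mean()      # same as torch.nn.functional.mse_loss
print(loss)                                 # tensor(14.5000)
```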
In a second aspect, an external rotor motor is provided, and a rotor of the external rotor motor is manufactured by the intelligent production method of the external rotor motor.
Compared with the prior art, the outer rotor motor and the intelligent production method thereof provided by the application acquire a detection image of the filling powder with a camera, use deep-learning-based artificial intelligence to mine the hidden feature information in the detection image, and fully express the implicit distribution features of the filling powder's material properties, so that the sintering temperature and time are adaptively controlled according to the actual material properties of the magnetically conductive powder and the permanent-magnet powder. Over-sintering and under-sintering are thereby avoided, and the preparation quality of the outer rotor is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present application, and that other drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic view of a scenario of an intelligent production method of an external rotor motor according to an embodiment of the present application.
Fig. 2 is a flowchart of an intelligent production method of an external rotor motor according to an embodiment of the present application.
Fig. 3 is a schematic diagram of an intelligent production method of an external rotor motor according to an embodiment of the present application.
Fig. 4 is a flowchart of the sub-steps of step 170 in the intelligent production method of the external rotor motor according to the embodiment of the present application.
Fig. 5 is a block diagram of an intelligent production system for an external rotor motor according to an embodiment of the present application.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Unless defined otherwise, all technical and scientific terms used in the examples of this application have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application.
In the description of the embodiments of the present application, unless otherwise indicated and defined, the term "connected" should be construed broadly: it may refer to an electrical connection, to communication between two elements, to a direct connection, or to an indirect connection via an intermediary. Those skilled in the art will understand the specific meaning of the term according to the specific circumstances.
It should be noted that the terms "first", "second" and "third" in the embodiments of the present application merely distinguish similar objects and do not imply a specific order; where permitted, the objects so distinguished may be interchanged, so that the embodiments described herein can be implemented in sequences other than those illustrated or described.
As described above, in practical application of the outer rotor motor rotor and manufacturing method disclosed in Chinese patent CN103545998A, the bond between the magnetically conductive powder and the permanent-magnet powder is found to be weak, reducing the mechanical strength and magnetic performance of the rotor. The reason is that during high-temperature sintering the sintering temperature and time are merely kept within a fixed nominal range and are not adaptively controlled according to the material properties and pressing density of the two powders, which leads to over-sintering or under-sintering, degrades rotor preparation quality, and wastes energy. Accordingly, an optimized intelligent production scheme for the outer rotor motor is desired.
High-temperature sintering is an important process step in the actual intelligent production of the outer rotor motor rotor: it allows the magnetically conductive powder and the permanent-magnet powder to form a firm bond and improves the mechanical strength and magnetic performance of the rotor. In the actual sintering control process, the sintering temperature and time should be chosen according to the material properties and pressing density of the two powders so as to avoid over-sintering or under-sintering. In general, the higher the sintering temperature and the longer the sintering time, the better the bonding between the powders; however, grain growth may also occur and degrade the magnetic performance of the rotor. It is therefore necessary to keep the sintering temperature and time as low as possible while still guaranteeing the strength of the rotor.
In view of this, the technical solution of the present application seeks to control the actual sintering temperature and sintering time based on an analysis of the detection image of the filling powder, i.e. of the magnetically conductive powder and the permanent-magnet powder, acquired by the camera. However, the detection image contains a large amount of information, and the hidden features describing the material properties of the two powders are small in scale, i.e. they occupy only a small proportion of the image, which makes them difficult to capture and describe adequately and reduces the accuracy of sintering temperature and time control. The challenge in this process is therefore to fully express the implicit distribution features of the filling powder's material properties in the detection image, so that the sintering temperature and time can be adaptively controlled according to the actual material properties of the powders, over-sintering and under-sintering can be avoided, and the preparation quality of the outer rotor can be improved.
In recent years, deep learning and neural networks have been widely used in computer vision, natural language processing, text signal processing and other fields. Their development offers a new solution for mining the implicit distribution features of the filling powder's material properties in the detection image.
Specifically, in the technical solution of the present application, a detection image of the filling powder is first acquired by the camera; here the filling powder comprises the magnetically conductive powder and the permanent-magnet powder. Feature mining of the detection image is then performed with a convolutional neural network model, which performs excellently at extracting implicit image features. In particular, to capture the implicit performance features of the filling powder more accurately and thus control the subsequent sintering temperature and time precisely, more attention must be paid to shallow features such as the colour and texture of the filling powder, which are of great significance for detecting its properties. As a convolutional neural network encodes deeper, however, such shallow features become blurred and may even be buried in noise. The technical solution of the present application therefore combines deep feature extraction and shallow feature extraction of the image to capture the implicit performance features of the filling powder comprehensively.
More specifically, the detection image of the filling powder is passed through a first convolutional neural network model serving as a shallow extractor to extract a plurality of shallow feature maps from its layers, i.e. shallow performance feature information such as the colour and texture of the filling powder in the detection image; these shallow feature maps are then fused, merging the shallow features related to the powder's properties into a multi-scale shallow feature map. The multi-scale shallow feature map is further processed by a second convolutional neural network model serving as a deep feature extractor, and a deep feature map is taken from its last layer, i.e. deep semantic feature information about the filling powder's properties in the detection image. It should be appreciated that, compared with a standard convolutional neural network model, the model used in the present application retains both the shallow and the deep features of the filling powder, so that the feature information is richer and features of different depths are preserved, improving the accuracy of the powder-property detection.
The multi-scale shallow feature map and the deep feature map are then fused, merging the shallow associated features such as colour and texture with the deep semantic features of the detection image, to obtain a decoding feature map carrying the fused deep and shallow features of the filling powder. The decoding feature map is then subjected to decoding regression in a decoder to obtain decoded values representing the recommended sintering temperature and sintering time. In other words, the implicit performance features of the filling powder are decoded so that the sintering temperature and time are adaptively controlled, over-sintering and under-sintering are avoided, and the preparation quality of the outer rotor is improved.
In particular, in the technical solution of the present application, when the multi-scale shallow feature map and the deep feature map are fused to obtain the decoding feature map, the decoding feature map is preferably obtained by directly concatenating the two maps along the channel dimension, in order to make full use of the multi-scale shallow image semantics and the deep image semantics of the detection image. However, since the deep feature map is obtained by the second convolutional neural network model on the basis of the multi-scale shallow feature map, the correlation between the shallow and deep image semantics inevitably introduces redundant features into the decoding feature map, which lowers the efficiency of the decoding regression performed by the decoder, i.e. it slows the training of the model and degrades the accuracy of the decoded values obtained by the decoder.
Therefore, the applicant of the present application performs feature redundancy optimization based on low-cost bottleneck-mechanism stacking on the decoding feature map F to obtain an optimized decoding feature map F′. Two low-cost transform features are produced by stacked single-layer convolutions:

F_a = Cov(F), F_b = Cov(F_a)

and are then recombined with the bias feature maps in a position-wise multiply-add manner to yield F′, wherein Cov denotes a single-layer convolution operation, ⊕ and ⊖ denote position-wise addition and subtraction of feature maps, and B_1 and B_2 are bias feature maps, e.g. global-mean feature maps or unit feature maps, which may initially be set from the decoding feature map, the initial bias feature maps B_1 and B_2 being different from each other.
Here, the feature redundancy optimization based on low-cost bottleneck-mechanism stacking uses a low-cost bottleneck mechanism, the multiply-add stack of two low-cost transform features, to perform feature expansion, and matches the residual path by biasing the stacked channels with uniform values. Through a low-cost, low-computation transformation similar to a basic residual module, the hidden distribution information underlying the intrinsic features is revealed within the redundant features, and a more intrinsic feature expression is obtained with a simple and effective convolution architecture. This optimizes the redundant feature expression of the decoding feature map, improves the efficiency of the decoder's decoding regression, and improves both the training speed of the model and the accuracy of the decoded values obtained by the decoder. In this way the sintering temperature and time can be adaptively controlled according to the actual material properties of the magnetically conductive powder and the permanent-magnet powder, avoiding over-sintering and under-sintering, improving the preparation quality of the outer rotor, improving the rotational stability and service life of the motor, and reducing energy waste.
Fig. 1 is a schematic view of a scenario of an intelligent production method of an external rotor motor according to an embodiment of the present application. As shown in fig. 1, in this application scenario, first, a detection image of the filling powder acquired by the camera is acquired (e.g., C as illustrated in fig. 1); the acquired detection image of the filling powder is then input into a server (e.g., S as illustrated in fig. 1) that deploys an intelligent production algorithm of the external rotor motor, wherein the server is capable of processing the detection image of the filling powder based on the intelligent production algorithm of the external rotor motor to generate a decoded value representing the recommended sintering temperature and sintering time.
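By way of a purely hypothetical usage sketch, the server-side inference could look as follows, reusing the OuterRotorSinteringModel class sketched earlier; the file names, image size and RGB assumption are placeholders, not details taken from the application:

```python
import torch
from torchvision.io import read_image
from torchvision.transforms.functional import resize

# Hypothetical server-side inference; model class, weights file and image path
# are assumptions for illustration only.
model = OuterRotorSinteringModel()
model.load_state_dict(torch.load("sintering_model.pt"))
model.eval()

image = read_image("powder_inspection.png").float() / 255.0   # (3, H, W), RGB assumed
image = resize(image, [224, 224])
with torch.no_grad():
    temperature, time = model(image.unsqueeze(0))[0].tolist()
print(f"recommended sintering temperature: {temperature:.1f}, time: {time:.1f}")
```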
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
In one embodiment of the present application, fig. 2 is a flowchart of an intelligent production method of an external rotor motor according to an embodiment of the present application. As shown in fig. 2, the intelligent production method 100 of an outer rotor motor according to an embodiment of the present application includes: 110, acquiring a detection image of the filling powder acquired by the camera; 120, passing the detection image of the filling powder through a first convolutional neural network model serving as a shallow extractor to extract a plurality of shallow feature maps from the layers of the first convolutional neural network model; 130, fusing the plurality of shallow feature maps to obtain a multi-scale shallow feature map; 140, passing the multi-scale shallow feature map through a second convolutional neural network model serving as a deep feature extractor to obtain a deep feature map from the last layer of the second convolutional neural network model; 150, fusing the multi-scale shallow feature map and the deep feature map to obtain a decoding feature map; and 160, passing the decoding feature map through a decoder to obtain decoded values representing the recommended sintering temperature and sintering time.
Fig. 3 is a schematic diagram of an intelligent production method of an external rotor motor according to an embodiment of the present application. As shown in fig. 3, in this network architecture, a detection image of the filling powder acquired by the camera is first obtained; the detection image is then passed through a first convolutional neural network model serving as a shallow extractor to extract a plurality of shallow feature maps from each of its layers; the shallow feature maps are fused to obtain a multi-scale shallow feature map; the multi-scale shallow feature map is passed through a second convolutional neural network model serving as a deep feature extractor to obtain a deep feature map from its last layer; the multi-scale shallow feature map and the deep feature map are fused to obtain a decoding feature map; and finally the decoding feature map is passed through a decoder to obtain decoded values representing the recommended sintering temperature and sintering time.
Specifically, in step 110, a detection image of the filling powder acquired by the camera is obtained. As described above, in practical application of the outer rotor motor rotor and manufacturing method disclosed in Chinese patent CN103545998A, the bond between the magnetically conductive powder and the permanent-magnet powder is found to be weak, reducing the mechanical strength and magnetic performance of the rotor. The reason is that during high-temperature sintering the sintering temperature and time are merely kept within a fixed nominal range and are not adaptively controlled according to the material properties and pressing density of the two powders, which leads to over-sintering or under-sintering, degrades rotor preparation quality, and wastes energy. Accordingly, an optimized intelligent production scheme for the outer rotor motor is desired.
High-temperature sintering is an important process step in the actual intelligent production of the outer rotor motor rotor: it allows the magnetically conductive powder and the permanent-magnet powder to form a firm bond and improves the mechanical strength and magnetic performance of the rotor. In the actual sintering control process, the sintering temperature and time should be chosen according to the material properties and pressing density of the two powders so as to avoid over-sintering or under-sintering. In general, the higher the sintering temperature and the longer the sintering time, the better the bonding between the powders; however, grain growth may also occur and degrade the magnetic performance of the rotor. It is therefore necessary to keep the sintering temperature and time as low as possible while still guaranteeing the strength of the rotor.
In view of this, the technical solution of the present application seeks to control the actual sintering temperature and sintering time based on an analysis of the detection image of the filling powder, i.e. of the magnetically conductive powder and the permanent-magnet powder, acquired by the camera. However, the detection image contains a large amount of information, and the hidden features describing the material properties of the two powders are small in scale, i.e. they occupy only a small proportion of the image, which makes them difficult to capture and describe adequately and reduces the accuracy of sintering temperature and time control. The challenge in this process is therefore to fully express the implicit distribution features of the filling powder's material properties in the detection image, so that the sintering temperature and time can be adaptively controlled according to the actual material properties of the powders, over-sintering and under-sintering can be avoided, and the preparation quality of the outer rotor can be improved.
In recent years, deep learning and neural networks have been widely used in computer vision, natural language processing, text signal processing and other fields. Their development offers a new solution for mining the implicit distribution features of the filling powder's material properties in the detection image.
Specifically, in the technical solution of the present application, a detection image of the filling powder is first acquired by the camera; here the filling powder comprises the magnetically conductive powder and the permanent-magnet powder.
Specifically, in steps 120 and 130, the detection image of the filling powder is passed through a first convolutional neural network model serving as a shallow extractor to extract a plurality of shallow feature maps from each of its layers, and the shallow feature maps are fused to obtain a multi-scale shallow feature map. Feature mining of the detection image is thus performed with a convolutional neural network model, which performs excellently at extracting implicit image features.
In particular, to capture the implicit performance features of the filling powder more accurately when extracting the hidden features of the detection image, and thus to control the subsequent sintering temperature and time precisely, more attention must be paid to shallow features such as the colour and texture of the filling powder, which are of great significance for detecting its properties. As a convolutional neural network encodes deeper, however, such shallow features become blurred and may even be buried in noise. The technical solution of the present application therefore combines deep feature extraction and shallow feature extraction of the image to capture the implicit performance features of the filling powder comprehensively.
More specifically, the detection image of the filling powder is passed through the first convolutional neural network model serving as a shallow extractor to extract a plurality of shallow feature maps from its layers, i.e. shallow performance feature information such as the colour and texture of the filling powder in the detection image; these shallow feature maps are then fused, merging the shallow features related to the powder's properties into a multi-scale shallow feature map.
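Because the per-layer maps differ in resolution, one hedged way to fuse them into the multi-scale shallow feature map is to resample them to a common size and concatenate them along the channel dimension; the bilinear resampling and target size are assumptions, since the application does not fix the fusion operator:

```python
import torch
import torch.nn.functional as F

def fuse_multiscale(shallow_maps: list[torch.Tensor]) -> torch.Tensor:
    """Resample every shallow feature map to the size of the smallest (deepest)
    one and stack them along the channel axis to form the multi-scale map."""
    target = shallow_maps[-1].shape[-2:]
    resampled = [F.interpolate(m, size=target, mode='bilinear', align_corners=False)
                 for m in shallow_maps]
    return torch.cat(resampled, dim=1)

# maps = [torch.randn(1, 16, 112, 112), torch.randn(1, 32, 56, 56), torch.randn(1, 64, 28, 28)]
# fuse_multiscale(maps).shape    # torch.Size([1, 112, 28, 28])
```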
Passing the detection image of the filling powder through the first convolutional neural network model serving as a shallow extractor to extract the plurality of shallow feature maps from its layers includes: applying, in the forward pass through each layer of the first convolutional neural network model serving as the shallow extractor, convolution processing, pooling processing and nonlinear activation processing to the input data, and extracting the plurality of shallow feature maps from those layers.
Specifically, in step 140, the multi-scale shallow feature map is passed through a second convolutional neural network model serving as a deep feature extractor, and a deep feature map is obtained from the last layer of that model, i.e. deep semantic feature information about the filling powder's properties in the detection image.
It should be appreciated that, compared with a standard convolutional neural network model, the model used in the present application retains both the shallow and the deep features of the filling powder, so that the feature information is richer and features of different depths are preserved, improving the accuracy of the powder-property detection.
Further, passing the multi-scale shallow feature map through the second convolutional neural network model serving as a deep feature extractor to obtain the deep feature map from its last layer includes: applying, in the forward pass through each layer of the second convolutional neural network model serving as the deep feature extractor, convolution processing, pooling processing and nonlinear activation processing to the input data, and taking the deep feature map from the last layer of the second convolutional neural network model.
A convolutional neural network (Convolutional Neural Network, CNN) is an artificial neural network that is widely used in image recognition and related fields. A convolutional neural network may include an input layer, hidden layers and an output layer, where the hidden layers may include convolutional layers, pooling layers, activation layers, fully connected layers and so on; each layer performs its operation on the data received from the previous layer and passes the result to the next layer, and the final result is obtained after the input data has passed through this multi-layer computation.
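A minimal generic example of this layout, with arbitrary layer sizes chosen only to make the sketch concrete:

```python
import torch.nn as nn

# Generic layout described above: input -> (convolution, pooling, activation)
# hidden stages -> fully connected output layer. Layer sizes are assumptions.
generic_cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.MaxPool2d(2), nn.ReLU(),   # hidden stage 1
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.MaxPool2d(2), nn.ReLU(),  # hidden stage 2
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 2),   # output layer, assuming a 224x224 input image
)
```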
Using convolution kernels as feature filters, the convolutional neural network model performs excellently at extracting local image features, and has stronger feature-extraction generalization and fitting capability than traditional image feature extraction algorithms based on statistics or feature engineering.
It should be understood that, compared with a standard convolutional neural network model, the convolutional neural network model of the present application can retain both shallow and deep features of the filling powder, so that the feature information is richer and features of different depths are preserved, improving the accuracy of the detection result. At the same time, a deep neural network has a complex structure, needs a large amount of sample data for training and tuning, takes a long time to train, and is prone to overfitting. Therefore, neural network models are usually designed as a combination of a shallow network and a deep network; through depth feature fusion, the complexity of the network and the risk of overfitting can be reduced to a certain extent while the feature extraction capability and generalization capability of the model are improved.
Specifically, in step 150, the multi-scale shallow feature map and the deep feature map are fused to obtain the decoding feature map, merging the shallow associated features such as colour and texture with the deep semantic features of the detection image, so that the decoding feature map carries the fused deep and shallow features of the filling powder.
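As noted in the disclosure, this fusion can be a direct concatenation along the channel dimension; a hedged sketch follows, where upsampling the deep map back to the shallow resolution is an assumption made so the two maps can be concatenated:

```python
import torch
import torch.nn.functional as F

def fuse_deep_shallow(multi_scale: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
    """Concatenate the multi-scale shallow map and the deep map along the
    channel dimension to obtain the decoding feature map."""
    deep_up = F.interpolate(deep, size=multi_scale.shape[-2:],
                            mode='bilinear', align_corners=False)
    return torch.cat([multi_scale, deep_up], dim=1)

# fuse_deep_shallow(torch.randn(1, 112, 28, 28), torch.randn(1, 128, 7, 7)).shape
# -> torch.Size([1, 240, 28, 28])
```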
Specifically, in step 160, the decoding feature map is passed through a decoder to obtain decoded values representing the recommended sintering temperature and sintering time; that is, decoding regression is performed on the decoding feature map to decode the implicit performance features of the filling powder, so that the sintering temperature and time are adaptively controlled, over-sintering and under-sintering are avoided, and the preparation quality of the outer rotor is improved.
Passing the decoding feature map through the decoder to obtain the decoded values representing the recommended sintering temperature and sintering time includes: performing decoding regression on the decoding feature map with the decoder according to the decoding formula Y = W ⊗ F_d + B, where F_d is the decoding feature map, Y is the decoded value vector, W is a weight matrix, B is a bias vector, and ⊗ denotes matrix multiplication.
Further, the intelligent production method of the outer rotor motor also includes a training step for training the first convolutional neural network model serving as the shallow extractor, the second convolutional neural network model serving as the deep feature extractor, and the decoder. Fig. 4 is a flowchart of the sub-steps of step 170 in the intelligent production method of the external rotor motor according to the embodiment of the present application. As shown in fig. 4, the training step 170 includes: 171, acquiring training data, the training data including training detection images of the filling powder acquired by the camera together with the true values of the recommended sintering temperature and sintering time; 172, passing the training detection image of the filling powder through the first convolutional neural network model serving as the shallow extractor to extract a plurality of training shallow feature maps from its layers; 173, fusing the plurality of training shallow feature maps to obtain a training multi-scale shallow feature map; 174, passing the training multi-scale shallow feature map through the second convolutional neural network model serving as the deep feature extractor to obtain a training deep feature map from its last layer; 175, fusing the training multi-scale shallow feature map and the training deep feature map to obtain a training decoding feature map; 176, performing feature redundancy optimization based on low-cost bottleneck-mechanism stacking on the training decoding feature map to obtain an optimized training decoding feature map; 177, passing the optimized training decoding feature map through the decoder to obtain a decoding loss function value; and 178, training the first convolutional neural network model, the second convolutional neural network model and the decoder based on the decoding loss function value, with the parameters updated along the direction of gradient descent.
In particular, in the technical solution of the present application, when the multi-scale shallow feature map and the deep feature map are fused to obtain the decoding feature map, the decoding feature map is preferably obtained by directly concatenating the two maps along the channel dimension, in order to make full use of the multi-scale shallow image semantics and the deep image semantics of the detection image. However, since the deep feature map is obtained by the second convolutional neural network model on the basis of the multi-scale shallow feature map, the correlation between the shallow and deep image semantics inevitably introduces redundant features into the decoding feature map, which lowers the efficiency of the decoding regression performed by the decoder, i.e. it slows the training of the model and degrades the accuracy of the decoded values obtained by the decoder.
Therefore, the applicant of the present application performs feature redundancy optimization based on low-cost bottleneck-mechanism stacking on the training decoding feature map F to obtain the optimized training decoding feature map F′, specifically by optimizing the training decoding feature map with the following optimization formula, in which two low-cost transform features are produced by stacked single-layer convolutions and then recombined with the bias feature maps in a position-wise multiply-add manner:

F_a = Cov(F), F_b = Cov(F_a)

wherein F is the training decoding feature map, Cov denotes a single-layer convolution operation, ⊕ denotes position-wise addition of feature maps, ⊖ denotes position-wise subtraction of feature maps, B_1 and B_2 are bias feature maps, and F′ is the optimized training decoding feature map.
Here, the feature redundancy optimization based on low-cost bottleneck-mechanism stacking uses a low-cost bottleneck mechanism, the multiply-add stack of two low-cost transform features, to perform feature expansion, and matches the residual path by biasing the stacked channels with uniform values. Through a low-cost, low-computation transformation similar to a basic residual module, the hidden distribution information underlying the intrinsic features is revealed within the redundant features, and a more intrinsic feature expression is obtained with a simple and effective convolution architecture. This optimizes the redundant feature expression of the decoding feature map, improves the efficiency of the decoder's decoding regression, and improves both the training speed of the model and the accuracy of the decoded values obtained by the decoder. In this way the sintering temperature and time can be adaptively controlled according to the actual material properties of the magnetically conductive powder and the permanent-magnet powder, avoiding over-sintering and under-sintering, improving the preparation quality of the outer rotor, improving the rotational stability and service life of the motor, and reducing energy waste.
Passing the optimized training decoding feature map through the decoder to obtain the decoding loss function value includes: performing decoding regression on the optimized training decoding feature map with the decoder according to the training decoding formula Y = W ⊗ X, where X is the optimized training decoding feature map, Y is the training decoded value, W is a weight matrix, and ⊗ denotes matrix multiplication; and calculating the variance between the training decoded value and the true values of the recommended sintering temperature and sintering time as the decoding loss function value.
In summary, the intelligent production method 100 of an outer rotor motor according to the embodiment of the present application has been described. It acquires a detection image of the filling powder with a camera, uses deep-learning-based artificial intelligence to mine the hidden feature information in the detection image, and fully expresses the implicit distribution features of the filling powder's material properties, so that the sintering temperature and time are adaptively controlled according to the actual material properties of the magnetically conductive powder and the permanent-magnet powder. Over-sintering and under-sintering are thereby avoided, and the preparation quality of the outer rotor is improved.
In one embodiment of the present application, there is also provided an external rotor motor, the rotor of which is manufactured by the intelligent production method of the external rotor motor.
In one embodiment of the present application, fig. 5 is a block diagram of an intelligent production system of an external rotor motor according to an embodiment of the present application. As shown in fig. 5, the intelligent production system 200 of an outer rotor motor according to an embodiment of the present application includes: an image acquisition module 210 for acquiring a detection image of the filling powder acquired by the camera; a first feature extraction module 220 for passing the detection image of the filling powder through a first convolutional neural network model serving as a shallow extractor to extract a plurality of shallow feature maps from each of its layers; a multi-scale feature map generating module 230 for fusing the plurality of shallow feature maps to obtain a multi-scale shallow feature map; a second feature extraction module 240 for passing the multi-scale shallow feature map through a second convolutional neural network model serving as a deep feature extractor to obtain a deep feature map from its last layer; a fusion module 250 for fusing the multi-scale shallow feature map and the deep feature map to obtain a decoding feature map; and a decoding module 260 for passing the decoding feature map through a decoder to obtain decoded values representing the recommended sintering temperature and sintering time.
In a specific example of the above intelligent production system of an outer rotor motor, the first feature extraction module is configured to apply, in the forward pass through each layer of the first convolutional neural network model serving as the shallow extractor, convolution processing, pooling processing and nonlinear activation processing to the input data, and to extract the plurality of shallow feature maps from those layers.
In a specific example, in the intelligent production system of the outer rotor motor, the second feature extraction module is configured to: perform convolution processing, pooling processing and nonlinear activation processing on the multi-scale shallow feature map in the forward pass of each layer of the second convolutional neural network model serving as the deep feature extractor, so as to extract the deep feature map from the last layer of the second convolutional neural network model serving as the deep feature extractor.
In a specific example, in the intelligent production system of the outer rotor motor, the decoding module is configured to: perform decoding regression on the decoding feature map with a decoding formula using the decoder to obtain the decoded value; wherein the decoding formula is Y = W ⊗ F_d + B, where F_d represents the decoding feature map, Y represents the decoded value, W represents the weight matrix, B represents the bias vector, and ⊗ denotes matrix multiplication.
In a specific example, in the intelligent production system of the outer rotor motor, the intelligent production system further comprises a training module for training the first convolutional neural network model serving as the shallow layer extractor, the second convolutional neural network model serving as the deep layer feature extractor and the decoder.
In a specific example, in the intelligent production system of the outer rotor motor, the training module includes: a training image acquisition unit configured to acquire training data, the training data including a training detection image of the filling powder acquired by the camera and true values of the recommended sintering temperature and sintering time; a training first feature extraction unit configured to pass the training detection image of the filling powder through the first convolutional neural network model serving as the shallow extractor to extract a plurality of training shallow feature maps from each layer of the first convolutional neural network model; a training multi-scale feature map generation unit configured to fuse the plurality of training shallow feature maps to obtain a training multi-scale shallow feature map; a training second feature extraction unit configured to pass the training multi-scale shallow feature map through the second convolutional neural network model serving as the deep feature extractor to obtain a training deep feature map from the last layer of the second convolutional neural network model; a training fusion unit configured to fuse the training multi-scale shallow feature map and the training deep feature map to obtain a training decoding feature map; a training optimization unit configured to perform feature redundancy optimization based on low-cost bottleneck mechanism stacking on the training decoding feature map to obtain an optimized training decoding feature map; a decoding loss function value calculation unit configured to pass the optimized training decoding feature map through the decoder to obtain a decoding loss function value; and a training unit configured to train, based on the decoding loss function value and with parameters updated in the direction of gradient descent, the first convolutional neural network model serving as the shallow extractor, the second convolutional neural network model serving as the deep feature extractor, and the decoder.
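Abstracting away the concrete networks, one training iteration of the units listed above can be sketched as follows. This is a schematic under the assumption that the shallow extractor, the two fusion steps, the redundancy-optimization block and the decoder are PyTorch modules or functions with the interfaces shown; all names are placeholders, and the mean-squared-error loss is one reading of the "variance" criterion.

```python
import torch
import torch.nn.functional as F

def train_step(batch, shallow, fuse_shallow, deep, fuse_decode, optimize_redundancy,
               decoder, optimizer):
    """One iteration covering the training units: extract, fuse, optimize, decode, update."""
    image, target = batch                        # training detection image, true temp/time
    shallow_maps = shallow(image)                # training shallow feature maps
    ms = fuse_shallow(shallow_maps)              # training multi-scale shallow feature map
    deep_map = deep(ms)                          # training deep feature map
    dec_map = fuse_decode(ms, deep_map)          # training decoding feature map
    dec_map = optimize_redundancy(dec_map)       # low-cost bottleneck mechanism stacking
    pred = decoder(dec_map)                      # training decoding values
    loss = F.mse_loss(pred, target)              # decoding loss value vs. ground truth

    optimizer.zero_grad()
    loss.backward()                              # gradients for all three trainable parts
    optimizer.step()                             # parameters move in the gradient-descent direction
    return loss.item()
```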
In a specific example, in the intelligent production system of the outer rotor motor, the training optimization unit is configured to: perform feature redundancy optimization based on low-cost bottleneck mechanism stacking on the training decoding feature map using the following optimization formula to obtain the optimized training decoding feature map;
wherein, the optimization formula is:
F_b = Cov(F_a)
wherein F is the training decoding feature map, Cov denotes a single-layer convolution operation, ⊕ denotes position-wise addition of feature maps, ⊖ denotes position-wise subtraction of feature maps, F_a and F_b are intermediate feature maps, B_1 and B_2 are bias feature maps, and F′ is the optimized training decoding feature map.
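The published text of the optimization formula is only partially legible, so the following sketch should be read as one plausible interpretation of "feature redundancy optimization based on low-cost bottleneck mechanism stacking": cheap single-layer convolutions generate the intermediate maps F_a and F_b from the decoding feature map, and the results are recombined with position-wise addition/subtraction and learned bias feature maps B_1 and B_2. The exact recombination used in the patent may differ; everything below is an assumption.

```python
import torch
import torch.nn as nn

class LowCostBottleneckOptimizer(nn.Module):
    """Hypothetical reading of low-cost bottleneck mechanism stacking:
    two cheap single-layer convolutions (Cov) produce F_a and F_b, which are
    recombined with the input via position-wise addition/subtraction and
    learned bias feature maps B_1, B_2. Illustrative guess, not the patent's
    exact formula."""
    def __init__(self, channels: int, spatial: int):
        super().__init__()
        self.cov1 = nn.Conv2d(channels, channels, 3, padding=1)   # F_a = Cov(F)
        self.cov2 = nn.Conv2d(channels, channels, 3, padding=1)   # F_b = Cov(F_a)
        self.b1 = nn.Parameter(torch.zeros(1, channels, spatial, spatial))  # bias feature map B_1
        self.b2 = nn.Parameter(torch.zeros(1, channels, spatial, spatial))  # bias feature map B_2

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        f_a = self.cov1(f) + self.b1      # position-wise addition of a bias feature map
        f_b = self.cov2(f_a)              # F_b = Cov(F_a), as stated in the formula
        return (f_b - f) + self.b2        # position-wise subtraction/addition recombination

# Hypothetical usage on an 8x8 decoding feature map with 240 channels.
opt = LowCostBottleneckOptimizer(channels=240, spatial=8)
print(opt(torch.randn(2, 240, 8, 8)).shape)   # torch.Size([2, 240, 8, 8])
```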
In a specific example, in the above-described intelligent production system of the outer rotor motor, the decoding loss function value calculation unit includes: a training decoding subunit configured to perform decoding regression on the optimized training decoding feature map with a training decoding formula using the decoder to obtain training decoding values, wherein the training decoding formula is Y = W ⊗ X, where X is the optimized training decoding feature map, Y is the training decoding value, W is a weight matrix, and ⊗ denotes matrix multiplication; and a calculation subunit configured to calculate, as the decoding loss function value, the variance between the training decoding values and the true values of the recommended sintering temperature and sintering time.
Here, it will be understood by those skilled in the art that the specific functions and operations of the respective units and modules in the above-described intelligent production system of the external rotor motor have been described in detail in the above description of the intelligent production method of the external rotor motor with reference to fig. 1 to 4, and thus, repetitive descriptions thereof will be omitted.
As described above, the intelligent production system 200 of an external rotor motor according to the embodiment of the present application can be implemented in various terminal devices, such as a server for intelligent production of an external rotor motor, and the like. In one example, the intelligent production system 200 of an external rotor motor according to an embodiment of the present application may be integrated into the terminal device as one software module and/or hardware module. For example, the intelligent production system 200 of the external rotor motor may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the intelligent production system 200 of the external rotor motor can also be one of a plurality of hardware modules of the terminal device.
Alternatively, in another example, the intelligent production system 200 of the external rotor motor and the terminal device may be separate devices, and the intelligent production system 200 of the external rotor motor may be connected to the terminal device through a wired and/or wireless network and exchange interaction information in an agreed data format.
The present application also provides a computer program product comprising instructions which, when executed, cause an apparatus to perform operations corresponding to the above-described methods.
In one embodiment of the present application, there is also provided a computer readable storage medium storing a computer program for executing the above-described method.
It should be appreciated that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.
Methods, systems, and computer program products of embodiments of the present application are described in terms of flow diagrams and/or block diagrams. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The basic principles of the present application have been described above in connection with specific embodiments. However, it should be noted that the advantages, benefits, effects, and the like mentioned in the present application are merely examples and are not limiting; these advantages, benefits, and effects are not to be considered as necessarily possessed by every embodiment of the present application. Furthermore, the specific details disclosed herein are provided only for purposes of illustration and ease of understanding, and are not intended to be limiting; the application is not required to be implemented with the specific details described above.
The block diagrams of the devices, apparatuses, equipment, and systems referred to in this application are merely illustrative examples and are not intended to require or imply that the connections, arrangements, or configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, equipment, and systems may be connected, arranged, or configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including but not limited to" and may be used interchangeably therewith. The terms "or" and "and" as used herein refer to, and are used interchangeably with, the term "and/or" unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as but not limited to."
It is also noted that in the apparatus, devices, and methods of the present application, the components or steps may be decomposed and/or recombined. Such decomposition and/or recombination should be regarded as equivalent solutions of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Finally, it is further noted that relational terms such as first and second are used herein solely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between such entities or operations. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or terminal device comprising that element.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (6)

1. An intelligent production method of an external rotor motor is characterized by comprising the following steps:
acquiring a detection image of the filling powder acquired by the camera;
passing the powder-filled detection image through a first convolutional neural network model as a shallow extractor to extract a plurality of shallow feature maps from layers of the first convolutional neural network model;
fusing the shallow feature maps to obtain a multi-scale shallow feature map;
passing the multi-scale shallow feature map through a second convolutional neural network model as a deep feature extractor to obtain a deep feature map from the last layer of the second convolutional neural network model;
fusing the multi-scale shallow layer feature map and the deep layer feature map to obtain a decoding feature map; and
passing the decoding feature map through a decoder to obtain decoded values, the decoded values being used to represent a recommended sintering temperature and sintering time;
wherein passing the powder-filled detection image through a first convolutional neural network model as a shallow extractor to extract a plurality of shallow feature maps from layers of the first convolutional neural network model, comprising: performing convolution processing, pooling processing and nonlinear activation processing on the powder-filled detection image in forward transfer of layers using the layers of the first convolutional neural network model as a shallow extractor to extract the plurality of shallow feature maps from the layers of the first convolutional neural network model as a shallow extractor, respectively;
Wherein passing the multi-scale shallow feature map through a second convolutional neural network model as a deep feature extractor to obtain a deep feature map from a last layer of the second convolutional neural network model, comprising: performing convolution processing, pooling processing and nonlinear activation processing on the multi-scale shallow layer feature map in forward transfer of layers by using each layer of the second convolutional neural network model serving as a deep layer feature extractor to extract the deep layer feature map from the last layer of the second convolutional neural network model serving as a deep layer feature extractor;
wherein the decoding feature map is passed through a decoder to obtain decoding values, the decoding values being used to represent recommended sintering temperatures and sintering times, comprising:
performing decoding regression on the decoding feature map with a decoding formula using the decoder to obtain the decoded value;
wherein, the decoding formula is Y = W ⊗ F_d + B, where F_d represents the decoding feature map, Y represents the decoded value, W represents the weight matrix, B represents the bias vector, and ⊗ denotes matrix multiplication.
2. The intelligent production method of an external rotor motor according to claim 1, further comprising a training step for training the first convolutional neural network model as a shallow extractor, the second convolutional neural network model as a deep feature extractor, and the decoder.
3. The intelligent production method of an external rotor motor according to claim 2, wherein the training step includes:
acquiring training data, wherein the training data comprises a training detection image of the filling powder acquired by the camera and true values of the recommended sintering temperature and sintering time;
passing the powder-filled training test image through the first convolutional neural network model as a shallow extractor to extract a plurality of training shallow feature maps from layers of the first convolutional neural network model;
fusing the training shallow feature maps to obtain a training multi-scale shallow feature map;
passing the training multi-scale shallow feature map through the second convolutional neural network model as a deep feature extractor to obtain a training deep feature map from the last layer of the second convolutional neural network model;
fusing the training multi-scale shallow feature map and the training deep feature map to obtain a training decoding feature map;
performing feature redundancy optimization based on low-cost bottleneck mechanism stacking on the training decoding feature map to obtain an optimized training decoding feature map;
passing the optimized training decoding feature map through the decoder to obtain a decoding loss function value; and
training the first convolutional neural network model as a shallow extractor, the second convolutional neural network model as a deep feature extractor, and the decoder based on the decoding loss function value and with parameters updated in the direction of gradient descent.
4. The intelligent production method of an external rotor motor according to claim 3, wherein performing feature redundancy optimization based on low-cost bottleneck-mechanism stacking on the training decoding feature map to obtain an optimized training decoding feature map comprises:
performing feature redundancy optimization on the training decoding feature map based on low-cost bottleneck mechanism stacking by using the following optimization formula to obtain the optimized training decoding feature map;
wherein, the optimization formula is:
F_b = Cov(F_a)
wherein F is the training decoding feature map, Cov denotes a single-layer convolution operation, ⊕ denotes position-wise addition of feature maps, ⊖ denotes position-wise subtraction of feature maps, F_a and F_b are intermediate feature maps, B_1 and B_2 are bias feature maps, and F′ is the optimized training decoding feature map.
5. The intelligent production method of an external rotor motor according to claim 4, wherein passing the optimized training decoding feature map through the decoder to obtain a decoding loss function value comprises: performing decoding regression on the optimized training decoding feature map with a training decoding formula using the decoder to obtain training decoding values, wherein the training decoding formula is Y = W ⊗ X, where X is the optimized training decoding feature map, Y is the training decoding value, W is a weight matrix, and ⊗ denotes matrix multiplication; and
calculating a variance between the training decoding values and the true values of the recommended sintering temperature and sintering time as the decoding loss function value.
6. An external rotor motor, characterized in that the rotor of the external rotor motor is manufactured by the intelligent production method of the external rotor motor according to any one of claims 1 to 5.
CN202310421226.8A 2023-04-19 2023-04-19 Outer rotor motor and intelligent production method thereof Active CN117155041B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310421226.8A CN117155041B (en) 2023-04-19 2023-04-19 Outer rotor motor and intelligent production method thereof


Publications (2)

Publication Number Publication Date
CN117155041A CN117155041A (en) 2023-12-01
CN117155041B true CN117155041B (en) 2024-02-20

Family

ID=88885543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310421226.8A Active CN117155041B (en) 2023-04-19 2023-04-19 Outer rotor motor and intelligent production method thereof

Country Status (1)

Country Link
CN (1) CN117155041B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008031177A1 (en) * 2006-09-11 2008-03-20 Gerdau Açominas S/A Process using artificial neural network for predictive control in sinter machine
WO2020042662A1 (en) * 2018-08-31 2020-03-05 北京金风科创风电设备有限公司 Wind power generator set, electromagnetic device, and heat exchange or drying device for iron core
CN111950191A (en) * 2020-07-07 2020-11-17 湖南大学 Rotary kiln sintering temperature prediction method based on hybrid deep neural network
KR20220001728A (en) * 2020-06-30 2022-01-06 한국전력공사 Rotor bending correction method using high frequency heat and rotor bending correction apparatus using the same
CN114169640A (en) * 2021-12-27 2022-03-11 中南大学 Method and system for predicting moisture of returned powder of cooling cylinder in sintering process

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7172812B2 (en) * 2019-04-08 2022-11-16 株式会社デンソー LAMINATED CORE, ROTATING ELECTRICAL MACHINE, AND LAMINATED CORE MANUFACTURING METHOD


Also Published As

Publication number Publication date
CN117155041A (en) 2023-12-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant