CN117532885A - Intelligent auxiliary system, method and storage medium for 3D printing - Google Patents


Info

Publication number
CN117532885A
Authority
CN
China
Prior art keywords
memristor, representing, hierarchical, mechanical arm, printing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410037425.3A
Other languages
Chinese (zh)
Inventor
门正兴 (Men Zhengxing)
王阳合 (Wang Yanghe)
高曦 (Gao Xi)
王莲莲 (Wang Lianlian)
白晶斐 (Bai Jingfei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Aeronautic Polytechnic
Original Assignee
Chengdu Aeronautic Polytechnic
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Aeronautic Polytechnic filed Critical Chengdu Aeronautic Polytechnic
Priority to CN202410037425.3A priority Critical patent/CN117532885A/en
Publication of CN117532885A publication Critical patent/CN117532885A/en
Pending legal-status Critical Current


Classifications

    • G06N3/063: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means (G06N3/00 Computing arrangements based on biological models; G06N3/02 Neural networks)
    • G06N3/08: Learning methods (G06N3/02 Neural networks)
    • B29C64/386: Data acquisition or data processing for additive manufacturing (B29C64/00 Additive manufacturing; B29C64/30 Auxiliary operations or equipment)
    • B29C64/393: Data acquisition or data processing for controlling or regulating additive manufacturing processes
    • B33Y50/00: Data acquisition or data processing for additive manufacturing
    • B33Y50/02: Data acquisition or data processing for controlling or regulating additive manufacturing processes

Abstract

The embodiments of the application disclose a 3D printing intelligent auxiliary system, method and storage medium. The system includes a multiprocessor system-on-chip connected to a hierarchical extreme learning machine (HELM) memristor hardware network. The multiprocessor system-on-chip acquires real-time picture data of the 3D printing mechanical arm; initializes and trains, based on that data, a HELM memristor hardware network model optimized for the memristor hardware network, obtaining training label data; performs randomized weight distribution based on the training label data; and sends the resulting weight distribution data to the HELM memristor hardware network. A parameter adjustment decision for the mechanical arm is then obtained from the output of the network model, thereby assisting control of the mechanical arm during 3D printing. This solves the problems of high computational power consumption and large delay in the prior art.

Description

Intelligent auxiliary system, method and storage medium for 3D printing
Technical Field
The application relates to the technical field of artificial-intelligence-assisted 3D printing, and in particular to a 3D printing intelligent auxiliary system, method and storage medium.
Background
In the prior art, machine learning algorithms for assisting 3D printing, such as neural networks, all run on von Neumann architectures such as a CPU or GPU, in which computation and storage are separate functions performed by the processor and the memory respectively. As technology has developed, the speed and capacity of CPUs and memories have improved rapidly, but the speed of the bus carrying data and instructions remains limited, so frequent data transfer between CPU and memory becomes an information-processing bottleneck, the von Neumann bottleneck. Moreover, memory access speed cannot keep up with the CPU's data-processing speed, and the gap keeps widening, causing the "memory wall" phenomenon: CPU performance is severely limited by memory performance.
Therefore, the existing 3D printing intelligent auxiliary systems built on the von Neumann architecture suffer from high computational power consumption and large delay.
Disclosure of Invention
An object of the embodiments of the present application is to provide a 3D printing intelligent auxiliary system, method and storage medium that solve the high computational power consumption and large delay of prior-art 3D printing intelligent auxiliary systems built on the von Neumann architecture.
To achieve the above object, an embodiment of the present application provides a 3D printing intelligent auxiliary system, including:
a multiprocessor system-on-chip and a hierarchical extreme learning machine (HELM) memristor hardware network connected to each other, wherein
the multiprocessor system-on-chip is configured to: acquire real-time picture data of the 3D printing mechanical arm; initialize and train, based on the real-time picture data, a HELM memristor hardware network model optimized for the memristor hardware network, obtaining training label data; perform randomized weight distribution based on the training label data; and send the resulting weight distribution data to the HELM memristor hardware network, so that a parameter adjustment decision for the mechanical arm is obtained from the output of the network model, thereby assisting control of the mechanical arm during 3D printing.
Optionally, after deriving the parameter adjustment decision for the mechanical arm, the multiprocessor system-on-chip is further configured to:
acquire a 3D printing path code, plan the motion trajectory of the mechanical arm based on the path code, fuse in the parameter adjustment decision, and parametrically adjust the mechanical arm, thereby controlling it.
Optionally, the HELM memristor hardware network includes:
a multiplexer, a generating circuit, a memristor array and inner-layer extreme learning machine (ELM) memristor nodes, wherein
the multiplexer is configured to perform randomized weight distribution based on the input weight distribution data;
the generating circuit is configured to generate an analog direct current from the input weight distribution data and feed it into the memristor array, so that the memristor array copies the weight distribution data into the inner-layer ELM memristor nodes of each column and multiplies it by the random weights generated in the current-mirror array;
the memristor array is configured to sum the currents in each column according to Kirchhoff's current law and use the summed current as the input of the hidden-layer neurons;
the inner-layer ELM memristor nodes are configured to obtain the hidden-layer neuron outputs, complete the small-network computation, obtain the output weights through a column scanner, and finally produce the output of the HELM memristor hardware network model.
Optionally, the HELM memristor hardware network further includes:
a spiking neuron circuit configured to convert the current output by the memristor array into the output of the hidden-layer neurons.
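The column-wise current summation this network relies on is, in effect, an analog vector-matrix multiplication. The following is an illustrative numerical sketch (not the patent's actual circuit; all device values are hypothetical) of how Ohm's law and Kirchhoff's current law turn a conductance array into a multiply-accumulate:

```python
import numpy as np

# Illustrative sketch, not the patent's circuit: a memristor crossbar with
# conductance matrix G computes I = G^T v in the analog domain. Each device
# passes current G[i, j] * v[i] (Ohm's law), and Kirchhoff's current law
# sums the currents flowing into each column wire.
def crossbar_mvm(conductances: np.ndarray, voltages: np.ndarray) -> np.ndarray:
    """Column currents of a crossbar: I_j = sum_i G[i, j] * v[i]."""
    return conductances.T @ voltages

# Hypothetical device values: 4 input rows, 3 output columns.
G = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0],
              [1.0, 1.0, 1.0]]) * 1e-5        # conductances (siemens)
v = np.array([0.1, 0.2, 0.0, 0.3])            # row voltages (volts)
column_currents = crossbar_mvm(G, v)          # input to hidden-layer neurons
```

In a real crossbar the conductances encode the trained weights, so reading out the column currents performs the hidden-layer weighting in a single analog step.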
Optionally, the output result of the HELM memristor hardware network model is expressed as:

    f(X) = H β,  β = [β_1; β_2; ...; β_M],  β_m = H_m^+ T,
    O_m = H_m β_m,  H_m = g(W_m X + b_m),  m = 1, ..., M,

where f(X) represents the output of the HELM memristor hardware network model; H represents the hidden matrix; M represents the number of independent HELM sub-models; β represents the output weights of all HELM sub-models; β_1, ..., β_M represent the output weights of the M independent sub-models; β_m represents the output weight of each independent sub-model; O_m represents the output of the inner-layer ELM memristor nodes; H_m represents the hidden-layer output matrix of the inner-layer ELM memristor nodes; ^+ denotes the Moore-Penrose generalized inverse of a matrix; g is the sigmoid activation function; T represents the target matrix; X represents the given training dataset, formed by preprocessing the real-time picture data; and W_m and b_m represent the random input weights and initial biases of the inner-layer ELM memristor nodes, respectively.
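The HELM output expression can be sketched in software. Below is a hedged NumPy sketch under standard ELM assumptions: each of M independent sub-models draws random input weights W_m and biases b_m, computes its hidden matrix with a sigmoid activation, and solves for its output weights via the Moore-Penrose pseudoinverse. Simple averaging is used here as the fusion step; that is an assumption, since the patent learns the fusion parameters with a further ELM.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hedged sketch of the HELM equations: M independent ELM sub-models, each
# with random input weights W_m and biases b_m; the output weights beta_m
# are the Moore-Penrose solution H_m^+ T, and the ensemble prediction here
# simply averages the sub-model outputs O_m = H_m beta_m (an assumption).
def train_helm(X, T, M=4, L=16, seed=0):
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(M):
        W = rng.standard_normal((X.shape[1], L))   # random input weights
        b = rng.standard_normal(L)                 # random initial biases
        H = sigmoid(X @ W + b)                     # hidden-layer matrix H_m
        beta = np.linalg.pinv(H) @ T               # Moore-Penrose solution
        models.append((W, b, beta))
    return models

def predict_helm(models, X):
    outs = [sigmoid(X @ W + b) @ beta for W, b, beta in models]
    return np.mean(outs, axis=0)                   # combine M sub-models
```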
Optionally, initializing and training the HELM memristor hardware network model based on the real-time picture data includes:
minimizing the output weight parameters of the HELM memristor hardware network model during training, specifically:
performing convex optimization on the HELM memristor hardware network model based on the formulas:

    minimize  L_f = (1/2) ||β||^2 + (C/2) Σ_{i=1..N} ||ξ_i||^2
    subject to  h(x_i) β = t_i - ξ_i,  i = 1, ..., N,

where argmin_β L_f denotes the value of the output weight β minimizing the objective function L_f; h(x_i) β = t_i - ξ_i represents the constraint function; ||·|| denotes the L2 norm; x_i represents a training sample; N represents the number of training samples; C represents a constant; and ξ_i represents the error between the target truth value and the predicted value;
defining the augmented objective function of the convex optimization problem:

    L(β, ξ, α) = (1/2) ||β||^2 + (C/2) Σ_{i=1..N} ||ξ_i||^2 - Σ_{i=1..N} α_i (h(x_i) β - t_i + ξ_i),

where L represents the Lagrangian function; (1/2)||β||^2 represents the objective term; h(x_i) β - t_i + ξ_i represents the constraint function; β represents the output weight; α represents the gradient constants (Lagrange multipliers); C represents a constant; ||·|| denotes the L2 norm; α_i represents the gradient constant of the i-th training sample; t_i represents a target truth value; and ξ_i represents the error between the target truth value and the predicted value;
based on the objective function, obtaining the closed-form solution of the convex optimization for the final output weight parameters:

    β = (I_L / C + H^T H)^{-1} H^T T   when N ≥ L,
    β = H^T (I_N / C + H H^T)^{-1} T   when N ≤ L,

where β represents the output weight; I_L represents the identity matrix of order L; I_N represents the identity matrix of order N; (·)^{-1} represents the matrix inverse; H represents the hidden-layer output matrix; H^T represents its transpose; C represents a constant; T represents the target truth output; N represents the number of training samples; and L represents the number of ELM memristor hidden-layer nodes.
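The two closed-form expressions for the output weights are the standard regularized-ELM solutions and are algebraically equivalent; choosing by the smaller of N and L simply inverts the smaller matrix. A minimal sketch, assuming the usual ridge-regularized ELM formulation:

```python
import numpy as np

# Sketch of the closed-form output weights under the usual regularized-ELM
# assumptions: minimize ||beta||^2/2 + (C/2) * sum ||xi_i||^2 subject to
# H beta = T - xi. The two algebraically equivalent forms use an L x L or
# an N x N identity matrix; we invert whichever system is smaller.
def elm_output_weights(H: np.ndarray, T: np.ndarray, C: float = 1e3) -> np.ndarray:
    N, L = H.shape
    if N >= L:
        # beta = (I_L / C + H^T H)^{-1} H^T T
        return np.linalg.solve(np.eye(L) / C + H.T @ H, H.T @ T)
    # beta = H^T (I_N / C + H H^T)^{-1} T
    return H.T @ np.linalg.solve(np.eye(N) / C + H @ H.T, T)
```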
Optionally, acquiring the real-time picture data of the 3D printing mechanical arm includes:
acquiring real-time picture data of the original image of the mechanical arm captured by a mechanical arm image acquisition device connected to the multiprocessor system-on-chip;
and acquiring the 3D printing path code and planning the motion trajectory of the mechanical arm based on it includes:
acquiring the sliced 3D printing path code input through the storage medium, converting the 3D printing path into a mechanical arm motion trajectory, solving the trajectory by inverse kinematics, and planning the trajectory with a planning tool.
Optionally, the weight distribution data includes address and weight parameter data;
and the parameter adjustment decision of the mechanical arm covers three parameters: the speed, the acceleration and the path of the mechanical arm.
To achieve the above object, the present application further provides a 3D printing intelligent auxiliary method, including: acquiring real-time picture data of a 3D printing mechanical arm, and performing feature extraction and binarization;
initializing and training, based on the extracted feature data, a HELM memristor hardware network model optimized for the memristor hardware network, obtaining training label data;
and performing randomized weight distribution based on the training label data, and sending the resulting weight distribution data to the HELM memristor hardware network, so that a parameter adjustment decision for the mechanical arm is obtained from the output of the network model, thereby assisting control of the mechanical arm during 3D printing.
To achieve the above object, the present application also provides a computer storage medium having stored thereon a computer program which, when executed by a machine, implements the steps of the method as described above.
The embodiment of the application has the following advantages:
the embodiment of the application provides a 3D prints intelligent auxiliary system, include: the multi-processor system-on-chip is used for acquiring real-time picture data of the 3D printing mechanical arm, initializing and training a hierarchical overrun learning machine memristor hardware network model optimized based on the hierarchical overrun learning machine memristor hardware network based on the real-time picture data to acquire training tag data, carrying out randomized weight distribution based on the training tag data, and sending the acquired weight distribution data to the hierarchical overrun learning machine memristor hardware network to acquire parameter adjustment decisions of the mechanical arm based on an output result of the hierarchical overrun learning machine memristor hardware network model, thereby assisting control of the mechanical arm in 3D printing.
Through the system, a circuit based on the memristor is used as a hardware neural network computing unit, a multiprocessor system-on-chip is used as an auxiliary control system of the 3D printing mechanical arm, and more intelligent system assistance is provided for real-time intelligent decision making in a complex environment. As the memristor is used as a basic passive device, the memristor has nanoscale size and nonvolatile property, not only can realize continuous change of synaptic weight when simulating nerve synapses and realize integration of memory computation, but also can construct a neural network structure with higher integration level, so that the 3D printing intelligent auxiliary system adopting the hierarchical overrunning learning memristor hardware network has low calculation power consumption and small delay.
Drawings
In order to illustrate the embodiments of the present application or the prior-art technical solutions more clearly, the drawings used in their description are briefly introduced below. It will be apparent to those skilled in the art that the drawings described below are merely exemplary and that other drawings may be derived from them without inventive effort.
Fig. 1 is a schematic structural diagram of a multiprocessor system-on-chip of a 3D printing intelligent auxiliary system according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a hierarchical overrun learning machine memristor hardware network model of a 3D printing intelligent auxiliary system according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an ELM memristor small network of a 3D printing intelligent auxiliary system according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a spiking neuron circuit of a 3D printing intelligent auxiliary system according to an embodiment of the present disclosure;
fig. 5 is a flowchart of a 3D printing intelligent assisting method provided in an embodiment of the present application.
Detailed Description
Further advantages and effects of the present application will become apparent to those skilled in the art from the following description of specific embodiments, which covers some, but not all, of the embodiments. All other embodiments obtained by one of ordinary skill in the art without inventive effort based on the present disclosure fall within the scope of protection of the present application.
In addition, the technical features described below in the different embodiments of the present application may be combined with each other as long as they do not conflict.
An embodiment of the present application provides a 3D printing intelligent auxiliary system, including: a connected multiprocessor system-on-chip and hierarchical extreme learning machine (HELM) memristor hardware network.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a multiprocessor system-on-chip of a 3D printing intelligent auxiliary system provided in an embodiment of the present application, and it should be understood that the structural diagram may further include additional blocks not shown and/or blocks shown may be omitted, and the scope of the present application is not limited in this respect.
The multiprocessor system-on-chip (MPSoC) is configured to: acquire real-time picture data of a 3D printing mechanical arm (such as a SCARA horizontal-joint mechanical arm); initialize and train, based on the real-time picture data, a hierarchical extreme learning machine memristor hardware network model (referred to below simply as the HELM model, HELM being the hierarchical extreme learning machine) optimized for the memristor hardware network, obtaining training label data; perform randomized weight distribution based on the training label data; and send the resulting weight distribution data to the HELM memristor hardware network, so that a parameter adjustment decision for the mechanical arm is obtained from the output of the network model, thereby assisting control of the mechanical arm during 3D printing.
Referring to fig. 1, in some embodiments, acquiring the real-time picture data of the 3D printing mechanical arm includes:
acquiring real-time picture data of the original image of the mechanical arm captured by a mechanical arm image acquisition device (such as a camera) connected to the multiprocessor system-on-chip.
Referring to fig. 1, in some embodiments, after deriving the parameter adjustment decision for the mechanical arm, the multiprocessor system-on-chip is further configured to:
acquire a 3D printing path code (such as G-code), plan the motion trajectory of the mechanical arm based on the path code, fuse in the parameter adjustment decision, and parametrically adjust the mechanical arm to control it.
Referring to fig. 1, in some embodiments, acquiring the 3D printing path code and planning the motion trajectory of the mechanical arm based on it includes:
acquiring the sliced 3D printing path code input through the storage medium, converting the 3D printing path into a mechanical arm motion trajectory, solving the trajectory by inverse kinematics, and planning the trajectory with a planning tool.
Referring to fig. 1, the multiprocessor system-on-chip consists mainly of two parts: a PS (ARM) part and a PL (FPGA) part. The PS part is the processing system, i.e. the central processor; the PL part is the programmable logic, i.e. the field-programmable gate array.
Specifically, the PS (ARM) part is programmed to: acquire the real-time image data of the original mechanical arm image captured by the image acquisition device, extract features, and binarize the extracted feature data; initialize and train the HELM model based on the extracted feature data; and send the resulting data (including training label data and/or binarized feature data) to the PL (FPGA) part. The PL (FPGA) part is programmed to: process the received training label data; configure the randomized weights W and initial biases b; perform outer-layer and inner-layer randomized weight distribution; and output the data to the hierarchical extreme learning machine (HELM) memristor hardware network through address lines A<6:0> and B<6:0>. It computes the HELM model output layer and tests model convergence; if the model has not converged, it obtains the target matrix T, supplies the Lagrangian function of the HELM model to a convex optimizer (IP core), obtains the optimal bias from the closed-form solution, and feeds the updated bias back to the input until the model converges. After training, an FPGA-based decision circuit produces the mechanical arm parameter adjustment decision (specifically covering the three parameters of speed, acceleration and path). The HELM output-layer computation is mainly a regression-clustering algorithm such as K-means; predictions of speed, acceleration and path are finally obtained through the output-weight computation.
Specifically, the PS (ARM) part is also programmed to: acquire the sliced 3D printing path G-code input through a storage medium such as an SD card; convert the 3D printing path into a mechanical arm motion trajectory; solve the trajectory by inverse kinematics using tools such as ROS MoveIt IKFast; plan the trajectory using tools such as ROS OMPL; fuse in the mechanical arm parameter adjustment decision and parametrically adjust the arm; and control the mechanical arm by means of a ROS ActionServer or similar.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a memristor hardware network of a 3D printing intelligent auxiliary system provided in an embodiment of the present application, it should be understood that the structural diagram may further include additional blocks not shown and/or blocks shown may be omitted, and the scope of the present application is not limited in this respect.
In some embodiments, as shown in fig. 2, the hierarchical extreme learning machine (HELM) memristor hardware network includes: a multiplexer, a generating circuit, a memristor array and inner-layer extreme learning machine memristor nodes (referred to below and in the drawings simply as inner-layer ELM memristor nodes, ELM standing for Extreme Learning Machine).
The multiplexer may be a 128-way multiplexer that performs randomized weight distribution of the weight parameter data Data_in1 according to the input address A<6:0>. The random weights of the input network are first computed and simulated; at the same time, the address A<6:0> and the weight parameter data Data_in1 are demultiplexed by address and sent to a specific channel in the system over a serial peripheral interface. The address A<6:0> and weight parameter data Data_in1 are stored in shift registers used to configure the input current magnitude of the converter in the input generating circuit (IGC, Input Generating Circuit).
The IGC generates an analog direct current from the input address and weight parameter data and feeds it into the memristor array; the outer-layer extreme learning machine memristor nodes (referred to below and in the drawings as outer-layer ELM memristor nodes), formed by a 1T1R memristor array, copy the address and weight parameter data into the small ELM memristor network of each column (which can be understood as an outer neuron hidden layer) and multiply it by the random weights generated in the 1T1R memristor array. The inner-layer ELM memristor nodes in the small ELM memristor network process and output data in the same manner as the outer-layer nodes (only the scanning differs: the outer layer is scanned by columns and the inner layer by rows).
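The address-based demultiplexing described here can be sketched in a few lines. This is an illustrative software model only (the real routing happens in hardware shift registers over a serial peripheral interface): a 7-bit address A<6:0> selects one of 128 channels for each weight word.

```python
# Illustrative model of the 128-way demultiplexing described above (not the
# hardware): each (address, weight) word is routed to the channel named by
# its 7-bit address A<6:0>, mimicking how the multiplexer feeds the shift
# registers of the input generating circuit.
def demux_weights(words):
    """words: iterable of (address, weight) pairs with 0 <= address < 128."""
    channels = [[] for _ in range(128)]
    for addr, weight in words:
        if not 0 <= addr < 128:
            raise ValueError("address outside the A<6:0> range")
        channels[addr].append(weight)
    return channels
```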
In fig. 1 to 3, the RN_in and CLK_in signals configure the reset and clock signals of the IGC for resetting and refresh control of the address and weight parameter data. The CLK_out clock signal output by the clock and control modules of the PL (FPGA) part on the multiprocessor system-on-chip (MPSoC) drives each small ELM memristor network by column scan and stores the output data in the output registers C<13:0>, which are connected to the HELM output-layer computation module of the PL (FPGA) part on the MPSoC. The same letters A to I labeled next to the arrowed data-transmission lines in fig. 2 and 3 denote the same lines in both figures.
Specifically, as shown in fig. 2, the currents in each column are summed according to Kirchhoff's current law and the result is used as the input to the hidden-layer neurons. The hidden-layer neurons generate spike oscillations whose frequency depends on the input current; the spikes are counted by the asynchronous counter of the generating matrix. The hidden-layer neuron outputs are passed to the small ELM memristor networks, which complete the small-network computation; the output weights are then obtained through a column scanner, and the HELM computation result is finally obtained and output.
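The current-dependent spike counting described above is a form of rate coding. The following is an illustrative software sketch (not the analog circuit; all time constants are hypothetical): a leaky integrate-and-fire neuron driven by a constant input current fires at a current-dependent rate, and counting its spikes over a fixed window yields a digital value usable as the hidden-layer activation.

```python
# Illustrative rate-coding sketch (hypothetical constants, not the circuit):
# a leaky integrate-and-fire neuron driven by a constant input current fires
# at a current-dependent rate; the spike count over a fixed window plays the
# role of the asynchronous counter's output.
def count_spikes(i_in: float, steps: int = 1000, dt: float = 1e-3,
                 tau: float = 0.02, v_th: float = 1.0) -> int:
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt * (-v / tau + i_in)   # leaky integration of the input current
        if v >= v_th:                 # threshold crossing emits one spike
            spikes += 1
            v = 0.0                   # reset the membrane potential
    return spikes
```

A larger input current reaches threshold sooner, so the count increases monotonically with the current, which is what lets the counter value stand in for a neuron activation.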
Referring to fig. 3, fig. 3 is a schematic structural diagram of an ELM memristor small network of a 3D printing intelligent auxiliary system provided in an embodiment of the present application, it should be understood that the structural diagram may further include additional blocks not shown and/or blocks shown may be omitted, and the scope of the present application is not limited in this respect.
Specifically, fig. 3 shows the structure of the small ELM memristor network in the mechanical arm 3D printing intelligent auxiliary system of the present application: the data A of the previous hidden layer is connected to the multiplexer data input through Data_in2, the hierarchical node random weights C are computed, and the input address B<6:0> is connected to the multiplexer address input.
The input data is then used to generate an analog direct current through the IGC, and the inner-layer ELM memristor nodes formed by the 1T1R memristor array copy it into the hidden layer of each row of neurons (which can be understood as an inner neuron hidden layer). The output mode is the same as that of the outer layer.
Referring to fig. 2 or 3, in some embodiments, the outer-layer or inner-layer ELM memristor nodes further include a spiking neuron circuit, which converts the input current into the neuron output.
The spiking neuron circuits used in FIGS. 2 and 3 are enabled by the neu_en signal output by the clock and control module in the PL (FPGA) portion of the multiprocessor system-on-chip (MPSoC).
The figures also show L spiking neuron counters, which may be reset by the RN_cnt signal output by the clock and control module in the PL (FPGA) portion of the multiprocessor system-on-chip (MPSoC).
The HELM framework is capable of automatically learning and combining different single extreme learning machine (ELM) memristor small networks (i.e., the ELM memristor small networks in fig. 2). The present application adopts the ELM algorithm to learn the fusion parameters of a plurality of ELM models, so that the final result is calculated by combining the outputs of the individual ELM models.
In some embodiments, the training process of the hierarchical extreme learning machine memristor hardware network model is divided into two steps:
inputting a given training data set $X$ and a target matrix $T$, wherein the training data set $X$ in the embodiments of the present application is obtained by preprocessing the real-time picture data acquired from the 3D printing mechanical arm; and determining the number $M$ of inner-layer ELM memristor nodes and the number of outer-layer ELM memristor nodes, the number $L$ of hidden-layer nodes of the inner-layer and outer-layer ELM memristors being the same.
Step one: training a single ELM memristor model, that is, training the inner-layer ELM memristor nodes of the hierarchical extreme learning machine memristor hardware network model. In the embodiments of the present application, the number of inner-layer ELM memristor nodes is taken to be equal to the number of outer-layer ELM memristor nodes, so that a hierarchical extreme learning machine memristor hardware network model with $M$ independent nodes is trained.
The inner layer ELM memristor node calculation steps in fig. 3 mainly include:
1. generating the random input weights $W_m$ and the initial bias $b_m$ of the inner-layer ELM memristor node, connected through the address A<6:0> and the random-weight line C to the data input of the multiplexer;
2. calculating the hidden-layer output matrix of the inner-layer ELM memristor node: $H_m = g(XW_m + b_m)$;

3. calculating the output weight of the inner-layer ELM memristor node: $\beta_m = H_m^{\dagger} T$;

4. calculating the output of the inner-layer ELM memristor node: $O_m = H_m \beta_m$.
Wherein $g(\cdot)$, realized by the spiking neuron circuit, is a sigmoid activation function; it can be any nonlinear piecewise-continuous function satisfying the ELM universal approximation capability theorem, and can be understood as the actual hardware activation function. $H_m^{\dagger}$ represents the Moore-Penrose generalized inverse of the matrix $H_m$.
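The four calculation steps above follow the standard extreme learning machine recipe and can be sketched in NumPy; the sizes, the sigmoid choice for $g$, and the random data are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical sizes: N_s samples, d input features, L hidden nodes, c outputs.
N_s, d, L, c = 200, 16, 64, 3
X = rng.normal(size=(N_s, d))       # preprocessed picture features
T = rng.normal(size=(N_s, c))       # target matrix

# Step 1: random input weights and initial bias (never trained in ELM).
W = rng.normal(size=(d, L))
b = rng.normal(size=(1, L))

# Step 2: hidden-layer output matrix H = g(XW + b).
H = sigmoid(X @ W + b)

# Step 3: output weights via the Moore-Penrose pseudo-inverse.
beta = np.linalg.pinv(H) @ T

# Step 4: network output O = H beta.
O = H @ beta
```

Only `beta` is fitted; the random `W` and `b` stay fixed, which is what makes ELM training a single linear solve.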
Step two: based on the learning in step one, the output values of the inner-layer ELM memristor nodes are calculated, and the fusion parameters of the plurality of ELM models are then learned. In the hierarchical extreme learning machine memristor hardware network model, all inner-layer ELM memristor node outputs are cascaded as the hidden matrix of the whole model: $H = [O_1, O_2, \ldots, O_M]$.
Wherein $O_m$ is the output of the $m$-th inner-layer ELM memristor node. If all of the inner-layer ELM memristor nodes have the same size, $H$ is an $N_s \times ML$ matrix, with $N_s$ the number of samples. The fusion parameter to be calculated is the output weight $\beta$ of the outer-layer ELM memristor node, an $ML \times L$ matrix representing the combined weights with which the $M$ different inner-layer ELM memristor nodes contribute to the final result. The outer-layer ELM memristor node is fully defined by the inner-layer outputs and the outer-layer output weight $\beta$.
the output weight of each independent inner-layer node of the hierarchical extreme learning machine memristor hardware network model is expressed as $\beta_m$, an $L \times L$ square matrix representing the connection between the $m$-th inner-layer ELM memristor node and the final output, and all output weights are expressed as $\beta = [\beta_1^{T}, \beta_2^{T}, \ldots, \beta_M^{T}]^{T}$. The output result of the hierarchical extreme learning machine memristor hardware network model is expressed as: $Y = H\beta = \sum_{m=1}^{M} O_m \beta_m$.
It is understood that the output of the outer-layer ELM memristor node is a linear combination of the outputs of the different inner-layer ELM memristor nodes, where $Y$ represents the output of the hierarchical extreme learning machine memristor hardware network model, $H$ represents the hidden matrix, $O_m$ represents the output of the $M$ independent inner-layer ELM memristor nodes from training step one, $\beta$ represents the output weights of the hierarchical model over all ELM models, and $\beta_m$ represents the output weight of each individual inner-layer ELM memristor node.
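The two-step training, with step one producing M independent inner ELM nodes and step two cascading their outputs into the hidden matrix H and fusing them, can be sketched as follows (all sizes and random data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical sizes: M inner ELM nodes, each with L hidden nodes.
N_s, d, L, M = 200, 16, 32, 4
X = rng.normal(size=(N_s, d))
T = rng.normal(size=(N_s, L))   # targets sized so each beta_m is L x L

# Step one: train M independent inner ELM nodes.
outputs = []
for m in range(M):
    W_m = rng.normal(size=(d, L))
    b_m = rng.normal(size=(1, L))
    H_m = sigmoid(X @ W_m + b_m)
    beta_m = np.linalg.pinv(H_m) @ T    # L x L square output weight
    outputs.append(H_m @ beta_m)        # O_m, shape (N_s, L)

# Step two: cascade the inner outputs into the hidden matrix H = [O_1 ... O_M].
H = np.hstack(outputs)                  # shape (N_s, M*L)

# Learn the fusion weight beta of the outer node and combine: Y = H beta.
beta = np.linalg.pinv(H) @ T            # shape (M*L, L)
Y = H @ beta
```

The final `Y = H @ beta` is exactly the linear combination of inner-node outputs described above.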
It will be appreciated by those skilled in the art that HELM is a hierarchical neural network that includes a small network in each hidden node.
In some embodiments, for the parameter-solving problem in the HELM model training process, in addition to minimizing the network prediction error, the output weight parameter $\beta$ is also required to be minimized, thereby yielding an optimization problem with a regularization term.
The application provides a method for performing convex optimization on the HELM model:

$$\beta^{*} = \arg\min_{\beta}\; \frac{1}{2}\|\beta\|^{2} + \frac{C}{2}\sum_{i=1}^{N_s}\|e_i\|^{2}, \quad \text{s.t. } h(x_i)\beta = t_i^{T} - e_i^{T}, \; i = 1, \ldots, N_s,$$

wherein $\beta^{*}$ represents the value of the output weight $\beta$ that minimizes the objective function, $\frac{1}{2}\|\beta\|^{2} + \frac{C}{2}\sum_{i=1}^{N_s}\|e_i\|^{2}$ is the objective function, $h(x_i)\beta = t_i^{T} - e_i^{T}$ is the constraint function, $\|\cdot\|$ represents taking the L2 norm, $x_i$ is a training sample, $N_s$ is the number of training samples, $C$ is a constant, and $e_i$ is the error between the target true value and the predicted value.
Further, an augmented objective function for the convex optimization problem, i.e., the Lagrangian function, is defined:

$$L_{\mathrm{ELM}} = \frac{1}{2}\|\beta\|^{2} + \frac{C}{2}\sum_{i=1}^{N_s}\|e_i\|^{2} - \sum_{i=1}^{N_s}\alpha_i\left(h(x_i)\beta - t_i^{T} + e_i^{T}\right),$$

wherein $L_{\mathrm{ELM}}$ represents the Lagrangian function, $\frac{1}{2}\|\beta\|^{2} + \frac{C}{2}\sum_{i=1}^{N_s}\|e_i\|^{2}$ is the objective function, $h(x_i)\beta - t_i^{T} + e_i^{T}$ is the constraint function, $\beta$ is the output weight, $\alpha_i$ is the gradient constant (Lagrange multiplier) of the $i$-th training sample, $x_i$ is a training sample, $N_s$ is the number of training samples, $C$ is a constant, $\|\cdot\|$ represents taking the L2 norm, $t_i$ is the target true value, and $e_i$ is the error between the target true value and the predicted value.
Further, from the above Lagrangian function, the optimal solution needs to satisfy the following form according to the KKT (Karush-Kuhn-Tucker) conditions:

setting the gradient with respect to the weight $\beta$ equal to zero gives the following equation:

$$\frac{\partial L_{\mathrm{ELM}}}{\partial \beta} = 0 \;\Rightarrow\; \beta = \sum_{i=1}^{N_s}\alpha_i h(x_i)^{T} = H^{T}\alpha, \quad (1)$$

setting the gradient with respect to the error $e_i$ between the target true value and the predicted value equal to zero gives the following equation:

$$\frac{\partial L_{\mathrm{ELM}}}{\partial e_i} = 0 \;\Rightarrow\; \alpha_i = C e_i, \quad (2)$$

setting the gradient with respect to the gradient constant $\alpha_i$ equal to zero gives the following equation:

$$\frac{\partial L_{\mathrm{ELM}}}{\partial \alpha_i} = 0 \;\Rightarrow\; h(x_i)\beta - t_i^{T} + e_i^{T} = 0, \quad (3)$$

wherein $L_{\mathrm{ELM}}$ represents the Lagrangian function, $\beta$ represents the output weight, $\alpha_i$ is the gradient constant of the $i$-th training sample, $\alpha$ is the vector of gradient constants, $e_i$ is the error between the target true value and the predicted value, $C$ is a constant, $H^{T}$ is the transpose of the hidden-layer output matrix, $N_s$ is the number of training samples, $h(x_i)$ represents the hidden-layer row vector, and $t_i$ is the target true value.
Further, if the training data set is small ($N_s \ll L$), where "small training data set" means that the number of training samples is (far) less than the number of ELM hidden-layer nodes, bringing formula (1) and formula (2) into formula (3) and combining formula (1) with formula (2) gives:

$$\beta = H^{T}\left(\frac{I_{N_s}}{C} + HH^{T}\right)^{-1} T.$$
Further, if the training data set is large ($N_s \gg L$), where "large training data set" refers to a number of training samples far greater than the number of ELM hidden-layer nodes, formulas (1) and (2) together give:

$$\beta = C H^{T} E, \quad (4)$$

and further, from formula (3):

$$E = T - H\beta, \quad (5)$$

wherein $E = [e_1, \ldots, e_{N_s}]^{T}$ is the error matrix; the simultaneous solution of formula (4) and formula (5) gives:

$$\beta = \left(\frac{I_L}{C} + H^{T}H\right)^{-1} H^{T} T.$$
Further, the closed-form solution of the final output weight of the convex optimization is obtained:

$$\beta = H^{T}\left(\frac{I_{N_s}}{C} + HH^{T}\right)^{-1}T \quad (N_s \ll L), \qquad \beta = \left(\frac{I_L}{C} + H^{T}H\right)^{-1}H^{T}T \quad (N_s \gg L),$$

wherein $\beta$ represents the output weight, $I_{N_s}$ represents the $N_s$-order identity matrix, $I_L$ represents the $L$-order identity matrix, $(\cdot)^{-1}$ represents the matrix inverse, $(\cdot)^{\dagger}$ represents the Moore-Penrose generalized inverse of a matrix, $H$ is the hidden-layer output matrix, $H^{T}$ is the transpose of the hidden-layer output matrix, $C$ is a constant, $T$ is the target true-value output, $N_s$ is the number of training samples, and $L$ is the number of hidden-layer nodes of the inner-layer and outer-layer memristors.
From the closed-form solution $\beta$ of the output weights given by the convex optimization and the random input weights $W$, the optimal bias of the hierarchical extreme learning machine memristor hardware network model is reversely calculated as $b^{*} = g^{-1}(T\beta^{\dagger}) - XW$, wherein $g^{-1}$ represents the inverse function of the sigmoid activation function, $\beta^{\dagger}$ represents the Moore-Penrose generalized inverse of the closed-form output weight solution $\beta$, $X$ represents the training data set, and $T$ represents the target matrix. The obtained optimal bias $b^{*}$ is stored and can be used to update the initial bias $b$, accelerating the convergence of model training and, together with the random input weights $W$, completing the new training.
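The reverse calculation of the bias can be sketched as follows, under the assumption (labeled as such) that it takes the form $b^{*} = g^{-1}(T\beta^{\dagger}) - XW$ with $g^{-1}$ the logit function; all sizes and the synthetic data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_inv(y):
    # Logit: the inverse of the sigmoid; inputs are clipped into (0, 1).
    y = np.clip(y, 1e-9, 1.0 - 1e-9)
    return np.log(y / (1.0 - y))

N_s, d, L = 30, 6, 6    # square beta so its pseudo-inverse is well behaved
X = rng.normal(size=(N_s, d))
W = rng.normal(size=(d, L))
b_true = rng.normal(size=(1, L))

H = sigmoid(X @ W + b_true)
T = H @ rng.normal(size=(L, L))        # synthetic targets consistent with H
beta = np.linalg.pinv(H) @ T

# Reverse calculation: H ≈ T beta^+, hence b* = g^-1(T beta^+) - X W.
H_rec = T @ np.linalg.pinv(beta)
b_star = sigmoid_inv(H_rec) - X @ W    # one bias estimate per sample and node
```

Because the synthetic targets are consistent with `H`, every row of `b_star` recovers the bias that generated the data, which is the sense in which the stored $b^{*}$ can warm-start a new training round.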
Those skilled in the art will appreciate that 1T1R refers to a structure in which one transistor is connected in series with one memristor or resistance-change cell. The transistor is used to gate the memristor cell, avoiding the crosstalk phenomenon caused by leakage currents.
Referring to fig. 4, fig. 4 is a schematic diagram of a spiking neuron circuit of a 3D printing intelligent assistance system provided in an embodiment of the present application, it being understood that the schematic diagram may also include additional blocks not shown and/or blocks shown may be omitted, the scope of the present application not being limited in this respect.
FIG. 4 shows the subthreshold transistor of the spiking neuron circuit, the feedback capacitor, the input voltage signal of the spiking neuron circuit, the storage voltage of the input parasitic storage capacitance, the input parasitic storage capacitance itself, the reference voltage of the operational amplifier, and the output voltage signal of the spiking neuron circuit.
Specifically, the spiking neuron circuit functions to convert an input current into the output of the neuron. After the input circuit is built, to simulate neuron behavior the input current is taken as the input of each neuron, and the output of each neuron can be regarded as following a spike-rate formula in which the spike count is gated by the Heaviside step of the input current above the leakage current,

wherein $T_s$ is the sampling period of the output signal, $n_{\max}$ represents the maximum number of spikes in a cycle, $\Theta(\cdot)$ is the Heaviside step function, $I_{\mathrm{leak}}$ denotes the leakage current, and $I_{\mathrm{in}}$ represents the input current. If the input current is less than the leakage current, the leakage current prevents the neuron from spiking.
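A rate-neuron model matching this description, in which the spike count is gated by the Heaviside step of the current above leakage and saturates at the maximum count per period, can be sketched as follows; the `gain`, `i_leak` and `n_max` values are illustrative assumptions, not the application's exact spike-rate formula:

```python
import numpy as np

def spike_count(i_in, i_leak=1e-6, n_max=255, gain=1e8):
    """Spikes per sampling period for input currents i_in (amperes).

    Assumed model: the count grows with the current above the leakage
    threshold, is gated by the Heaviside step, and is clipped at n_max.
    """
    excess = np.asarray(i_in, dtype=float) - i_leak
    heaviside = (excess > 0).astype(float)      # no spikes below leakage
    counts = np.floor(gain * excess * heaviside)
    return np.clip(counts, 0, n_max).astype(int)

# Below leakage the neuron is silent; far above it the counter saturates.
counts = spike_count([5e-7, 1.5e-6, 1e-5])
```

The integer counts are exactly what the asynchronous counters described earlier would accumulate in one sampling period.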
In some embodiments, the spiking neuron output may be used for counter clocking; thus, within a certain sampling time $T_s$, if a neuron generates a given number of spikes, the counter output can be treated as a quantization of that spike count. Then, in the second stage, the calculation is continued to obtain the final output.
The field of additive manufacturing is expanding rapidly and new materials, techniques and solutions are emerging. Artificial intelligence assisted 3D printing is playing its unique role from determining the best material for a job to improving the build quality of a product by eliminating human error.
When memristors are used as electronic synaptic devices, the migration of ions in a memristor closely resembles the diffusion of neurotransmitters in a biological synapse; using memristors to simulate synapses in a neural network is therefore a clear trend, and memristors are widely used in neural networks to store synaptic weights. Numerous experiments have demonstrated that simulating synapses in a neural network with memristors holds great potential advantages.

As a basic passive device, the memristor has nanoscale size and non-volatility; it can realize continuous change of synaptic weight when simulating nerve synapses and integrates memory with computation, enabling neural network structures with higher integration. This gives the artificial neural network learning and memory capabilities while diversifying its functions.

Therefore, the memristor-network-optimized mechanical arm 3D printing intelligent auxiliary system provided by the present application adopts a memristor RRAM circuit as the hardware neural-network computing unit and an MPSoC multiprocessor system-on-chip as the control system of the 3D printing mechanical arm. It provides more intelligent assistance for environments requiring high stability and real-time intelligent decisions in complex environments, such as biological 3D printing requiring in-situ printing and cultural-relic-restoration 3D printing. Meanwhile, adopting the memristor hardware network as the 3D printing intelligent auxiliary system has the advantages of low computing power consumption and small delay.
The embodiment of the application also provides a 3D printing intelligent auxiliary method, which comprises the following steps:
acquiring real-time picture data of a 3D printing mechanical arm, and carrying out feature extraction and binarization;
initializing and training a hierarchical extreme learning machine memristor hardware network model based on hierarchical extreme learning machine memristor hardware network optimization based on the extracted feature data to acquire training tag data;

and carrying out randomized weight distribution based on the training tag data, and sending the obtained weight distribution data to the hierarchical extreme learning machine memristor hardware network, so as to obtain a parameter adjustment decision of the mechanical arm based on the output result of the hierarchical extreme learning machine memristor hardware network model, thereby assisting the control of the mechanical arm in 3D printing.
In particular, referring to fig. 5, fig. 5 is a flowchart of a 3D printing intelligent assistance method provided in an embodiment of the present application, it should be understood that the flowchart may also include additional blocks not shown and/or blocks shown may be omitted, the scope of the present application not being limited in this respect.
As shown in fig. 5, the method first executes the left-side flow: first, acquiring real-time video of the 3D printing process through a camera beside the mechanical arm; second, acquiring real-time image data and performing feature extraction and binarization; third, training the HELM model optimized based on the memristor hardware network; fourth, calibrating the printing position and adjusting the mechanical arm speed, acceleration and path-point time parameterization through the FPGA-based decision circuit; and fifth, adjusting the nozzle flow rate through the 3D printing nozzle controller.
The right-side flow is then executed: first, inputting the print-file G-code path; second, converting the 3D printing path into a mechanical-arm movement track, such as MOVJ joint movement, MOVL linear movement and MOVC arc movement; third, planning the mechanical-arm movement track based on the Ruckig time-optimal trajectory generation algorithm in the ROS open-source motion planning library OMPL, acquiring the speed, acceleration and path parameters from the left-side flow; and fourth, converting the planned joint angles into control signals for the mechanical-arm motors through the action server, thereby controlling the physical mechanical arm.
The specific implementation method refers to the foregoing system embodiment, and is not repeated here.
The present application may be a method, apparatus, system, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for performing the various aspects of the present application.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disks (DVD), memory sticks, floppy disks, mechanically encoded devices such as punch cards or raised structures in grooves having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber-optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for performing the operations of the present application may be assembly instructions, instruction set architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present application are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), with state information for computer readable program instructions, which may execute the computer readable program instructions.
Various aspects of the present application are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Note that all features disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is only one example of a generic series of equivalent or similar features. Where the terms "further", "preferably", "still further" or "more preferably" are used, the description that follows builds on the foregoing embodiment, and the content introduced after the term combines with the foregoing embodiment to form a complete further embodiment. Several such "further", "preferably", "still further" or "more preferably" arrangements following the same embodiment may be combined arbitrarily.
While the application has been described in detail with respect to the general description and specific embodiments thereof, it will be apparent to those skilled in the art that certain modifications and improvements may be made thereto based upon the application. Accordingly, such modifications or improvements may be made without departing from the spirit of the application and are intended to be within the scope of the invention as claimed.

Claims (10)

1. A 3D printing intelligent assistance system, comprising: a multiprocessor system-on-chip and a hierarchical extreme learning machine memristor hardware network connected to each other, wherein,

the multiprocessor system-on-chip is used for acquiring real-time picture data of the 3D printing mechanical arm, initializing and training a hierarchical extreme learning machine memristor hardware network model based on hierarchical extreme learning machine memristor hardware network optimization based on the real-time picture data to acquire training tag data, carrying out randomized weight distribution based on the training tag data, and sending the obtained weight distribution data to the hierarchical extreme learning machine memristor hardware network to obtain parameter adjustment decisions of the mechanical arm based on the output result of the hierarchical extreme learning machine memristor hardware network model, thereby assisting control of the mechanical arm in 3D printing.
2. The 3D printing intelligent assistance system of claim 1, wherein after deriving the parameter adjustment decisions for the robotic arm, the multiprocessor system-on-a-chip is further configured to:
and acquiring a 3D printing path code, planning the motion trail of the mechanical arm based on the 3D printing path code, fusing the parameter adjustment decision of the mechanical arm, and performing parameterization adjustment on the mechanical arm so as to realize control on the mechanical arm.
3. The 3D printing intelligent assistance system of claim 1, wherein the hierarchical extreme learning machine memristor hardware network model comprises: a multiplexer, a generating circuit, a memristor array and inner-layer extreme learning machine memristor nodes, wherein,

the multiplexer is used for carrying out randomized weight distribution based on the input weight distribution data;

the generating circuit is used for generating an analog direct current according to the input weight distribution data and inputting the analog direct current into the memristor array, so that the memristor array copies the weight distribution data into the inner-layer extreme learning machine memristor nodes of each column and multiplies them with the random weights generated in the current mirror array;

the memristor array is used for summing the currents in the same column according to Kirchhoff's law and taking the current summation result as the input of the hidden-layer neurons;

the inner-layer extreme learning machine memristor nodes are used for obtaining the output result of the hidden-layer neurons, completing the small-network calculation, obtaining the output weights through a column scanner, and finally obtaining the output result of the hierarchical extreme learning machine memristor hardware network model.

4. The 3D printing intelligent assistance system of claim 3, wherein the hierarchical extreme learning machine memristor hardware network further comprises:

a spiking neuron circuit for converting the current output by the memristor array into the output result of the hidden-layer neurons.
5. The 3D printing intelligent assistance system of claim 3, wherein the output result of the hierarchical extreme learning machine memristor hardware network model is expressed as:

$$Y = H\beta = \sum_{m=1}^{M} O_m \beta_m, \quad H = [O_1, O_2, \ldots, O_M], \quad O_m = H_m \beta_m, \quad H_m = g(XW_m + b_m), \quad \beta_m = H_m^{\dagger} T,$$

wherein $Y$ represents the output of the hierarchical extreme learning machine memristor hardware network model,

$H$ represents the hidden matrix,

$O_m$ represents the output of the $m$-th of the $M$ independent inner-layer extreme learning machine memristor nodes,

$\beta$ represents the output weights of the hierarchical extreme learning machine memristor hardware network model over all extreme learning machine models,

$\beta_m$ represents the output weight of each individual inner-layer extreme learning machine memristor node,

$H_m$ represents the hidden-layer output matrix of the inner-layer extreme learning machine memristor node,

$H_m^{\dagger}$ represents the Moore-Penrose generalized inverse of the matrix $H_m$,

$g(\cdot)$ is the sigmoid activation function,

$T$ represents the target matrix,

$X$ represents the given training data set, composed of preprocessed real-time picture data,

and $W_m$ and $b_m$ respectively represent the random input weights and the initial bias of the inner-layer extreme learning machine memristor node.
6. The 3D printing intelligent assistance system of claim 5, wherein initializing and training the hierarchical extreme learning machine memristor hardware network model based on hierarchical extreme learning machine memristor hardware network optimization based on real-time picture data comprises:

minimizing the output weight parameters of the hierarchical extreme learning machine memristor hardware network model in the process of training the memristor hardware network model, specifically comprising the following steps:

performing convex optimization on the hierarchical extreme learning machine memristor hardware network model based on the formulas:

$$\beta^{*} = \arg\min_{\beta}\; \frac{1}{2}\|\beta\|^{2} + \frac{C}{2}\sum_{i=1}^{N_s}\|e_i\|^{2} \quad \text{and} \quad h(x_i)\beta = t_i^{T} - e_i^{T}, \; i = 1, \ldots, N_s,$$

wherein $\beta^{*}$ represents the value of the output weight $\beta$ that minimizes the objective function, $\frac{1}{2}\|\beta\|^{2} + \frac{C}{2}\sum_{i=1}^{N_s}\|e_i\|^{2}$ represents the objective function, $h(x_i)\beta = t_i^{T} - e_i^{T}$ represents the constraint function, $\|\cdot\|$ represents taking the L2 norm, $x_i$ represents a training sample, $N_s$ represents the number of training samples, $C$ represents a constant, and $e_i$ represents the error between the target true value and the predicted value;

defining an augmented objective function for the convex optimization problem:

$$L_{\mathrm{ELM}} = \frac{1}{2}\|\beta\|^{2} + \frac{C}{2}\sum_{i=1}^{N_s}\|e_i\|^{2} - \sum_{i=1}^{N_s}\alpha_i\left(h(x_i)\beta - t_i^{T} + e_i^{T}\right),$$

wherein $L_{\mathrm{ELM}}$ represents the Lagrangian function, $\beta$ represents the output weight, $\alpha_i$ represents the gradient constant of the $i$-th training sample, $t_i$ represents the target true value, and the remaining symbols are as defined above;

based on the objective function, obtaining the closed-form solution of the final output weight parameter of the convex optimization:

$$\beta = H^{T}\left(\frac{I_{N_s}}{C} + HH^{T}\right)^{-1}T \quad \text{or} \quad \beta = \left(\frac{I_L}{C} + H^{T}H\right)^{-1}H^{T}T,$$

wherein $\beta$ represents the output weight, $I_{N_s}$ represents the $N_s$-order identity matrix, $I_L$ represents the $L$-order identity matrix, $(\cdot)^{-1}$ represents the matrix inverse, $H$ represents the hidden-layer output matrix, $H^{T}$ represents the transpose of the hidden-layer output matrix, $C$ represents a constant, $T$ represents the target true-value output, $N_s$ represents the number of training samples, and $L$ represents the number of extreme learning machine memristor hidden-layer nodes.
7. The intelligent 3D printing support system according to claim 2, wherein,
acquiring real-time image data of the 3D printing mechanical arm, comprising:
acquiring real-time image data of the mechanical arm captured by an image acquisition device connected to a multiprocessor system-on-chip;
acquiring a 3D printing path code, and planning a motion trajectory of the mechanical arm based on the 3D printing path code, comprising:
acquiring the sliced 3D printing path code input through a storage medium, converting the 3D printing path into a motion trajectory of the mechanical arm, solving the trajectory by inverse kinematics, and planning it with a planning tool.
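The path-to-trajectory conversion can be sketched as follows: parse Cartesian targets from the sliced path code and solve the inverse kinematics for each. The planar two-link arm, its link lengths, and the sample G-code lines below are illustrative assumptions, not the patent's kinematic model:

```python
import math
import re

def ik_2link(x, y, l1=0.3, l2=0.25):
    """Analytic inverse kinematics for a planar 2-link arm (elbow-up branch)."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))           # clamp numerical noise
    q2 = math.acos(c2)                     # elbow joint angle
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2

def gcode_to_joint_path(lines):
    """Extract X/Y targets from G0/G1 moves and map each to joint angles."""
    path = []
    for line in lines:
        m = re.match(r"G[01]\s+X([-\d.]+)\s+Y([-\d.]+)", line)
        if m:
            x, y = float(m.group(1)), float(m.group(2))
            path.append(ik_2link(x, y))
    return path

joints = gcode_to_joint_path(["G1 X0.30 Y0.20", "G1 X0.25 Y0.25"])
```

Applying forward kinematics to each returned joint pair should reproduce the commanded Cartesian targets, which is a convenient self-check for the conversion step.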
8. The intelligent 3D printing support system according to claim 1, wherein,
the weight allocation data comprises address and weight parameter data;
the parameter adjustment decisions of the mechanical arm comprise adjustment decisions for three parameters of the mechanical arm: speed, acceleration, and path.
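The address-plus-weight form of the allocation data can be illustrated with a crossbar-style mapping. In the sketch below, the conductance range, level count, and differential-pair scheme are assumptions for illustration only: signed weights are quantized onto non-negative memristor conductances and each device gets a linear address.

```python
import numpy as np

def quantize_to_conductance(weights, g_min=1e-6, g_max=1e-4, levels=256):
    """Map signed weights onto discrete, non-negative conductance levels.

    Positive and negative parts go to separate devices (a differential
    pair), since a single memristor conductance cannot be negative.
    """
    scale = np.abs(weights).max()
    scale = scale if scale > 0 else 1.0
    span = g_max - g_min
    g_pos = np.clip(weights, 0.0, None) / scale * span + g_min
    g_neg = np.clip(-weights, 0.0, None) / scale * span + g_min
    step = span / (levels - 1)
    snap = lambda g: np.round((g - g_min) / step) * step + g_min
    return snap(g_pos), snap(g_neg)

def address_map(rows, cols):
    """Assign each (row, col) crossbar coordinate a linear device address."""
    return {(r, c): r * cols + c for r in range(rows) for c in range(cols)}

w = np.array([[0.5, -1.0], [0.0, 0.25]])   # example signed weight block
g_pos, g_neg = quantize_to_conductance(w)
addresses = address_map(*w.shape)          # e.g. device at row 1, col 0 -> 2
```

The effective signed weight read back from the pair is proportional to `g_pos - g_neg`, a common workaround for the non-negativity of a single device.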
9. A 3D printing intelligent assistance method, characterized by comprising:
acquiring real-time image data of a 3D printing mechanical arm, and performing feature extraction and binarization;
based on the extracted feature data, initializing and training a hierarchical extreme learning machine memristor hardware network model, optimized for the hierarchical memristor hardware network, to acquire training label data;
performing randomized weight distribution based on the training label data, and sending the obtained weight distribution data to the hierarchical extreme learning machine memristor hardware network, so that a parameter adjustment decision for the mechanical arm is obtained from the output of the network model, thereby assisting control of the mechanical arm in 3D printing.
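The three method steps can be sketched end-to-end in software. Everything below is an illustrative assumption rather than the patent's implementation: the frame sizes, the global binarization threshold, a single non-hierarchical ELM layer standing in for the hierarchical memristor network, and the decision rule.

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1: hypothetical grayscale camera frames and a per-frame label
frames = rng.random((64, 16, 16))                 # 64 frames, 16x16 pixels
labels = (frames.mean(axis=(1, 2)) > 0.5).astype(float).reshape(-1, 1)

# Step 2: feature extraction + binarization (simple global threshold)
features = (frames > 0.5).astype(float).reshape(64, -1)   # 64 x 256

# Step 3: one ELM layer standing in for the hierarchical network:
# random input weights stay fixed, output weights solved in closed form
L, C = 40, 10.0
W = rng.standard_normal((features.shape[1], L))
bias = rng.standard_normal(L)
H = np.tanh(features @ W + bias)
beta = np.linalg.solve(np.eye(L) / C + H.T @ H, H.T @ labels)

# Step 4: the model output drives a (hypothetical) parameter decision
pred = H @ beta
decision = np.where(pred > 0.5, "slow_down", "keep_speed")
```

In the claimed system the trained weights would be written to the memristor array rather than held in software, but the data flow (image, features, model output, decision) is the same.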
10. A computer storage medium having stored thereon a computer program, which when executed by a machine performs the steps of the method according to claim 9.
CN202410037425.3A 2024-01-10 2024-01-10 Intelligent auxiliary system, method and storage medium for 3D printing Pending CN117532885A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410037425.3A CN117532885A (en) 2024-01-10 2024-01-10 Intelligent auxiliary system, method and storage medium for 3D printing

Publications (1)

Publication Number Publication Date
CN117532885A true CN117532885A (en) 2024-02-09

Family

ID=89796304

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410037425.3A Pending CN117532885A (en) 2024-01-10 2024-01-10 Intelligent auxiliary system, method and storage medium for 3D printing

Country Status (1)

Country Link
CN (1) CN117532885A (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004082592A (en) * 2002-08-28 2004-03-18 Seiko Epson Corp Printer with cache memory
US20170161606A1 (en) * 2015-12-06 2017-06-08 Beijing University Of Technology Clustering method based on iterations of neural networks
CN109101698A (en) * 2018-07-19 2018-12-28 广州科技贸易职业学院 A kind of Feature Selection Algorithms based on injection molding model, device and storage medium
ES2703455A1 (en) * 2017-09-21 2019-03-08 Univ Valencia Politecnica MATERIAL FOR CONSTRUCTION BY MOLDING, EXTRUSION OR 3D PRINTING (Machine-translation by Google Translate, not legally binding)
CN110288257A (en) * 2019-07-01 2019-09-27 西南石油大学 A kind of depth transfinites indicator card learning method
EP3629202A1 (en) * 2018-09-26 2020-04-01 Siemens Aktiengesellschaft Method for optimizing a model of a component generated by an additive production method, method for producing a component, computer program and data carrier
CN112115579A (en) * 2020-08-12 2020-12-22 江苏师范大学 Multi-target optimization method for injection molding process parameters of glass fiber reinforced plastics
WO2022016102A1 (en) * 2020-07-16 2022-01-20 Strong Force TX Portfolio 2018, LLC Systems and methods for controlling rights related to digital knowledge
CN115047531A (en) * 2022-06-16 2022-09-13 重庆大学 Transient electromagnetic data inversion method based on ELM network
CN115346096A (en) * 2022-06-27 2022-11-15 电子科技大学 Pulse neural network model constructed based on memristor
CN115534319A (en) * 2022-09-21 2022-12-30 成都航空职业技术学院 3D printing path planning method based on HGEFS algorithm
US20230098602A1 (en) * 2020-12-18 2023-03-30 Strong Force Vcn Portfolio 2019, Llc Robotic Fleet Configuration Method for Additive Manufacturing Systems
US20230182399A1 (en) * 2020-06-25 2023-06-15 Holo, Inc. Methods and systems for three-dimensional printing management
CN116921703A (en) * 2023-06-29 2023-10-24 江苏科技大学 Material increase process reasoning method based on machine learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HOU Yuemin; JI Linhong; JIN Dewen: "Neural Network Modeling of Neural Transmission Pathways", Computer Applications and Software, no. 11, 15 November 2008 (2008-11-15), pages 138-140 *
LI Hanman; WANG Lidan; DUAN Shukai: "Improved Extreme Learning Machine and Its Application to Imbalanced Data", Journal of Southwest University (Natural Science Edition), no. 06, 17 June 2020 (2020-06-17), pages 145-153 *
YANG Shengyuan; CHEN Yao; YI Fei; LIU Xin: "STL Surface Mesh Reconstruction Algorithm Based on 2-Dimensional Manifolds", Journal of Software, no. 12, 24 March 2017 (2017-03-24), pages 248-256 *

Similar Documents

Publication Publication Date Title
Davies et al. Advancing neuromorphic computing with loihi: A survey of results and outlook
Woźniak et al. Deep learning incorporating biologically inspired neural dynamics and in-memory computing
Pei et al. Towards artificial general intelligence with hybrid Tianjic chip architecture
Lin et al. Exploring context with deep structured models for semantic segmentation
Kim et al. Variational temporal abstraction
US9646243B1 (en) Convolutional neural networks using resistive processing unit array
US10708522B2 (en) Image sensor with analog sample and hold circuit control for analog neural networks
US10373051B2 (en) Resistive processing unit
Almási et al. Review of advances in neural networks: Neural design technology stack
KR102483643B1 (en) Method and apparatus for training model and for recognizing bawed on the model
Yadav et al. An introduction to neural network methods for differential equations
JP7399517B2 (en) Memristor-based neural network parallel acceleration method, processor, and device
US11087204B2 (en) Resistive processing unit with multiple weight readers
Lotfi Rezaabad et al. Long short-term memory spiking networks and their applications
Fouda et al. Spiking neural networks for inference and learning: A memristor-based design perspective
Soman et al. Recent trends in neuromorphic engineering
Zhang et al. Brain-inspired active learning architecture for procedural knowledge understanding based on human-robot interaction
KR20230029759A (en) Generating sparse modifiable bit length determination pulses to update analog crossbar arrays
Sun et al. Overlooked poses actually make sense: Distilling privileged knowledge for human motion prediction
CN117532885A (en) Intelligent auxiliary system, method and storage medium for 3D printing
Yu et al. Multi‐stream adaptive spatial‐temporal attention graph convolutional network for skeleton‐based action recognition
Rizzardo et al. Sim-to-real via latent prediction: Transferring visual non-prehensile manipulation policies
US11163707B2 (en) Virtualization in hierarchical cortical emulation frameworks
KR20230005309A (en) Efficient Tile Mapping for Row-by-Row Convolutional Neural Network Mapping for Analog Artificial Intelligence Network Inference
Alshubaily Efficient neural architecture search with performance prediction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination