CN114505105B - Micro-fluidic chip based on memory calculation - Google Patents

Micro-fluidic chip based on memory calculation

Info

Publication number
CN114505105B
Authority
CN
China
Prior art keywords
module
memory
microfluidic
machine learning
micro
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210037583.XA
Other languages
Chinese (zh)
Other versions
CN114505105A (en)
Inventor
刘洋
王弘喆
刘益安
于奇
胡绍刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202210037583.XA priority Critical patent/CN114505105B/en
Publication of CN114505105A publication Critical patent/CN114505105A/en
Application granted granted Critical
Publication of CN114505105B publication Critical patent/CN114505105B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B01 PHYSICAL OR CHEMICAL PROCESSES OR APPARATUS IN GENERAL
    • B01L CHEMICAL OR PHYSICAL LABORATORY APPARATUS FOR GENERAL USE
    • B01L3/00 Containers or dishes for laboratory use, e.g. laboratory glassware; Droppers
    • B01L3/50 Containers for the purpose of retaining a material to be analysed, e.g. test tubes
    • B01L3/502 Containers for the purpose of retaining a material to be analysed, e.g. test tubes with fluid transport, e.g. in multi-compartment structures
    • B01L3/5027 Containers for the purpose of retaining a material to be analysed, with fluid transport by integrated microfluidic structures, i.e. dimensions of channels and chambers are such that surface tension forces are important, e.g. lab-on-a-chip
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Dispersion Chemistry (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Analytical Chemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Hematology (AREA)
  • Clinical Laboratory Science (AREA)
  • Chemical Kinetics & Catalysis (AREA)
  • Micromachines (AREA)
  • Physical Or Chemical Processes And Apparatus (AREA)

Abstract

The invention relates to the technical field of microfluidic chips, and in particular to a microfluidic chip based on in-memory computing. To address the memory-wall bottleneck that currently limits the application of machine learning in the microfluidic field, the invention combines in-memory computing with the microfluidic chip at the hardware-architecture level, integrating a microfluidic module, a sensing module and a memory computing module into the chip. The computation that machine learning would otherwise perform in a separate computing unit is thereby moved into the storage unit of the microfluidic chip, which effectively improves the computational efficiency of machine learning for microfluidics and reduces the power consumption of the system.

Description

Micro-fluidic chip based on memory calculation
Technical Field
The invention relates to the technical field of microfluidic chips, in particular to a microfluidic chip based on memory calculation.
Background
Microfluidics is a technology for precisely controlling and manipulating microscale fluids (10^-9 to 10^-18 L), and such fluids can be manipulated on the basis of a variety of disciplinary principles (e.g. electrical manipulation techniques, optical manipulation techniques, magnetic manipulation techniques and acoustic manipulation techniques). The microfluidic chip (also called a lab-on-a-chip) integrates functions such as sample pretreatment, reaction, sorting, detection and analysis from the fields of biology, chemistry and medicine onto a micron-scale chip, and completes the whole process of sample analysis on the chip. It has the advantages of small volume, low sample consumption, fast detection, simple and convenient operation, multifunctional integration and portability, and therefore has broad application prospects in many fields.
In the operation of a traditional microfluidic chip, professionals often have to observe and intervene manually to keep the chip in a stable working state. The chip also produces a large volume of real-time data during operation, and the effective feature information in these data is difficult to define and extract, so microfluidic control and analysis remains one of the difficulties in the field. To address this problem, researchers have in recent years applied machine learning to the field of microfluidic chips, optimizing functions such as operation, control and analysis and achieving remarkable breakthroughs. Machine learning has unique advantages in big-data analysis and feature extraction, and has already shown excellent application potential in the microfluidic-chip field.
However, the technical development of machine learning in the microfluidic field currently focuses on the innovation and optimization of algorithms, while the computing platform is still based on the traditional von Neumann architecture. In the von Neumann architecture, the storage unit and the computing unit are two independent units: during computation, data in the storage unit must be transferred to the computing unit according to instructions, and the computing unit writes the data back to the storage unit after the computation is finished. This frequent data movement between the two units limits throughput and dominates energy consumption, a problem known as the memory wall. Because a microfluidic chip usually generates a large amount of real-time data during operation, the memory-wall bottleneck is particularly pronounced. The advantages of applying machine learning to microfluidic chips are clear, yet current improvements are made mainly at the software level, and optimization at the hardware level has not been considered. A technical breakthrough at the hardware-architecture level for the machine-learning computing platform in the microfluidic field has therefore become an urgent need, and no corresponding technical means is yet available in the industry.
Disclosure of Invention
To solve the above problems and defects, namely the memory-wall bottleneck in the current application of machine learning in the microfluidic field, the invention provides a microfluidic chip based on in-memory computing. Structurally, in-memory computing technology is combined with the microfluidic chip, and a microfluidic module, a sensing module and a memory computing module are integrated into the chip. The sensing module acquires basic feature information of the microfluidic module (such as speed, shape, arrangement and temperature) in real time and uses it as the input variables of machine learning. The computation that machine learning would otherwise perform in a separate computing unit is moved into the storage unit of the memory computing module, which effectively improves the computational efficiency of machine learning on the microfluidic chip and reduces the power consumption of the system.
A micro-fluidic chip based on memory calculation comprises a microfluidic module, a sensing module and a memory computing module.
The microfluidic module comprises at least one microfluidic driving module and at least one micro-channel, and each microfluidic driving module is based on one or more manipulation principles (such as electrical, optical, magnetic and/or acoustic manipulation techniques).
The material, parameters and shape of each micro-channel are selected and designed according to the functional requirements of the microfluidic module, and each micro-channel works together with the microfluidic driving modules to precisely manipulate microscale fluids (10^-9 to 10^-18 L) and realize the microfluidic function.
The sensing module comprises at least one image sensing module and/or physical sensing module; the image sensing module acquires image basic feature information of the microfluidic module (such as shape and arrangement) in real time, the physical sensing module acquires physical basic feature information of the microfluidic module (such as speed and temperature) in real time, and the corresponding machine-learning input variables are output.
The memory computing module receives the machine-learning input variables output by the sensing module and performs the computation in its own storage unit to complete the training and/or inference of machine learning; a training result is fed back to the memory computing module to update the stored model parameters, and an inference result is output to the microfluidic module or to peripheral equipment of the microfluidic chip, thereby realizing the specific task of machine learning.
Further, the specific tasks of machine learning are sample detection, sample classification, and/or automatic manipulation.
Further, the microfluidic driving module is based on an electrical manipulation technology, an optical manipulation technology, a magnetic manipulation technology and/or an acoustic manipulation technology.
Further, the microfluidic function realized by the microfluidic module is a sample pretreatment, reaction and/or sorting function.
Further, the machine learning algorithm is an artificial neural network.
The micro-fluidic chip based on memory calculation has the following working process:
Step 1: global reset; the states of all modules in the microfluidic chip are initialized, and the machine-learning model parameters are stored into the storage unit of the memory computing module.
Step 2: a microscale fluid experimental sample is added into the micro-channel, and the experimental sample in the micro-channel is precisely manipulated in combination with each microfluidic driving module to realize the microfluidic function.
Step 3: the sensing module acquires basic feature information of the microfluidic module (such as speed, shape, arrangement and temperature) in real time and outputs the corresponding machine-learning input variables.
Step 4: the machine-learning input variables are transmitted to the memory computing module, and the machine-learning computation (such as the artificial-neural-network computation) is performed in the storage unit of the memory computing module to complete the training and/or inference of machine learning.
Step 5: a training result is fed back to the memory computing module to update the stored model parameters; an inference result is output to the microfluidic module or to peripheral equipment of the microfluidic chip, thereby realizing the specific task of machine learning.
In summary, from the perspective of the hardware architecture, the invention combines in-memory computing technology with the microfluidic chip and integrates the microfluidic module, the sensing module and the memory computing module into the chip, so that the computation machine learning would otherwise perform in an independent computing unit is moved into the storage unit of the microfluidic chip, effectively improving the computational efficiency of machine learning for microfluidics and reducing the power consumption of the system.
Drawings
FIG. 1 is a schematic framework diagram of the memory-computing-based microfluidic chip according to the present invention;
FIG. 2 is a full workflow diagram of an embodiment;
FIG. 3 is a schematic frame diagram of a microfluidic module according to an embodiment;
FIG. 4 is a schematic diagram of the structure of an RRAM crossbar memory array in an embodiment;
FIG. 5 is a schematic diagram of the matrix multiplication of the RRAM crossbar memory array in an embodiment;
FIG. 6 is a block diagram of an embodiment of a memory compute module;
FIG. 7 is a flowchart of the operation of the artificial neural network in the embodiment.
Detailed Description
In order to more clearly explain the technical solution of the present invention, the present invention is further described in detail with reference to the accompanying drawings and examples.
In this embodiment, an RRAM device (resistive random access memory) is used as the memory cell, a 1R structure is adopted as the gating scheme of the crossbar memory array, an SAW device (surface acoustic wave device) is adopted as the microfluidic driving module, and an artificial neural network is adopted as the machine learning algorithm; the accompanying drawings are briefly introduced on this basis. Obviously, the memory cell used by the framework is not limited to RRAM devices, the gating scheme is not limited to the 1R structure, the microfluidic driving module is not limited to SAW devices, and the machine learning algorithm is not limited to artificial neural networks. The drawings in the following description show only one embodiment of the present invention, and other drawings can be obtained by those skilled in the art without inventive effort.
Fig. 1 is a schematic framework diagram of the memory-computing-based microfluidic chip of the present invention. The invention consists of three main modules: a microfluidic module, a sensing module and a memory computing module. The three modules are integrated on the same substrate (such as a PDMS substrate). The microfluidic module precisely manipulates the microscale fluid experimental sample to realize microfluidic functions (such as pretreatment, reaction and sorting). The sensing module acquires basic feature information of the microfluidic module (such as speed, shape, arrangement and temperature) and outputs the corresponding analog voltages. The memory computing module stores the machine-learning model parameters and performs the computation in its storage unit to complete training and/or inference; a training result is fed back to the memory computing module to update the stored model, and an inference result is output to the microfluidic chip or to peripheral equipment to realize the specific task of machine learning (such as sample detection, sample analysis and automatic control).
In this embodiment, the memory computing module includes a crossbar memory array, a word/bit-line driving module, a current-limiting module, a normalization module, and an activation function module.
The crossbar memory array stores the weight data of the artificial neural network and performs the matrix-multiplication computation of machine learning. The rows and columns of the crossbar array are called word lines and bit lines, respectively, and both are connected to the word/bit-line driving module. The memory cells of such an array are typically organized in one of several gating schemes: a transistor acting as a select transistor in series with a memory device (1T-1R structure); a rectifying device (e.g. a diode) in series with a memory device (1D-1R structure); or a single memory device acting alone as the memory cell (1R structure). The memory cell is usually a nonvolatile memory device (e.g. resistive random access memory (RRAM), phase-change random access memory (PCRAM), ferroelectric random access memory (FRAM), magnetoresistive random access memory (MRAM), etc.).
At least one crossbar memory array is formed by arranging nonvolatile memory cells and is used for storing the weight data of the artificial neural network and performing the matrix-multiplication computation of the artificial neural network.
At least one word/bit-line driving module is used for selecting memory addresses, controlling the working state of each memory cell in the crossbar memory array, and performing read/write operations on the data in the memory cells.
At least one current-limiting module precisely programs (erases/writes) the conductance value of the memory cell selected by the word/bit-line driving module by limiting the maximum current of the word/bit-line driving module.
The normalization module scales the input voltage/current signals proportionally so that they fall into a specific interval; this interval is determined by the practical application and ensures that the magnitude of the input signals falls within the read interval of the memory cells.
At least one activation function module takes the output current of the crossbar memory array as the input variable of the activation function and outputs the corresponding analog voltage as the output variable, so as to increase the nonlinearity of the matrix-multiplication result.
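As a concrete illustration of the proportional scaling performed by the normalization module described above, the following is a minimal Python sketch; the linear min-max mapping and the function name are assumptions chosen for illustration, and the 0.1 to 0.5 V window corresponds to the read-voltage range given later for the RRAM cells.

```python
import numpy as np

V_READ_MIN, V_READ_MAX = 0.1, 0.5   # read-voltage window of the memory cells (V)

def normalize(signals: np.ndarray,
              v_min: float = V_READ_MIN,
              v_max: float = V_READ_MAX) -> np.ndarray:
    """Scale input voltage/current signals proportionally into [v_min, v_max]."""
    lo, hi = float(signals.min()), float(signals.max())
    if hi == lo:                     # constant input: map to the middle of the window
        return np.full_like(signals, (v_min + v_max) / 2.0, dtype=float)
    return v_min + (signals - lo) * (v_max - v_min) / (hi - lo)

# Example: raw sensor outputs (arbitrary units) mapped to word-line voltages
print(normalize(np.array([0.0, 2.5, 5.0])))   # -> [0.1 0.3 0.5]
```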
Fig. 2 is a complete work flow diagram of the present embodiment.
The work flow of this embodiment mainly includes 5 steps:
(1) Global reset: all modules of the microfluidic chip are initialized; the conductance value of each RRAM memory cell is then erased/written through the word/bit-line driving module, so that the weight data of the artificial neural network are stored in the memory computing module.
(2) Micro-scale fluid experimental samples are added into a micro-channel of the micro-fluidic module, and the experimental samples in the micro-channel are accurately controlled, so that the micro-fluidic functions (such as sample pretreatment, reaction and/or sorting functions) are realized.
(3) The sensing module extracts basic feature information of the microfluidic module (such as speed, shape, arrangement and temperature) and outputs the corresponding analog voltages as the input variables of the artificial neural network.
(4) The analog voltage is transmitted to the memory computing module.
4-1. The memory computing module acquires the analog voltages output by the sensing module, scales them through the normalization module so that they fall into the read interval of the RRAM memory cells, and applies them to the word lines of the crossbar memory array.
4-2. The crossbar memory array completes the matrix-multiplication computation through its own physical characteristics, and the analog currents output by the bit lines are passed to the activation function module as the result of the matrix multiplication.
4-3. The activation function module applies the nonlinearity to the matrix-multiplication result, completing the computation and outputting the computation result.
(5) The memory computing module outputs the computation result to complete the training and/or inference of the artificial neural network. For training, the computation result is fed back to the memory computing module to update the weight data stored therein; for inference, the computation result is output to the microfluidic chip and/or peripheral equipment to complete the specific task of machine learning (such as sample detection, sample analysis and automatic control).
If the artificial neural network is a multilayer network, the computation results (analog currents) of the input layer and the hidden layers are passed to the normalization module, converted into analog voltages of corresponding magnitude, and the process returns to step 4-2 to compute the next layer.
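The per-layer loop of steps 4-1 to 4-3 can be summarized in the following Python/NumPy sketch. It is an idealized behavioural model under stated assumptions: each layer's weights are stored as a conductance matrix, the normalization is a linear rescale into the read window, and a ReLU stands in for the otherwise unspecified activation function; it is not a circuit-level description of the embodiment.

```python
import numpy as np

V_READ_MIN, V_READ_MAX = 0.1, 0.5            # read window from the embodiment (V)

def normalize(x):
    """Normalization module: scale signals into the memory-cell read window."""
    lo, hi = x.min(), x.max()
    if hi == lo:
        return np.full_like(x, (V_READ_MIN + V_READ_MAX) / 2)
    return V_READ_MIN + (x - lo) * (V_READ_MAX - V_READ_MIN) / (hi - lo)

def crossbar_vmm(v_wordline, G):
    """Step 4-2: bit-line currents I_j = sum_i V_i * G_ij (Ohm + Kirchhoff)."""
    return v_wordline @ G

def activation(i_bitline):
    """Step 4-3: activation function module (ReLU assumed for illustration)."""
    return np.maximum(i_bitline, 0.0)

def forward(sensor_voltages, conductance_layers):
    """Steps 4-1 to 4-3, repeated layer by layer for a multilayer network."""
    signal = sensor_voltages
    for G in conductance_layers:             # one crossbar array per layer
        v = normalize(signal)                # 4-1: scale into the read interval
        i = crossbar_vmm(v, G)               # 4-2: in-memory matrix multiplication
        signal = activation(i)               # 4-3: nonlinearity
    return signal                            # inference result (step 5)

# Example: 4 sensed features, one hidden layer of 8 neurons, 3 output classes
rng = np.random.default_rng(0)
layers = [rng.uniform(1e-6, 1e-4, (4, 8)),   # conductance values in siemens
          rng.uniform(1e-6, 1e-4, (8, 3))]
print(forward(rng.random(4), layers))
```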
Fig. 3 is a schematic frame diagram of the microfluidic module and the sensing module in this embodiment.
The microfluidic module comprises a micro-channel and a SAW-device-based microfluidic driving module. The micro-channel is a microscale fluid channel prepared by micro-fabrication technology; by selecting and designing micro-channels of different materials, parameters and shapes and combining them with the microfluidic driving module, microscale fluid experimental samples can be precisely manipulated. A SAW device integrated around the micro-channel serves as the microfluidic driving module: it converts an input electrical signal into acoustic energy through an interdigital transducer (IDT), and by selecting and designing the material, shape and parameters of the SAW device and adjusting the power and/or resonant frequency of the input electrical signal, acoustic waves of different types and parameters (such as Rayleigh waves, Lamb waves and Love waves) can be generated. The acoustic energy is focused on the substrate surface, so that the microfluid on the substrate surface and the substances within it can be effectively driven, separated, and so on. The sensing module comprises an image sensor and a physical sensor: the image sensor acquires image basic feature information of the microfluidic module (such as shape and arrangement) in real time, and the physical sensor acquires physical basic feature information (such as speed and temperature) in real time.
fig. 4 is a schematic structural diagram of a RRAM crossbar memory array with a 1R structure in this embodiment.
This embodiment adopts the structurally simplest RRAM crossbar memory array, whose memory cells use the 1R structure. The crossbar memory array comprises a plurality of word lines and a plurality of bit lines; at each word-line/bit-line crossing there is a layer of micro/nano-scale insulating material (such as hafnium oxide, titanium dioxide or nickel oxide), and the word line and bit line serve as the top and bottom electrodes respectively, forming an RRAM memory cell with the typical MIM (metal-insulator-metal) structure. By selecting a specific word line and bit line, read/write operations can be performed on a specific RRAM memory cell; for example, when the word/bit-line driving module selects word line WL_i and bit line BL_j, the corresponding RRAM memory cell M_ij is selected. The conductance value of an RRAM memory cell changes according to the voltage applied across the device. Take the direction from the word line (top electrode) to the bit line (bottom electrode) as the forward direction. When the word-bit voltage is positive, the conductance of the RRAM memory cell increases if and only if the voltage exceeds the forward resistive-switching threshold V_set; this phenomenon is called the Set process. During the Set process, the change of the conductance value can be controlled by adjusting the magnitude of the compliance (limiting) current and the duration of the applied voltage: the larger the compliance current and the longer the voltage is applied, the larger the change of the conductance value, and vice versa. When the word-bit voltage is negative, the conductance of the RRAM memory cell decreases if and only if the voltage exceeds the negative resistive-switching threshold V_reset; this phenomenon is called the Reset process, and the change of the conductance value is controlled in the same way. When the word/bit-line driving unit reads the conductance value of an RRAM memory cell, the word-bit voltage is set to a small positive read voltage V_read (usually 0.1 to 0.5 V) far below the forward threshold V_set, the current value I_read is measured, and by the formula:
G = I_read / V_read
the conductance of the RRAM memory cell can be obtained.
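The Set/Reset and read behaviour just described can be captured in a small behavioural model. The following Python sketch is only an idealized illustration: the thresholds, conductance bounds and the linear dependence of the conductance change on the compliance current and pulse duration are assumed values, not parameters taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class RRAMCell:
    """Idealized 1R RRAM cell with conductance bounded between g_min and g_max."""
    g: float = 1e-6          # current conductance (S)
    g_min: float = 1e-6      # high-resistance-state conductance (assumed)
    g_max: float = 1e-4      # low-resistance-state conductance (assumed)
    v_set: float = 1.5       # forward switching threshold V_set (assumed)
    v_reset: float = -1.2    # negative switching threshold V_reset (assumed)

    def apply_pulse(self, v: float, i_cc: float, t: float) -> None:
        """Set/Reset: conductance change grows with compliance current and pulse time."""
        k = 1e6              # assumed fitting constant of the behavioural model
        if v >= self.v_set:                        # Set: conductance increases
            self.g = min(self.g_max, self.g + k * i_cc * t)
        elif v <= self.v_reset:                    # Reset: conductance decreases
            self.g = max(self.g_min, self.g - k * i_cc * t)
        # |v| below both thresholds: no switching occurs

    def read(self, v_read: float = 0.2) -> float:
        """Read with a small voltage (0.1-0.5 V): G = I_read / V_read."""
        i_read = self.g * v_read                   # Ohm's law
        return i_read / v_read

cell = RRAMCell()
cell.apply_pulse(v=1.8, i_cc=50e-6, t=1e-6)        # Set pulse
print(f"G after Set:   {cell.read():.2e} S")
cell.apply_pulse(v=-1.5, i_cc=50e-6, t=1e-6)       # Reset pulse
print(f"G after Reset: {cell.read():.2e} S")
```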
Fig. 5 illustrates the matrix multiplication principle based on RRAM crossbar memory array in this embodiment.
The RRAM crossbar memory array performs the matrix-multiplication computation of the neural network inside the crossbar matrix itself, based on Ohm's law and Kirchhoff's current law, thereby realizing in-memory computing. In the matrix multiplication, the input voltage on word line i is V_i (i = 1, 2, 3, ..., n, where n is the number of word lines), and the conductance value of the RRAM memory cell M_ij between word line i and bit line j is G_ij (j = 1, 2, 3, ..., m, where m is the number of bit lines). The currents I_ij flowing through the memory cells M_ij are summed on bit line j, so the output current I_j of bit line j is the matrix-multiplication result:
I_j = Σ_{i=1}^{n} V_i · G_ij
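As an illustration of this principle, the following NumPy sketch compares the per-cell Ohm's-law currents summed on each bit line with the equivalent matrix-vector product; the array size and the voltage/conductance values are arbitrary example numbers, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 3                                    # n word lines, m bit lines
V = rng.uniform(0.1, 0.5, n)                   # word-line read voltages V_i (V)
G = rng.uniform(1e-6, 1e-4, (n, m))            # cell conductances G_ij (S)

# Ohm's law per cell, then Kirchhoff's current law per bit line
I_cells = V[:, None] * G                       # I_ij = V_i * G_ij
I_bitlines = I_cells.sum(axis=0)               # I_j = sum_i I_ij

# The same result expressed as a matrix-vector product
assert np.allclose(I_bitlines, V @ G)
print(I_bitlines)                              # bit-line output currents (A)
```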
fig. 6 is a schematic diagram of a framework of the memory computing module in this embodiment.
The memory computing module comprises the RRAM crossbar memory array, the word/bit-line driving module, the current-limiting module, the normalization module and the activation function module. The RRAM crossbar memory array stores the weight data of the neural network and performs the matrix-multiplication computation of the neural network. The word/bit-line driving module selects memory addresses, controls the working state of each memory cell in the crossbar memory array, and performs read/write operations on the data in the memory cells. The current-limiting module precisely programs (erases/writes) the conductance value of the memory cell selected by the word/bit-line driving module by adjusting the output current of the word/bit-line driving module. The normalization module scales the input voltage/current signals proportionally so that they fall into a specific interval, ensuring that the magnitude of the input signals falls within the read interval of the crossbar memory matrix. The activation function module applies the nonlinearity to the matrix-multiplication result and outputs the final computation result.
Fig. 7 is a schematic framework diagram of the artificial neural network of this embodiment. The artificial neural network of this embodiment comprises an input layer, hidden layers and an output layer; the number of hidden layers and the specific number of neurons in each layer can be determined according to the practical application. The strength of the connection between a neuron i in one layer and a neuron j in the next layer is determined by the weight between the two neurons: the larger the weight, the stronger the connection. In the matrix-multiplication computation based on the RRAM crossbar memory array, the input voltage V_i of word line i is the output signal of neuron i in the previous layer, the conductance value G_ij of RRAM memory cell M_ij is the weight between neuron i in the previous layer and neuron j in the next layer, and the output current I_j of the bit line is converted by the normalization module into the corresponding analog voltage, which serves as the input signal of neuron j in the next layer.
According to the embodiment, the memory computing technology is combined with the microfluidic chip, the microfluidic module, the sensing module and the memory computing module are integrated into the microfluidic chip, so that the calculation of machine learning which is originally required to be performed by an external computing unit is transferred to the storage unit of the microfluidic chip, the computational efficiency of the machine learning for the microfluidic is effectively improved, and the power consumption of the system is reduced.

Claims (9)

1. A micro-fluidic chip based on memory calculation is characterized in that: the device comprises a microfluidic module, a sensing module and a memory computing module;
the microfluidic module comprises at least one microfluidic driving module and at least one microchannel, wherein each microchannel selects and designs the material, parameters and shape of the microchannel according to the functional requirements of the microfluidic module, and combines the microfluidic driving module to accurately control the microscale fluid so as to realize the microfluidic function;
the sensing module comprises at least one image sensing module and/or a physical sensing module, the image sensing module is used for acquiring the image basic characteristic information of the microfluidic module in real time, and the physical sensing module is used for acquiring the physical basic characteristic information of the microfluidic module in real time and outputting a corresponding machine learning input variable;
the memory computing module comprises a cross storage array, a word/bit line driving module, a current limiting module, a normalization module and an activation function module; the memory computing module receives the machine learning input variables output by the sensing module, performs the computation in its storage unit, completes the training and/or inference of machine learning, feeds a training result back to the memory computing module to update the model parameters stored therein, and outputs an inference result to the microfluidic module or to peripheral equipment of the microfluidic chip to realize the specific task of machine learning;
at least one cross memory array is arranged by taking a nonvolatile memory as a basic unit and used for storing model parameters of the artificial neural network and finishing matrix multiplication calculation of the artificial neural network; the rows and the columns of the crossed array are respectively called word lines and bit lines, the word lines and the bit lines are connected with a word/bit line driving module, and the storage units of the crossed array are nonvolatile storage devices;
at least one word/bit line driving module is used for selecting memory addresses, controlling the working state of each memory cell in the cross memory array and performing read/write operation on data in the memory cells;
at least one current limiting module, which accurately erases the conductance value of the memory unit selected by the word/bit line driving module by limiting the maximum current of the word/bit line driving module;
at least one normalization module, which is used for scaling the input signals proportionally so that they fall into a specific interval, wherein the specific interval needs to ensure that the magnitude of the input signals falls into a reading interval or an erasing interval of the cross memory matrix;
and at least one activation function module, which is used for increasing the nonlinearity of the matrix multiplication result.
2. The memory computing-based microfluidic chip of claim 1, wherein: the specific tasks of machine learning are sample detection, sample classification, and/or automatic manipulation.
3. The memory computing-based microfluidic chip of claim 1, wherein:
the microfluidic driving module is based on an electrical control technology, an optical control technology, a magnetic control technology and/or an acoustic control technology.
4. The memory computing-based microfluidic chip of claim 1, wherein: the microfluidic function realized by the microfluidic module is a sample pretreatment, reaction and/or sorting function.
5. The memory computing-based microfluidic chip of claim 1, wherein:
the image basic characteristic information acquired by the image sensing module is in shape and/or arrangement; the physical basic characteristic information acquired by the physical sensing module is speed and/or temperature.
6. The memory computing-based microfluidic chip of claim 1, wherein: the machine learning algorithm is an artificial neural network.
7. The memory computing-based microfluidic chip of claim 1, wherein:
the organization scheme of the storage unit is as follows: a transistor as a selection transistor is connected in series with a memory device to form a 1T-1R structure; or a rectifying device and a storage device are connected in series to form a 1D-1R structure; or a single memory device as a 1R structure memory cell, functions as a gate of the memory cell.
8. The memory computing-based microfluidic chip of claim 1, wherein:
the memory unit is a Resistance Random Access Memory (RRAM), a Phase Change Random Access Memory (PCRAM), a Ferroelectric Random Access Memory (FRAM) and/or a Magnetoresistive Random Access Memory (MRAM).
9. The memory-computation-based microfluidic chip of claim 1, wherein the workflow is as follows:
step 1, global reset, initializing the states of all modules in the microfluidic chip; storing the model parameters of machine learning to a storage unit of a memory calculation module;
step 2, adding a micro-scale fluid experimental sample into the micro-channel, and combining each micro-fluidic driving module to accurately control the experimental sample in the micro-channel so as to realize the micro-fluidic function;
step 3, the sensing module acquires basic characteristic information of the microfluidic module in real time and outputs corresponding machine learning input variables;
step 4, the machine learning input variable is transmitted to a memory calculation module, and the machine learning calculation is carried out on a storage unit of the memory calculation module to finish the training and/or the inference of the machine learning;
step 5, feeding back the training result to a memory calculation module, and updating the stored model parameters; and the inference result is output to a microfluidic module or peripheral equipment of the microfluidic chip, so that the specific task of machine learning is realized.
CN202210037583.XA 2022-01-13 2022-01-13 Micro-fluidic chip based on memory calculation Active CN114505105B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210037583.XA CN114505105B (en) 2022-01-13 2022-01-13 Micro-fluidic chip based on memory calculation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210037583.XA CN114505105B (en) 2022-01-13 2022-01-13 Micro-fluidic chip based on memory calculation

Publications (2)

Publication Number Publication Date
CN114505105A CN114505105A (en) 2022-05-17
CN114505105B true CN114505105B (en) 2022-11-11

Family

ID=81550758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210037583.XA Active CN114505105B (en) 2022-01-13 2022-01-13 Micro-fluidic chip based on memory calculation

Country Status (1)

Country Link
CN (1) CN114505105B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115468916A (en) * 2022-08-03 2022-12-13 天津大学 On-chip fluid control module, acoustic fluid chip and analysis device
CN115876840A (en) * 2022-11-23 2023-03-31 杭州未名信科科技有限公司 Gas detection system integrating sensing and calculating, detection method and detection equipment
CN116124334A (en) * 2023-01-10 2023-05-16 杭州未名信科科技有限公司 Pressure detection system, method, equipment and medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100349178C (en) * 2004-12-06 2007-11-14 中国科学院大连化学物理研究所 Micro flow controlling chip DNA molecular memory
US10222391B2 (en) * 2011-12-07 2019-03-05 The Johns Hopkins University System and method for screening a library of samples
CN105388131A (en) * 2014-09-09 2016-03-09 国家纳米科学中心 Fluorescence detection instrument and system based on micro-fluidic chip
CN105260730A (en) * 2015-11-24 2016-01-20 严媚 Machine learning-based contact-type imaging microfluid cell counter and image processing method thereof
CN112566721A (en) * 2018-05-28 2021-03-26 杭州纯迅生物科技有限公司 Method and apparatus for controlling and manipulating multiphase flow in microfluidics using artificial intelligence
CN112151095A (en) * 2019-06-26 2020-12-29 北京知存科技有限公司 Storage and calculation integrated chip and storage unit array structure
CN110515454B (en) * 2019-07-24 2021-07-06 电子科技大学 Neural network architecture electronic skin based on memory calculation
CN113607628B (en) * 2021-09-02 2023-02-10 清华大学 Method for processing cell image stream by nerve morphology calculation driving image flow cytometer

Also Published As

Publication number Publication date
CN114505105A (en) 2022-05-17

Similar Documents

Publication Publication Date Title
CN114505105B (en) Micro-fluidic chip based on memory calculation
US10692570B2 (en) Neural network matrix multiplication in memory cells
CN109460817B (en) Convolutional neural network on-chip learning system based on nonvolatile memory
US20220277199A1 (en) Method for data processing in neural network system and neural network system
CN100407471C (en) Integrated circuit device and neure
US20190138892A1 (en) Neural network device and method
CN110825345A (en) Multiplication using non-volatile memory cells
CN108475519A (en) Including memory and its device and method of operation
WO2020093726A1 (en) Maximum pooling processor based on 1t1r memory device
EP3506266A1 (en) Methods and systems for performing a calculation across a memory array
CN107533862A (en) Crossed array for calculating matrix multiplication
CN110362291B (en) Method for performing nonvolatile complex operation by using memristor
US11449740B2 (en) Synapse circuit with memory
CN113077829A (en) Memristor array-based data processing method and electronic device
CN106448729B (en) A kind of circuit and method for realizing bi-directional digital operation based on phase transition storage
Lebdeh et al. Memristive device based circuits for computation-in-memory architectures
CN115876840A (en) Gas detection system integrating sensing and calculating, detection method and detection equipment
CN110543937A (en) Neural network, operation method and neural network information processing system
WO2018137177A1 (en) Method for convolution operation based on nor flash array
EP4006906A1 (en) Apparatus and method for controlling gradual resistance change in synaptic element
Garbin A variability study of PCM and OxRAM technologies for use as synapses in neuromorphic systems
CN110597487A (en) Matrix vector multiplication circuit and calculation method
CN115719087A (en) Long-short term memory neural network circuit and control method
KR20230090849A (en) Neural network apparatus and electronic system including the same
US11996137B2 (en) Compute in memory (CIM) memory array

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant