WO2001091049A1 - Processing images in a parallel processor network - Google Patents

Processing images in a parallel processor network

Info

Publication number
WO2001091049A1
WO2001091049A1 (PCT/FI2001/000496)
Authority
WO
WIPO (PCT)
Prior art keywords
current
cell
parallel processor
current mirror
processor network
Prior art date
Application number
PCT/FI2001/000496
Other languages
French (fr)
Inventor
Ari Paasio
Asko Kananen
Original Assignee
Ari Paasio
Asko Kananen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ari Paasio, Asko Kananen filed Critical Ari Paasio
Priority to AU2001262382A priority Critical patent/AU2001262382A1/en
Priority to EP01936485A priority patent/EP1292915A1/en
Publication of WO2001091049A1 publication Critical patent/WO2001091049A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N3/065 Analogue means

Abstract

According to the present invention there is provided a method for processing an image with an analog implementation in a parallel processor network. Input information of a line is read into the parallel processor network. Essentially at the same time, processing of the line(s) already read into the parallel processor network is started, and output information of already processed line(s) is read out. A parallel processor network implementing the method comprises cells that are integrated into a chip. These cells comprise an income point for receiving an input current, two current mirrors, of which a first current mirror is arranged to scale an output current of the income point and a second current mirror is arranged to scale the output current of the first current mirror, and at least one outcome point for conducting an output current to an income point of at least one neighboring cell.

Description

PROCESSING IMAGES IN A PARALLEL PROCESSOR NETWORK
Field of the invention
The present invention relates to a method for processing images with an analog implementation in a parallel processor network, and to a parallel processor network and its cell applicable to processing images with an analog implementation.
Background of the invention
A parallel processor network includes several identical processors (called cells) that are arranged in a regular form. Each cell has an income signal, a dynamic state and an outcome signal.
The idea in a parallel processor network for processing images is that one processor, i.e., a cell, corresponds to one image unit (pixel), and therefore it is possible to provide a parallel processor network that is, in theory, capable of very fast processing. A problem in integrating this kind of parallel processor network is that the size of a processor has not been minimized enough to enable integration of tens or hundreds of thousands of processors into a chip without lowering the reliability, since the error rate increases as the area of the circuit increases.
In a parallel processor network, the cells are connected to their neighboring cells (they can also be connected, for example, to all other cells in the parallel processor network), i.e., they affect the dynamic state of their neighboring cells. This effect depends directly on the income of the cell and its own dynamic outcome. These features enable real-time signal processing, because the data processing occurs in all cells at the same time. US 5,519,811 describes a feed-forward type analog neural network transistor level solution for image identification.
In a feed-forward topology, a certain group of income measures is multiplied by corresponding coefficients and the products are summed up. These sums, in turn, form the income measures of the next calculation level. In this topology the results of a calculation level advance only to the next calculation level. In a feed-forward structure (or topology), the multiplying and summing operations can in practice be started as soon as all the income terms needed for the operation are available. However, since all the second calculation level units need all the first calculation level outcomes as income information, the second calculation level can start its evaluation only when all the incomes from the first level are available.
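To make the data dependency concrete, the following is a minimal numerical sketch (in Python, with arbitrary coefficients and sizes, none taken from the patent) of a two-level feed-forward calculation: every second-level unit uses every first-level outcome, so the second level cannot start before the whole first level has finished.

    import numpy as np

    incomes = np.array([0.2, 0.5, 0.1])         # income measures of the first level
    w1 = np.array([[1.0, -0.5, 0.3],            # first-level coefficients (arbitrary)
                   [0.2,  0.4, 0.1]])
    w2 = np.array([[0.7, -1.0]])                # second-level coefficients (arbitrary)

    level1 = w1 @ incomes                       # multiply-and-sum of the first level
    level2 = w2 @ level1                        # can start only once all of level1 exists
    print(level1, level2)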
Since all the second level multiplying and summing operations can be started only when all the incomes from the first level are available in the feed-forward type topology, the parallel processor network implementation may be inconvenient to implement, because the physical memory required by each cell takes up a considerable part of the chip area.
In a digital implementation the accuracy of the calculation is defined by the length of the digital word, i.e., the accuracy of the calculation can be increased by increasing the number of discrete states. This corresponds to increasing the digital word length in the multiplying and summing structures. The area of the multiplying and summing structures, in turn, depends on their inner word length, i.e., on the requirements set by the accuracy of the calculation. In the digital implementation the speed of the calculation depends on the clock frequency used. In case the calculation structure is of a recurrent type, the number of calculations needed is extremely large. In an analog implementation the accuracy of the multiplying calculation can be increased up to a certain limit, for example by increasing the sizes of the transistors or by using compensation methods that are rather complicated. With simple analog multiplying structures about the same accuracy is achieved as with a six- to seven-bit digital implementation. The speed of the analog calculation depends on the time constants of the circuit. Analog real-time evaluation is essentially faster than a digital discrete-time implementation.
In the article M. Anguita, F. J. Pelayo, E. Ros, D. Palomar and A. Prieto, "Focal-Plane and Multiple Chip VLSI Approaches to CNN's", Analog Integrated Circuits and Signal Processing, Vol. 15, pages 263-275, 1998, a connection based on the current mirror structure shown in figure 1 is disclosed as a basis for a cellular neural network structure. In the shown connection, the cell's currents (Isum0) are summed at the income of the current mirror 101, which has two transistors 102 and 103. This current mirror 101 is implemented with a mirroring ratio of 1:1, and the outcome current (Isum0) is conducted to a current limiting structure 104. The gate voltage VL of a transistor 105 of the current limiting structure 104 is set so that the transistor 105 limits the current passing through it to a certain maximum value. The sum current (Isum0) is thus limited to a range in which Isum0 is greater than 0 but smaller than the maximum value of the current. The current passing through a transistor 106 is the outcome current of the cell, which is copied to the neighboring cells described by the template of the cellular neural network: positive multipliers are implemented according to a current mirror structure 107, which has two transistors 106 and 108 with a mirroring ratio of 1:K1, where K1 equals a multiplier of the cellular neural network. The negative multipliers are implemented by first copying the outcome current with a mirror structure 109, which has two transistors 106 and 110 (mirroring ratio 1:1); this current (I0) is conducted, in turn, to a current mirror 111 formed by two transistors 112 and 113 with a mirroring ratio of 1:K2, where K2 equals a multiplier of the cellular neural network.
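As a rough behavioural illustration (not part of the patent), the prior-art cell of figure 1 can be modelled in Python: the summed current is clamped by the limiting transistor and then copied with ratios 1:K1 for positive template entries and 1:K2 for negative entries. The function and values below are hypothetical; real mirrors have finite output resistance and mismatch.

    def prior_art_cell(i_sum0, k1, k2, i_max):
        """Idealized model of the prior-art current-mirror cell of figure 1."""
        # Current limiting structure 104: the sum current stays between 0 and i_max.
        i_lim = min(max(i_sum0, 0.0), i_max)
        # Mirror 107 copies the limited current with ratio 1:K1 (positive multiplier);
        # the mirror chain 109/111 with ratio 1:K2 realises the negative multiplier.
        return k1 * i_lim, -k2 * i_lim

    # Example: a 2 uA sum clamped at 1 uA, template multipliers 2 and 0.5.
    pos, neg = prior_art_cell(2e-6, k1=2.0, k2=0.5, i_max=1e-6)
    print(pos, neg)   # 2e-06, -5e-07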
The several current mirrors of the cell all increase the total area of the cell, which, in turn, increases the total area required by the network on a chip.
Summary of the present invention
It is an object of the present invention to provide a method and an apparatus with which the area of each cell can be decreased. This decrease of the area provides advantages, especially when it is desirable to integrate several processors into the same chip to form a parallel processor network.
According to a basic idea of the present invention there is provided a method for processing an image with an analog implementation in a parallel processor network, wherein cells of the parallel processor network are arranged line by line, the method comprising: a) reading input information of a line into the parallel processor network by supplying an input measure of the line into each income point; b) starting processing of a line previously read into the parallel processor network essentially at the same time, in which processing of the line is implemented by conducting the measure of the income point of each cell of the line from the output measures of the neighborhood; and c) reading out output information of at least one already processed line essentially at the same time with points a) and b), in which reading out output information of an already processed line is implemented by conducting the output measure of each cell of the line into an outcome point of the cell and conducting the output measure further to the next processing structure. Preferably, the input information of the line is read partly from the memory of the line of the cells. Preferably, each cell of the parallel processor network forms one image unit, i.e., a cell.
Preferably, the input information is read into the parallel processor network, in which a linear two-dimensional low-pass filtering is implemented in the processing of the input information.
Preferably, the input measure is an input current, and the output measure is an output current.
Preferably, processing the line read into the parallel processor network is implemented by conducting an output current of the income point of each cell of the line into a first of two current mirrors, scaling the current in the first current mirror, conducting an output current of the first current mirror into a second current mirror of the cell, and scaling the current in the second current mirror.
According to a basic idea of the present invention there is provided a parallel processor network, which is applicable to processing an image with an analog implementation, comprising: several cells that are connected to their neighboring cells and integrated into a chip, wherein each cell has an income point for receiving an input current, which is a sum of the output currents of the neighboring cells and a memory of the cell, two current mirrors, of which a first current mirror is arranged to scale an output current of the income point, and a second current mirror is arranged to scale the output current of the first current mirror, and at least one outcome point for conducting an output current to at least one income point of the neighboring cells.
Preferably, each cell is arranged to receive currents from several neighboring cells and/or to transfer currents to several neighboring cells. According to a basic idea of the present invention there is provided a cell of a parallel processor network, which is applicable to processing an image with an analog implementation, wherein the cell is integrated into a chip and comprises: an income point for receiving an input current which is a sum of the output currents of the neighboring cells and a memory of the cell; two current mirrors, of which a first current mirror is arranged to scale an output current of the income point, and a second current mirror is arranged to scale the output current of the first current mirror; and at least one outcome point for conducting an output current to at least one income point of the neighboring cells.
Preferably, the cell of the parallel processor network is arranged to implement low pass filtering.
Preferably, the outcome point of the first current mirror is connected directly to an income point of a second current mirror. Preferably, the outcome point of the second current mirror is connected directly to an income point of a first current mirror of a second processing structure.
Preferably, the current mirror has at least two transistors . More preferably, at least two transistors in the current mirror are NMOS transistors or at least two transistors in the current mirror are PMOS transistors.
Preferably, the first current mirror further comprises a third transistor, which is arranged to forward a current-form result of the input current scaled by the first current mirror.
A memory of the cell is preferably integrated into a chip.
Preferably, conducting an output current from the outcome point of the cell to each of the neighboring cells is arranged to be achieved from transistors separated from each other.
Preferably, mirroring ratios of current mirrors are constant. Alternatively, mirroring ratios of current mirrors are programmable .
The present invention provides remarkable advantages compared to the prior art, among which can be mentioned, e.g., low power consumption, a small area needed in integration, and speed of calculation.
Brief description of the drawings
The present invention, its implementations and advantages are disclosed in the following by way of example, with reference to the accompanying drawings.
Figure 1 shows a current mirror connection in a cellular neural network for implementing a cell according to prior art .
Figure 2 shows a network according to one embodiment of the present invention.
Figure 3 shows a method according to one embodiment of the present invention in a flowchart .
Figures 4a and 4b show a method according to one embodiment of the present invention in a table.
Figure 5 shows a current mirror according to prior art .
Figure 6 shows a current mirror structure according to one embodiment of the present invention.
Figure 7 shows a current mirror structure according to another embodiment of the present invention.
Figure 8 shows a current mirror structure according to a third embodiment of the present invention.
Figures 9a and 9b show a matrix illustrating a symmetric state in relation to a cell's neighbors, and a matrix resulting from scaling it.
Figure 10 shows a part of a resistive network.
Detailed description of the present invention
In a method according to the present invention a recurrent topology is implemented, wherein the result of a calculation level contains both feed-forward type results and starting information defined at the same level, i.e., the sums received from the multiplying calculations. The stability of such a network and its settling to a final value are in general more difficult to analyze because of the feedback. Usually in a recurrent topology the first calculation level is the only one, and therefore the input of the first calculation level is the input of the whole structure. One advantage of the present invention over the prior art is that the calculation in the recurrent structure can be started as soon as all the input information needed by a calculation unit is available.
Before processing images in a parallel processor network can start, the basic values, which correspond to some feature (e.g., the luminance) of the image to be processed, must be loaded into the cells. One way to load the basic values is to use a separate sensor unit and a separate processor structure, and to handle the information transfer between the two with wide bus structures. The most commonly used way to load image information into a processor network is to write this information into the cells one line at a time. When the image is loaded into the processor network line by line or correspondingly, the so-called rolling according to the present invention may be used to speed up the processing. According to the present invention, the processing of the first part of the image is done while the end part of the image is still being loaded into the processor network.
One algorithm implementing segmentation, which may be used according to the present invention, is disclosed in the article A. Stoffels, T. Roska, L. O. Chua, "Object-Oriented Image Analysis for Very-Low-Bitrate Video-Coding Systems Using the CNN Universal Machine", International Journal of Circuit Theory and Applications, Vol. 25, pages 235-258, 1997, which is incorporated here by reference.
The segmenting algorithm starts by low-pass filtering an image, with which it is possible to remove errors occurring in transfer lines etc. Certain types of noise can be removed from the image to be processed with a linear two-dimensional low-pass filter according to the present invention. A width suitable for a parallel processor network according to the present invention is the full image width, but the height is only a small part of the height of the image. The parallel processor network is shown exemplified in figure 2. The person skilled in the art appreciates that the image processing according to the present invention can also be implemented with a different network structure using the same principles, for example with a resistive network.
The height of the parallel processor network is determined by how many 'active lines' 201 (i.e., cell lines from which the result of the calculation is read) are implemented and by how many neighboring cell lines 202 noticeably affect the result. The purpose of the neighboring cell lines is to create for the active cell lines a neighborhood similar to the one they would have in the whole parallel processor network. The cells of these neighboring cell lines are identical to the cells of the active cell lines. How many neighboring cell lines are needed is defined case by case and depends on the coefficients used in the parallel processor network. In the solution according to the figure, 5 lines are used as the number of neighboring cell lines and 24 lines as the number of active lines. The implementation of the control affects the number of active cells (the more lines, the fewer writing operations need to be done). In addition to the active cell lines, the parallel processor network is surrounded by edge cells, which are meant to provide, for example, a so-called zero-flux neighborhood for the parallel processor network, in which the input information and states of the cells closest to the edge are copied to the corresponding values of the edge cells, thus providing a full neighborhood for the cells closest to the edge.
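For readers more comfortable with software, the zero-flux boundary described above can be emulated by replicating the outermost rows and columns of a line block; the snippet below is only an illustrative sketch using NumPy's edge padding, not the hardware edge-cell circuit.

    import numpy as np

    # Hypothetical block of cell-line values (height x width).
    lines = np.arange(12, dtype=float).reshape(3, 4)

    # Zero-flux neighborhood: edge cells copy the values of the closest
    # interior cells, giving every cell a full neighborhood.
    padded = np.pad(lines, pad_width=1, mode="edge")
    print(padded.shape)   # (5, 6)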
The function of the parallel processor network according to the present invention can be described as a rolling process, part of which is shown in a block diagram in figure 3. The description starts in a situation where input information is read for the 11 first lines 301. The input information is read at least partly from the physical memory of the cells of the line. Depending on whether the process is at the beginning or in the middle of the image to be processed, the input information for the five first lines is written from different places. If the process is at the beginning of the image, the information for the five first lines is the same as for the sixth line (i.e., the first line of the image). If the process is in the middle of the image, these first lines get the results received from the five last active cell lines in the previous round.
At the same time as the input information for the 12th line is read into the parallel processor network, the evaluation (i.e., calculation) of the sixth line is started 302, which means that the calculation of the lines 1-11 is started, as their states begin to affect the states of the neighborhood. Thereafter, on the next clock period of loading, the input information of the 13th line is read and the calculation of the 12th line is started 303, wherein the lines 1-12 are in the evaluation. On the next clock period, in turn, the input information of the next line (i.e., line 14) is read and the calculation of the 13th line is started 304. At the same time, output information of the lines 6, 7 and 8 is read to the next phase 304. When reading input information for the 15th line, the calculation of the 14th line is started, the calculation of the 1st line is ended and output information of the lines 7, 8 and 9 is read to the next phase 305. This functioning is also shown in figure 4. The rolling according to the figure rolls six times (when 144 pixels is used as the height of the image and 24 active cell lines are used), until the end of the image is reached, when the input information of the last line of the image is read, in addition to the last active line, into the five closest neighboring cell lines (corresponding to the action implemented at the beginning of the image).
Since the result of the low-pass filtering according to the present invention is finished one line at a time, the next phase of the algorithm can be carried out at the same time as the low-pass filtering. This is possible if the operation in the next phase is such that only the closest neighbors affect the result. When three lines of the low-pass part are read to the next phase at a time, the result of the next phase is finished almost at the same time as the low-pass filtering is completed.
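The rolling schedule of figures 3 and 4 can also be summarised in software form. The sketch below (Python) prints, per clock period, which line is loaded, which line starts its evaluation and which lines are handed to the next phase; the offsets follow the worked example above (11 lines pre-loaded, evaluation of line 6 starting with the load of line 12, three lines read out per period from line 14 onwards) and are illustrative only, not fixed by the method.

    def rolling_schedule(first_block=11, n_steps=5):
        """Toy model of the rolling schedule in figures 3 and 4."""
        events = [f"301: load input information for lines 1..{first_block}",
                  f"302: load line {first_block + 1}, start evaluation of line 6"]
        for load in range(first_block + 2, first_block + 2 + n_steps):
            msg = f"load line {load}, start evaluation of line {load - 1}"
            if load >= 14:   # read-out of processed lines runs three lines at a time
                msg += f", read out lines {load - 8}..{load - 6}"
            events.append(msg)
        return events

    for event in rolling_schedule():
        print(event)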
In figure 5 there is shown a current mirror structure which has a current as its input and output measure. In the figure, NMOS transistors 501 and 502 form the current mirror. An output current I4 of the current mirror depends on the input current according to the formula K*I4 = I3, in which I3 is the input current of the mirror and K is the mirroring ratio, which is obtained from the scaling of the dimensions of the transistors 501 and 502: K = (W2*L1)/(L2*W1), in which W1 and W2 are the widths of the transistors 501 and 502, and L1 and L2 are their lengths. The current mirror may have several output currents, each of which is implemented with separate components. In the figure, the NMOS current mirror has two output currents (I4 and I5), the drain currents of the transistors 502 and 503. The current mirror can also be implemented with PMOS transistors.
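A quick numerical illustration of these relations, following the formulas exactly as printed above (K*I4 = I3 and K = (W2*L1)/(L2*W1)); the dimensions are arbitrary example values, not taken from the patent.

    def mirroring_ratio(w1, l1, w2, l2):
        """K = (W2*L1)/(L2*W1), obtained from the transistor dimensions."""
        return (w2 * l1) / (l2 * w1)

    def output_current(i3, k):
        """Output current from the printed relation K*I4 = I3, i.e. I4 = I3/K."""
        return i3 / k

    # Hypothetical dimensions in micrometres.
    k = mirroring_ratio(w1=1.0, l1=1.0, w2=3.0, l2=1.0)
    print(k, output_current(i3=1e-6, k=k))   # K = 3.0 and the corresponding I4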
Figure 6 shows a current mirror structure according to the present invention, which can be used to implement low-pass filtering for one pixel. The structure according to the present invention has only two current mirrors, of which the first current mirror 601 comprises transistors 602 and 603, and the second current mirror 604 comprises transistors 605 and 606. The transistors in the current mirrors can be, e.g., NMOS transistors or PMOS transistors. It is not essential according to the present invention whether there are two NMOS or PMOS transistors in the first and second current mirrors, as long as there are transistors in these current mirrors. If the first current mirror comprises two (or more) NMOS transistors, the second current mirror should have two (or more) PMOS transistors, and vice versa. In the first current mirror 601 the mirroring ratio is K3:1.
An income point 607 of a cell receives the input current, which is the sum (Iin) of the output currents received from the neighboring cells and the memory of the cell. The information coming from the memory of the cell is marked as I6. The memory of the cell may either be integrated into the chip or the cell may have a so-called outer memory. The drain of the transistor 602 functions as an income point for the first current mirror 601, receiving an output current from the income point 607 of the cell. The first current mirror is arranged to scale the output current of the income point, after which an output current of the first current mirror 601 is conducted to said second current mirror 604, which has a mirroring ratio of 1:K4. An outcome point of the first current mirror 601 is connected directly to the income point of the second current mirror 604.
The second current mirror is arranged to scale the output current of the first current mirror. An output current (Iout) of the current mirror 604, which is the drain current of the transistor 606, is conducted to the corresponding income points of the neighboring processors (not shown in the figure). The output point of the second current mirror of the first processing structure according to the present invention is preferably connected directly to an income point of the first current mirror of a second processing structure. The second current mirror 604 has several output transistors, one of which, 606, is shown in figure 6, having output currents (Iout) that are conducted to the corresponding income points of the neighboring processors. The number of the outcome points is N.
In the calculations, the parallel processor network (and its cell) according to the present invention is compared to a resistive network, which is known to the person skilled in the art, and in which R0 is the resistance to the constant potential (ground), R1 is the resistance to the neighboring cells, G0 is the conductance corresponding to the resistance to the constant potential, and G1 is the conductance corresponding to the resistance to the neighboring cells. The transfer function of the current mirror structure's filter is defined by the mirroring ratios K3 and K4. If (K3 - N*K4)/K4 = G0/G1 = R1/R0, the transfer function of the current mirror filter in relation to position corresponds to the transfer function of the resistive network. When the preferred current mirror structure according to the present invention is used, a current-form result 608 is obtained, which the first current mirror has scaled from the input current and which can be read from the drain of the transistor 609. This is one advantage of the present invention compared to the prior art. The mirroring ratios K3 and K4 can either be constants or they can be programmable, depending on whether the algorithm performs one or several different filtering operations. When constant mirroring ratios are used, the calculation multiplier is based on the ratio of the transistor geometries. A parallel connection structure can also be made programmable, in which case the control signals function with digital on/off logic. These control measures define, through switches, which parallel-connected structures affect the outcome of the structure.
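A small sketch of the design relation quoted above, (K3 - N*K4)/K4 = G0/G1: given the desired conductance ratio and the number of outcome points N, one mirroring ratio follows from the other. The function names and example values below are illustrative assumptions, not taken from the patent.

    def k3_for_target(k4, g0_over_g1, n_outputs):
        """Solve (K3 - N*K4)/K4 = G0/G1 for K3, given K4, G0/G1 and N."""
        return k4 * (g0_over_g1 + n_outputs)

    def position_ratio(k3, k4, n_outputs):
        """The ratio (K3 - N*K4)/K4 realised by a given mirror pair."""
        return (k3 - n_outputs * k4) / k4

    # Hypothetical cell with N = 4 outcome points and a target G0/G1 = 2.
    k4 = 1.0
    k3 = k3_for_target(k4, g0_over_g1=2.0, n_outputs=4)   # K3 = 6
    print(k3, position_ratio(k3, k4, 4))                  # 6.0, 2.0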
Further advantages of the present invention are that it also allows structures that are non-symmetric in relation to position (above, only symmetric structures have been presented) and that negative conductances can be replaced with a current mirror structure.
The cells of the parallel processor network according to the present invention are preferably integrated into a chip such that each cell is arranged to receive currents from numerous neighboring cells and/or to transfer currents to numerous neighboring cells.
Figure 7 shows a current mirror structure according to another embodiment of the present invention, in which it is assumed that the conductances are positive. A current Isum incoming to the NMOS current mirror 701, which has transistors 702 and 703, is a sum of three currents and can be presented with the formula Isum = I8*Ka + I9*Kb + I6, in which the currents I8 and I9 correspond to the voltages V1 and V2 in the case of the resistive network. The sum of the three currents is formed in the structure according to figure 7 from the current I8, which is conducted from the current mirror 705 (which comprises transistors 706 and 707), the current I9, which is conducted from the current mirror 708 (which comprises transistors 709 and 710), and the current I6, which can be conducted either from a memory integrated into the chip or from an outer memory. The state measure I7 is an output current of the NMOS current mirror 701, which in the case according to figure 7 can be conducted from the outcome points of the transistors 703 and 704 to the income points of the next device (which can be the next current mirror or the next cell), and which can also be written in the form I7 = Isum / K5 = (I8*Ka + I9*Kb + I6) / K5. From this it can be noted that the output current has the same form as in the corresponding resistive network (V0 = (V1*G1 + V2*G2 + I) / (G0 + G1 + G2)). In order for the value of the current I7 to correspond to the result V0 of the calculation of the resistive network, the mirroring ratios of the current mirrors should be chosen suitably. In this case, it is chosen that Ka = G1, Kb = G2, K5 = G0 + G1 + G2. Since the relations of the conductances G1 and G2 to the conductance G0 define the transfer function of the resistive network in relation to position, the mirroring ratios must be defined through the conductance relations. For implementing this, it is assumed that G1 = G2, from which it follows that Ka = Kb = G1, K5 = G0 + 2*G1.
Solving the latter formula for G0 then gives the result G0/G1 = (K5 - 2*G1)/G1 = (K5 - 2*Ka)/Ka. With the same principle the current mirrors can be set to correspond to the connection topology of the resistive network, and by changing the mirroring ratios the transfer function of the current mirror implementation can be set to correspond to the transfer function of the resistive network.
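A numerical cross-check of this mapping, under the stated choices Ka = Kb = G1 (= G2) and K5 = G0 + 2*G1; the conductances, neighbor values and source term below are arbitrary test numbers, not values from the patent.

    def resistive_node(v1, v2, i_src, g0, g1, g2):
        """Node result of the resistive network: V0 = (V1*G1 + V2*G2 + I)/(G0+G1+G2)."""
        return (v1 * g1 + v2 * g2 + i_src) / (g0 + g1 + g2)

    def mirror_node(i8, i9, i6, ka, kb, k5):
        """Current-mirror counterpart: I7 = (I8*Ka + I9*Kb + I6)/K5."""
        return (i8 * ka + i9 * kb + i6) / k5

    g0, g1 = 1.0, 0.5                  # arbitrary conductances, with G2 = G1
    ka = kb = g1
    k5 = g0 + 2.0 * g1
    v1, v2, i_src = 0.8, 0.3, 0.1      # neighbor states and source term

    print(resistive_node(v1, v2, i_src, g0, g1, g1))   # 0.325
    print(mirror_node(v1, v2, i_src, ka, kb, k5))      # 0.325, i.e. the same result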
In the embodiment presented above it was assumed that all the conductance values of the resistive network are positive. However, in some applications there is a need for negative conductances for connecting the node points. In figure 8 there is shown a current mirror structure for the case where, in the previous embodiment, the conductance G1 is negative; then the current passing through the conductance G0 can be written in the form V0*G0 = (V0 - V1)*|G1| + (V2 - V0)*G2 + I, from which V0 can be solved as V0 = (-V1*|G1| + V2*G2 + I) / (G0 - |G1| + G2). In this case, the effect of the negative conductance is implemented by conducting the effect of a neighboring state I10 to the state I7 through a current mirror 801, which comprises 3 transistors (802, 803 and 804). Then, since the sum Isum = K6*I10 = (K6/Ka1)*I10_1, it follows that I10_1 = Ka1*I10. Then Isum3 = -I10*Ka1 + I11*Kb1 + I6, and I7 = Isum3/K6 = (-I10*Ka1 + I11*Kb1 + I6) / K6. This last formula can be matched with the formula of the resistive network. In figure 8, a current mirror 805, and the transistors 806, 807 and 808 relating to it, corresponds to the current mirror 701, and the transistors 702, 703 and 704 respectively relating to it, shown in figure 7. In figure 8, a current mirror 809, and the transistors 810 and 811 relating to it, corresponds to the current mirror 708, and the transistors 709 and 710 respectively relating to it, shown in figure 7.
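The same cross-check can be repeated for the negative-conductance case. It is assumed here, by analogy with the positive case (the text only states that the last formula can be matched with the resistive-network formula), that Ka1 = |G1|, Kb1 = G2 and K6 = G0 - |G1| + G2; all numeric values are arbitrary.

    g0, g1_abs, g2 = 1.0, 0.25, 0.5               # |G1| is the magnitude of the negative conductance
    v1, v2, i_src = 0.8, 0.3, 0.1                 # neighbor states and source term
    ka1, kb1, k6 = g1_abs, g2, g0 - g1_abs + g2   # assumed mapping of the mirroring ratios

    v0 = (-v1 * g1_abs + v2 * g2 + i_src) / (g0 - g1_abs + g2)   # resistive-network result
    i7 = (-v1 * ka1 + v2 * kb1 + i_src) / k6                     # mirror-structure result, Isum3/K6
    print(v0, i7)                                 # identical under the assumed mapping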
When the multipliers of the cellular neural network guarantee that all the states remain in the linear region during processing, the current mirror structure according to the present invention can be used for implementing a cellular neural network structure. This kind of cellular neural network can be described as a resistive network, from which the corresponding mirroring ratios are calculated according to the principles described above.
Since the transfer function of the resistive network in relation to position depends only on the conductance ratios, the conductance values can be scaled such that some conductance value is always constant when it does not need to be programmable. This, in turn, means constant mirroring ratios in the corresponding structures of a current mirror implementation, which remarkably reduces the need for programmability of the structure in case the transfer function is symmetric in relation to position. One common filtering topology, stated with the multipliers of the parallel processor network, is shown in the matrix of figure 9a. The multipliers of this matrix can be scaled to correspond to the matrix shown in figure 9b. If the multiplier matrix is symmetric, the multipliers affecting the neighbors can always be scaled to the value 1, and then, by adjusting one multiplier, the self-feedback, all the transfer functions within the programmable limits can be implemented.
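As an illustration of this scaling argument (the actual matrices of figures 9a and 9b are not reproduced here), a hypothetical symmetric 3x3 multiplier matrix can be normalised so that the neighbor multipliers become 1 and only the centre, self-feedback, term remains to be programmed:

    import numpy as np

    # Hypothetical symmetric filtering template; not the matrix of figure 9a.
    template = np.array([[0.0, 0.5, 0.0],
                         [0.5, 3.0, 0.5],
                         [0.0, 0.5, 0.0]])

    neighbour = template[0, 1]            # common multiplier of the neighbors
    scaled = template / neighbour         # neighbors become 1, centre becomes 6
    print(scaled)
    # Only the centre (self-feedback) multiplier now needs to be programmable;
    # the other ratios can be fixed by transistor geometry.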
In the embodiments described earlier it is assumed that the result is one obtained from a settled parallel processor network. However, since the output measures of a parallel processor network according to one embodiment of the present invention are available in current form all the time, samples of the state of the parallel processor network can also be taken during the calculation. In this case, the result does not necessarily correspond to the result obtained from the linear resistive network according to the prior art, but this result obtained from a current mirror structure can also be sufficient in some applications.
Low pass filtering according to the present invention can also be implemented, e.g., with a resistive network, a part of which is shown in figure 10. In the resistive network, one node, i.e., one cell, corresponds to each image unit (pixel). These nodes are connected to a constant potential (ground) with resistances of value R0, and the corresponding conductance G0 is obtained as the inverse of the resistance value, G0 = 1/R0. Further, the nodes are connected to neighboring nodes with resistances of value R1, with corresponding conductance G1 = 1/R1. The image information is brought to the nodes, e.g., as a current that is directly proportional to the intensity of the image unit. By setting the resistance ratio R0/R1, the transfer function of the filtering can be controlled in relation to position. Normally, it is assumed that the image information remains unchanged for a certain time, so that the resistive network is able to settle into a stable state, whereupon the voltages (Vx) of the nodes are the result of the processing. Therefore, the filtering happens only in relation to position, not in relation to time. When the result of the filtering has been read into a memory, new image information may be brought to the filter by changing the currents of the current sources to correspond to the new image information.
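As an illustration of this resistive network view, the sketch below (illustrative only; it uses a short one-dimensional chain of nodes instead of the two-dimensional network of figure 10) forms the node equations V*(G0 + sum of neighbor conductances) = G1*(sum of neighbor voltages) + I and solves them for the node voltages; the ratio R0/R1 then controls how strongly the input currents are smoothed.

import numpy as np

# One-dimensional resistive low pass filter sketch: every node is connected
# to ground through R0 (G0 = 1/R0) and to its neighbors through R1 (G1 = 1/R1).
def lowpass_1d(currents, R0, R1):
    G0, G1 = 1.0 / R0, 1.0 / R1
    n = len(currents)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = G0                        # conductance to ground
        for j in (i - 1, i + 1):            # connections to existing neighbors
            if 0 <= j < n:
                A[i, i] += G1
                A[i, j] -= G1
    return np.linalg.solve(A, np.asarray(currents, dtype=float))

# A step-like line of pixel currents; a larger R0/R1 ratio smooths more.
line = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
print(lowpass_1d(line, R0=1.0, R1=0.5))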

Claims

1. A method for processing an image with an analog implementation in a parallel processor network, wherein cells of the parallel processor network are arranged line by line, the method comprising: a) reading input information of a line into the parallel processor network by supplying an input measure of the line into each income point; b) starting processing of the line previously read into the parallel processor network essentially at the same time, in which processing of the line is implemented by conducting a measure of an income point of each cell of the line from output measures of the neighborhood; and c) reading out output information of at least one already processed line essentially at the same time with points a) and b), in which reading out the output information of the already processed line is implemented by conducting the output measure of each cell of the line into an outcome point of the cell and conducting the output measure further to the next processing structure.
2. A method according to claim 1, wherein the input information of the line is read partly from the memory of the line of the cells.
3. A method according to claim 2, wherein each cell of the parallel processor network forms one image unit, i.e., a pixel.
4. A method according to any one of claims 1 to 3, wherein the input information is read into the parallel processor network, in which a linear two-dimensional low pass filtering is implemented in the processing of the input information.
5. A method according to any one of claims 1 to 4, wherein the input measure is an input current, and the output measure is an output current.
6. A method according to any one of claims 1 to 5, wherein processing the line read into the parallel processor network is implemented by conducting an output current of an income point of each cell of the line into a first of two current mirrors, scaling the current in the first current mirror, conducting an output current of the first current mirror into a second current mirror of the cell, and scaling the current in the second current mirror.
7. A parallel processor network, which is applicable to processing an image with an analog implementation, comprising: several cells that are connected to their neighboring cells and integrated into a chip, wherein each cell has an income point for receiving an input current, which is a sum of the output currents of the neighboring cells and a memory of the cell, two current mirrors, of which a first current mirror is arranged to scale an output current of the income point, and a second current mirror is arranged to scale the output current of the first current mirror, and at least one outcome point for conducting an output current to at least one income point of the neighboring cells.
8. A parallel processor network according to claim 7, wherein each cell is arranged to receive currents from several neighboring cells and/or to transfer currents to several neighboring cells.
9. A cell of a parallel processor network, which is applicable to processing an image with an analog implementation, wherein the cell is integrated into a chip and comprises: an income point for receiving an input current which is a sum of the output currents of the neighboring cells and a memory of the cell; two current mirrors, of which a first current mirror is arranged to scale an output current of the income point, and a second current mirror is arranged to scale the output current of the first current mirror; and at least one outcome point for conducting an output current to at least one income point of the neighboring cells.
10. A cell of a parallel processor network according to claim 9, wherein the cell is arranged to implement low pass filtering.
11. A cell of a parallel processor network according to claim 9 or 10, wherein the outcome point of the first current mirror is connected directly to an income point of the second current mirror.
12. A cell of a parallel processor network according to any one of claims 9 to 11, wherein the outcome point of the second current mirror is connected directly to an income point of a first current mirror of a second processing structure.
13. A cell of a parallel processor network according to any one of claims 9 to 12, wherein the current mirror has at least two transistors.
14. A cell of a parallel processor network according to claim 13, wherein at least two transistors in the current mirror are NMOS transistors.
15. A cell of a parallel processor network according to claim 13, wherein at least two transistors in the current mirror are PMOS transistors.
16. A cell of a parallel processor network according to any one of claims 9 to 15, wherein the first current mirror further comprises a third transistor.
17. A cell of a parallel processor network according to claim 16, wherein the third transistor is arranged to forward, in current form, the result of the input current scaled by the first current mirror.
18. A cell of a parallel processor network according to any one of claims 9 to 17, wherein the memory of the cell is integrated into a chip.
19. A cell of a parallel processor network according to any one of claims 9 to 18, wherein conducting an output current from the outcome point of the cell to each of the neighboring cells is arranged to be achieved from transistors separate from each other.
20. A cell of a parallel processor network according to any one of claims 9 to 19, wherein the mirroring ratios of the current mirrors are constant.
21. A cell of a parallel processor network according to any one of claims 9 to 19, wherein the mirroring ratios of the current mirrors are programmable.
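Purely as a behavioral illustration of the claimed cell structure (ideal current mirrors are assumed, and k1 and k2 are hypothetical names for the mirroring ratios of the first and second current mirror), one cell can be modeled as follows:

# Behavioral sketch of one cell: the income point sums the neighbors'
# output currents with the memory current, and two cascaded current
# mirrors scale the result before it is conducted to the outcome point.
def cell_output(neighbor_currents, memory_current, k1, k2):
    income = sum(neighbor_currents) + memory_current   # income point
    mirrored_once = k1 * income                        # first current mirror
    return k2 * mirrored_once                          # second current mirror

# Example with three neighbors and mirroring ratios 0.5 and 2.0:
print(cell_output([0.1, 0.2, 0.3], memory_current=0.05, k1=0.5, k2=2.0))  # 0.65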
PCT/FI2001/000496 2000-05-22 2001-05-22 Processing images in a parallel processor network WO2001091049A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2001262382A AU2001262382A1 (en) 2000-05-22 2001-05-22 Processing images in a parallel processor network
EP01936485A EP1292915A1 (en) 2000-05-22 2001-05-22 Processing images in a parallel processor network

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FI20001221 2000-05-22
FI20001221A FI112884B (en) 2000-05-22 2000-05-22 Image processing in a parallel processor network, suitable network and cell

Publications (1)

Publication Number Publication Date
WO2001091049A1 true WO2001091049A1 (en) 2001-11-29

Family

ID=8558430

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2001/000496 WO2001091049A1 (en) 2000-05-22 2001-05-22 Processing images in a parallel processor network

Country Status (4)

Country Link
EP (1) EP1292915A1 (en)
AU (1) AU2001262382A1 (en)
FI (1) FI112884B (en)
WO (1) WO2001091049A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0308008A2 (en) * 1987-09-16 1989-03-22 Philips Electronics Uk Limited A method of and a circuit arrangement for processing sampled analogue electrical signals
US5155802A (en) * 1987-12-03 1992-10-13 Trustees Of The Univ. Of Penna. General purpose neural computer
US5131072A (en) * 1988-08-31 1992-07-14 Fujitsu, Ltd. Neurocomputer with analog signal bus
US5204549A (en) * 1992-01-28 1993-04-20 Synaptics, Incorporated Synaptic element including weight-storage and weight-adjustment circuit
US5914868A (en) * 1996-09-30 1999-06-22 Korea Telecom Multiplier and neural network synapse using current mirror having low-power mosfets

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HUERTAS J L ET AL: "Analog VLSI implementation of cellular neural networks. In cellular neural networks and their application CNNA-92 Proceedings", 1992 IEEE, 14 October 1992 (1992-10-14) - 16 October 1992 (1992-10-16), pages 141 - 150, XP002901896 *
YAZDI N ET AL: "Pipelined analog multi-layer feedforward neural networks", 1993 IEEE INTERNATIONAL SYMPOSIUM, vol. 4, 3 May 1993 (1993-05-03) - 6 May 1993 (1993-05-06), pages 2768 - 2771, XP002901895 *

Also Published As

Publication number Publication date
FI20001221A (en) 2001-11-23
AU2001262382A1 (en) 2001-12-03
EP1292915A1 (en) 2003-03-19
FI112884B (en) 2004-01-30

Similar Documents

Publication Publication Date Title
Kinget et al. A programmable analog cellular neural network CMOS chip for high speed image processing
Serrano et al. A modular current-mode high-precision winner-take-all circuit
US10424370B2 (en) Sensor device with resistive memory for signal compression and reconstruction
DE102011054997A1 (en) Sense amplifier with negative capacitance circuit and device with the same
JPH0650533B2 (en) Electronic circuit and template matching circuit
Harrer et al. A current-mode DTCNN universal chip
EP0834115B1 (en) Circuit for producing logic elements representable by threshold equations
Anguita et al. Analog CMOS implementation of a discrete time CNN with programmable cloning templates
DE102022100200A1 (en) Compute-In-Memory storage array, CIM storage array
Baktir et al. Analog CMOS implementation of cellular neural networks
WO2001091049A1 (en) Processing images in a parallel processor network
Sah et al. Memristor bridge circuit for neural synaptic weighting
Krieg et al. Analog signal processing using cellular neural networks
CN108449556B (en) Cross-line time delay integration method and device and camera
JP3479333B2 (en) Method and apparatus for performing a Gaussian recursive operation on a pixel set of an image
CN115458005A (en) Data processing method, integrated storage and calculation device and electronic equipment
US20050231398A1 (en) Time-mode analog computation circuits and methods
Sunayama et al. Cellular νMOS circuits performing edge detection with difference-of-Gaussian filters
Morie et al. A 1-D CMOS PWM cellular neural network circuit and resistive-fuse network operation
Halonen et al. Programmable analog VLSI CNN chip with local digital logic
Li On frequency weighted minimal L/sub 2/sensitivity of 2-D systems using Fornasini-Marchesini LSS model
Wiehler et al. A one-dimensional analog VLSI implementation for nonlinear real-time signal preprocessing
US6100741A (en) Semiconductor integrated circuit utilizing insulated gate type transistors
DE102019119744A1 (en) CONFIGURABLE PRECISE NEURONAL NETWORK WITH DIFFERENTIAL BINARY, NON-VOLATILE STORAGE CELL STRUCTURE
TWI828206B (en) Memory device and operation method thereof for performing multiply-accumulate operation

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ CZ DE DE DK DK DM DZ EC EE EE ES FI FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2001936485

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2001936485

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Ref document number: 2001936485

Country of ref document: EP